Description of problem:
Kubefed webhook defaulting is not correct after changing the kubefed controller manager deployment scope.

Version-Release number of selected component (if applicable):
KubeFed controller-manager version: version.Info{Version:"v4.2.0", GitCommit:"b8ae65cee603cc9c746911debd3dc23b922222d8", GitTreeState:"clean", BuildDate:"2019-08-13T23:14:02Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"linux/amd64"}

How reproducible:
100%

Steps to Reproduce:
1. Install the kubefed-operator.
2. Install a cluster-scoped kubefed webhook:
$ oc get kubefedwebhook kubefedwebhook-resource -n default -oyaml
apiVersion: operator.kubefed.io/v1alpha1
kind: KubeFedWebHook
metadata:
  creationTimestamp: "2019-08-15T05:05:25Z"
  generation: 1
  name: kubefedwebhook-resource
  namespace: default
  resourceVersion: "578751"
  selfLink: /apis/operator.kubefed.io/v1alpha1/namespaces/default/kubefedwebhooks/kubefedwebhook-resource
  uid: 4a8ac06d-bf1a-11e9-a616-42010a000004
spec:
  scope: Cluster
status:
  version: 0.1.0
3. Create a project kube-federation-system.
4. Install a cluster-scoped kubefed controller manager:
$ oc get kubefedconfig kubefed -n kube-federation-system -ojson | jq .spec.scope
"Cluster"
5. Delete the kubefed object under kube-federation-system to uninstall the kubefed controller manager.
6. Create a new project federation-system.
7. Try to install a namespace-scoped kubefed controller manager:
$ oc get kubefed kubefed-resource -n federation-system -ojson | jq .spec.scope
"Namespaced"
8. Check the kubefedconfig.

Actual results:
$ oc get kubefedconfig kubefed -n federation-system -ojson | jq .spec.scope
"Cluster"

Expected results:
The scope should be "Namespaced".

Additional info:
There is a mismatch between the intent expressed in the operator's KubeFed CR and the KubeFedConfig that the kubefed controller currently defaults when none exists. I see a few options we can take, depending on whether there is any precedent:

1. Delete the "kubefed" KubeFedConfig when uninstalling the kubefed controller manager. If it is normal practice for an operator to remove the resources associated with the component it manages upon uninstallation, we should follow suit. Otherwise, see the next options.
2. If a KubeFedConfig already exists whose scope does not match the scope in the operator's KubeFed CR, overwrite it.
3. Same as 2, except report an error instead of overwriting it.
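A rough sketch of option 3 (this is not actual kubefed code; the type and function names below are hypothetical stand-ins for the real API types), where the operator compares the scope of a pre-existing KubeFedConfig against the scope requested in its KubeFed CR and surfaces an error on mismatch rather than overwriting:

```go
package main

import "fmt"

// Hypothetical, simplified stand-ins for the scope fields of the two
// resources; the real types live in the kubefed and kubefed-operator APIs.
type KubeFedConfigSpec struct{ Scope string }
type KubeFedSpec struct{ Scope string }

// checkScope sketches option 3: instead of silently keeping (or
// overwriting) a leftover KubeFedConfig, report an error when its scope
// disagrees with the scope requested in the operator's KubeFed CR.
func checkScope(existing KubeFedConfigSpec, desired KubeFedSpec) error {
	if existing.Scope != desired.Scope {
		return fmt.Errorf("existing KubeFedConfig scope %q does not match requested scope %q",
			existing.Scope, desired.Scope)
	}
	return nil
}

func main() {
	// The exact mismatch from this bug: a leftover Cluster-scoped
	// KubeFedConfig versus a Namespaced request.
	err := checkScope(KubeFedConfigSpec{Scope: "Cluster"}, KubeFedSpec{Scope: "Namespaced"})
	fmt.Println(err)
}
```

Reporting the error (option 3) rather than overwriting (option 2) keeps the operator from clobbering a config a user may have created deliberately, at the cost of requiring manual cleanup.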
The kubefedconfig webhook does not perform any defaulting of the deployment scope, since that is a required field that must be provided by the user. Currently the kubefed controller manager has a workaround that sets this scope from the DEFAULT_KUBEFED_SCOPE environment variable, which is set by the operator. I am updating the title to better reflect the issue reported in this bug.
Ping. The latest downstream kubefed images should have this fixed. Aniket.
Failed verification with KubeFed controller-manager version: version.Info{Version:"v4.2.0", GitCommit:"7f002471b9dd8366e1e0f080b46bc79864682f71", GitTreeState:"clean", BuildDate:"2019-08-25T20:09:07Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"linux/amd64"}

Still have the same issue.
Verified with:
kubefedctl version: version.Info{Version:"v4.2.0", GitCommit:"d33c8586092041e14d47555b464ede2e99b8bb5f", GitTreeState:"clean", BuildDate:"2019-09-09T18:21:24Z", GoVersion:"go1.12.8", Compiler:"gc", Platform:"linux/amd64"}
KubeFed controller-manager version: version.Info{Version:"v4.2.0", GitCommit:"d33c8586092041e14d47555b464ede2e99b8bb5f", GitTreeState:"clean", BuildDate:"2019-09-09T19:12:18Z", GoVersion:"go1.12.8", Compiler:"gc", Platform:"linux/amd64"}
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922