Description of problem:
kubefedconfig.spec.scope field is null when installing a cluster scoped kubefed operator

Version-Release number of selected component (if applicable):
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-06-19-210910   True        False         3h52m   Cluster version is 4.2.0-0.nightly-2019-06-19-210910

kubefed-operator: branch release-4.2 (10f2d7f25bbbb9d81406931576da3c2bb4fe2e98)

How reproducible:
100%

Steps to Reproduce:
1. Install a cluster scoped kubefed operator with cmd:
   ./scripts/install-kubefed.sh -n federation-test -d cluster -s Cluster
2. Check kubefedconfig object

Actual results:
$ oc get kubefedconfig kubefed -n federation-test -ojson|jq .spec.scope
""

Expected results:
It should be "Cluster".

Additional info:
$ ./scripts/install-kubefed.sh -n federation-test -d cluster -s Cluster
NS=federation-test
LOC=cluster
Operator Image Name=quay.io/sohankunkerkar/kubefed-operator:v0.1.0
Scope=Cluster
namespace/federation-test created
customresourcedefinition.apiextensions.k8s.io/kubefeds.operator.kubefed.io created
kubefed.operator.kubefed.io/kubefed-resource created
Reading the image name and sed it in
deployment.apps/kubefed-operator created
Reading the namespace in clusterrolebinding and sed it in
rolebinding.rbac.authorization.k8s.io/kubefed-operator created
clusterrolebinding.rbac.authorization.k8s.io/kubefed-operator-rolebinding created
role.rbac.authorization.k8s.io/kubefed-operator created
clusterrole.rbac.authorization.k8s.io/kubefed-operator created
serviceaccount/kubefed-operator created
customresourcedefinition.apiextensions.k8s.io/clusterpropagatedversions.core.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/dnsendpoints.multiclusterdns.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/domains.multiclusterdns.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/federatedservicestatuses.core.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/federatedtypeconfigs.core.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/ingressdnsrecords.multiclusterdns.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/kubefedclusters.core.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/kubefedconfigs.core.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/propagatedversions.core.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/replicaschedulingpreferences.scheduling.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/servicednsrecords.multiclusterdns.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/kubefeds.operator.kubefed.io unchanged
Deployed all the operator yamls for kubefed-operator in the cluster

$ oc get kubefed kubefed-resource -n federation-test -ojson|jq .spec.scope
"Cluster"

$ oc get pod -n federation-test
NAME                                          READY   STATUS    RESTARTS   AGE
kubefed-controller-manager-64684cb98f-2vn52   1/1     Running   0          14m
kubefed-operator-7cc785c7b4-gx69p             1/1     Running   0          14m

$ oc get pod -n federation-test -ojson|jq .items[].spec.containers[].image
"quay.io/anbhat/kubefed:v0.1.0-rc2-PR971"
"quay.io/sohankunkerkar/kubefed-operator:v0.1.0"
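A possible interim workaround (not part of the original report, and untested here) would be to set the scope on the KubeFedConfig by hand until the operator fills it in correctly. A minimal sketch, assuming the KubeFedConfig is named kubefed in the federation-test namespace as above; the operator or controller may overwrite a manual edit when it reconciles:

$ oc patch kubefedconfig kubefed -n federation-test --type=merge -p '{"spec":{"scope":"Cluster"}}'
$ oc get kubefedconfig kubefed -n federation-test -ojson|jq .spec.scope

The second command should then return "Cluster" if the patch was accepted.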
For comparison, installing a namespace scoped kubefed operator with:

$ ./scripts/install-kubefed.sh -n federation-test -d cluster

yields the correct scope in kubefedconfig:

$ oc get kubefedconfig kubefed -n federation-test -ojson|jq .spec.scope
"Namespaced"
This should be fixed upstream by https://github.com/kubernetes-sigs/kubefed/pull/1015 and https://github.com/kubernetes-sigs/kubefed/pull/941, and these fixes have been released as of rc3.
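As a quick check (a sketch, not from the original report), one can confirm which kubefed image the controller is actually running and whether it is at rc3 or later, assuming the deployment is named kubefed-controller-manager in the install namespace, as suggested by the pod listing above:

$ oc get deployment kubefed-controller-manager -n federation-test -o jsonpath='{.spec.template.spec.containers[*].image}'

In the reproduction above the controller image is still quay.io/anbhat/kubefed:v0.1.0-rc2-PR971, i.e. pre-rc3, which is consistent with the bug being hit there.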
Verified with the images:
quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.2.0-201908061459-ose-kubefed-operator
quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.2.0-201908061126-ose-kubefed

$ oc get kubefed kubefed-resource -n federation-system -ojson|jq .spec.scope
"Cluster"

$ oc get kubefedconfig kubefed -n federation-system -ojson|jq .spec.scope
"Cluster"
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922