Bug 1722754 - kubefedconfig.spec.scope field is null when installing a cluster scoped kubefed operator
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Federation
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: low
Target Milestone: ---
Target Release: 4.2.0
Assignee: Paul Morie
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-21 08:05 UTC by Qin Ping
Modified: 2019-10-16 06:32 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:32:19 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:2922 0 None None None 2019-10-16 06:32:31 UTC

Description Qin Ping 2019-06-21 08:05:09 UTC
Description of problem:
kubefedconfig.spec.scope field is null when installing a cluster scoped kubefed operator

Version-Release number of selected component (if applicable):
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-06-19-210910   True        False         3h52m   Cluster version is 4.2.0-0.nightly-2019-06-19-210910

kubefed-operator: branch release-4.2 (10f2d7f25bbbb9d81406931576da3c2bb4fe2e98)

How reproducible:
100%

Steps to Reproduce:
1. Install a cluster scoped kubefed operator with cmd:
./scripts/install-kubefed.sh -n federation-test -d cluster -s Cluster
2. Check kubefedconfig object


Actual results:
$ oc get kubefedconfig kubefed -n federation-test -ojson|jq .spec.scope
""

Expected results:
It should be "Cluster".
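The failing check can be sketched outside a cluster. The JSON payloads and the check_scope helper below are hypothetical stand-ins for the output of "oc get kubefedconfig kubefed -n federation-test -ojson | jq .spec.scope"; they are not part of kubefed itself.

```python
import json

def check_scope(kubefedconfig_json: str, expected: str = "Cluster") -> bool:
    """Return True when spec.scope matches the scope the operator was installed with."""
    cfg = json.loads(kubefedconfig_json)
    return cfg.get("spec", {}).get("scope") == expected

# Buggy output observed in this report: scope comes back empty.
buggy = '{"spec": {"scope": ""}}'
# What a cluster-scoped install should produce.
fixed = '{"spec": {"scope": "Cluster"}}'

print(check_scope(buggy))   # False: scope is "", not "Cluster"
print(check_scope(fixed))   # True: scope matches the install flag
```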

Additional info:

$ ./scripts/install-kubefed.sh -n federation-test -d cluster -s Cluster
NS=federation-test
LOC=cluster
Operator Image Name=quay.io/sohankunkerkar/kubefed-operator:v0.1.0
Scope=Cluster
namespace/federation-test created
customresourcedefinition.apiextensions.k8s.io/kubefeds.operator.kubefed.io created
kubefed.operator.kubefed.io/kubefed-resource created
Reading the image name and sed it in
deployment.apps/kubefed-operator created
Reading the namespace in clusterrolebinding and sed it in
rolebinding.rbac.authorization.k8s.io/kubefed-operator created
clusterrolebinding.rbac.authorization.k8s.io/kubefed-operator-rolebinding created
role.rbac.authorization.k8s.io/kubefed-operator created
clusterrole.rbac.authorization.k8s.io/kubefed-operator created
serviceaccount/kubefed-operator created
customresourcedefinition.apiextensions.k8s.io/clusterpropagatedversions.core.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/dnsendpoints.multiclusterdns.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/domains.multiclusterdns.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/federatedservicestatuses.core.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/federatedtypeconfigs.core.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/ingressdnsrecords.multiclusterdns.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/kubefedclusters.core.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/kubefedconfigs.core.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/propagatedversions.core.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/replicaschedulingpreferences.scheduling.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/servicednsrecords.multiclusterdns.kubefed.k8s.io created
customresourcedefinition.apiextensions.k8s.io/kubefeds.operator.kubefed.io unchanged
Deployed all the operator yamls for kubefed-operator in the cluster

$ oc get kubefed kubefed-resource -n federation-test -ojson|jq .spec.scope
"Cluster"

$ oc get pod -n federation-test
NAME                                          READY   STATUS    RESTARTS   AGE
kubefed-controller-manager-64684cb98f-2vn52   1/1     Running   0          14m
kubefed-operator-7cc785c7b4-gx69p             1/1     Running   0          14m

$ oc get pod -n federation-test -ojson|jq .items[].spec.containers[].image
"quay.io/anbhat/kubefed:v0.1.0-rc2-PR971"
"quay.io/sohankunkerkar/kubefed-operator:v0.1.0"

Comment 1 Qin Ping 2019-06-21 08:11:13 UTC
Installing a namespace-scoped kubefed operator with the command $ ./scripts/install-kubefed.sh -n federation-test -d cluster yields the correct scope in kubefedconfig.

$ oc get kubefedconfig kubefed -n federation-test -ojson|jq .spec.scope
"Namespaced"

Comment 2 Maru Newby 2019-07-23 18:02:53 UTC
This should be fixed upstream by https://github.com/kubernetes-sigs/kubefed/pull/1015 and https://github.com/kubernetes-sigs/kubefed/pull/941, and these fixes have been released as of rc3.
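The fix amounts to defaulting the KubeFedConfig scope from the scope the operator was deployed with, instead of leaving it empty. A rough, hypothetical illustration of that defaulting behaviour (not the actual kubefed code, which lives in the PRs linked above):

```python
def default_scope(kubefedconfig: dict, operator_scope: str) -> dict:
    """If spec.scope is unset or empty, fall back to the operator's scope.

    operator_scope is "Cluster" or "Namespaced", i.e. the -s flag passed
    to install-kubefed.sh. This mimics the defaulting the upstream fix
    introduces; it is not the real implementation.
    """
    spec = kubefedconfig.setdefault("spec", {})
    if not spec.get("scope"):
        spec["scope"] = operator_scope
    return kubefedconfig

cfg = {"spec": {"scope": ""}}  # the buggy object from this report
print(default_scope(cfg, "Cluster")["spec"]["scope"])  # Cluster
```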

Comment 3 Qin Ping 2019-08-08 02:28:04 UTC
Verified with the images:
quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.2.0-201908061459-ose-kubefed-operator
quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.2.0-201908061126-ose-kubefed

$ oc get kubefed kubefed-resource -n federation-system -ojson|jq .spec.scope
"Cluster"

$ oc get kubefedconfig kubefed -n federation-system -ojson|jq .spec.scope
"Cluster"

Comment 4 errata-xmlrpc 2019-10-16 06:32:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

