Bug 1732307 - Install kubefed operator with "All namespaces on the cluster" mode, but the kubefed operator can not be used in all namespaces
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Federation
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Target Release: 4.2.0
Assignee: Sohan Kunkerkar
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-23 07:09 UTC by Qin Ping
Modified: 2019-10-16 06:30 UTC (History)
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:30:44 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:2922 0 None None None 2019-10-16 06:30:59 UTC

Description Qin Ping 2019-07-23 07:09:18 UTC
Description of problem:
The kubefed operator was installed with "All namespaces on the cluster" mode and the operator installed successfully, but when a KubeFed instance was created in the federation-system namespace to deploy the kubefed control plane, the control-plane deployment was never created.

Version-Release number of selected component (if applicable):
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-07-22-202159   True        False         4h30m   Cluster version is 4.2.0-0.nightly-2019-07-22-202159
kubefed-operator: v0.1.0

How reproducible:
100%


Steps to Reproduce:
1. Login into web console with kubeadmin user
2. Install kubefed operator with "All namespaces on the cluster" mode (Operators -> OperatorHub)
3. Create a project named federation-system (Home -> Projects)
4. Create a kubefed instance in federation-system namespace
apiVersion: operator.kubefed.io/v1alpha1
kind: KubeFed
metadata:
  name: kubefed
  namespace: federation-system
spec:
  scope: Cluster
5. Check deployment of kubefed control plane
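The steps above can also be sketched from the CLI. The Subscription fields below (channel, source, sourceNamespace) are assumptions based on typical OperatorHub community-operator defaults, not taken from this report:

```shell
# 2. Subscribe to the kubefed operator in openshift-operators, which
#    corresponds to the "All namespaces on the cluster" install mode.
#    (Subscription spec values are assumed defaults.)
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubefed-operator
  namespace: openshift-operators
spec:
  channel: alpha
  name: kubefed-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF

# 3. Create the target project.
oc new-project federation-system

# 4. Create the KubeFed instance (same manifest as in step 4 above).
cat <<EOF | oc apply -f -
apiVersion: operator.kubefed.io/v1alpha1
kind: KubeFed
metadata:
  name: kubefed
  namespace: federation-system
spec:
  scope: Cluster
EOF

# 5. Check the control-plane deployment.
oc get deployment -n federation-system
```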


Actual results:
The kubefed control plane is not installed; no deployment appears in the federation-system namespace.
$ oc get deployment -n federation-system
No resources found.


Expected results:
kubefed control plane is installed successfully.


Additional info:
1. kubefed operator is installed under openshift-operators namespace
$ oc get deployment -n openshift-operators
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
kubefed-operator   1/1     1            1           12m

2. If the KubeFed instance is created in the openshift-operators namespace instead, the kubefed control plane is deployed there successfully:
$ oc get kubefed kubefed -n openshift-operators -oyaml
apiVersion: operator.kubefed.io/v1alpha1
kind: KubeFed
metadata:
  creationTimestamp: "2019-07-23T07:08:10Z"
  generation: 1
  name: kubefed
  namespace: openshift-operators
  resourceVersion: "93668"
  selfLink: /apis/operator.kubefed.io/v1alpha1/namespaces/openshift-operators/kubefeds/kubefed
  uid: a10c68c3-ad18-11e9-b79b-0a2401d2c6e6
spec:
  scope: Cluster
status:
  version: 0.1.0

$ oc get deployment -n openshift-operators
NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
kubefed-controller-manager   2/2     2            2           36s
kubefed-operator             1/1     1            1           13m

Comment 1 Maru Newby 2019-07-23 08:21:16 UTC
I think our intention is to only support cluster-scoped kubefed deployed to the kube-federation-system namespace. I don't think it matters whether this restriction is in code or documentation. Has deployment to the kube-federation-system namespace been validated?
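If that restriction holds, the workaround would be to create the KubeFed instance in the kube-federation-system namespace instead. A minimal sketch of that manifest, assuming the same API version and scope used in this report:

```yaml
apiVersion: operator.kubefed.io/v1alpha1
kind: KubeFed
metadata:
  name: kubefed
  namespace: kube-federation-system  # namespace Comment 1 says is supported
spec:
  scope: Cluster
```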

Comment 5 Sohan Kunkerkar 2019-07-31 15:55:02 UTC
This issue should be fixed after this PR is merged.
https://github.com/openshift/kubefed-operator/pull/19

Comment 7 Qin Ping 2019-08-08 02:24:47 UTC
Verified with images:
quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.2.0-201908061126-ose-kubefed
quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.2.0-201908061459-ose-kubefed-operator

Comment 8 errata-xmlrpc 2019-10-16 06:30:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

