Bug 1733490 - install kubefed operator failed with the downstream image
Summary: install kubefed operator failed with the downstream image
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Federation
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: medium
Target Milestone: ---
Target Release: 4.2.0
Assignee: Aniket Bhat
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-26 08:49 UTC by Qin Ping
Modified: 2019-10-16 06:33 UTC (History)
1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:33:49 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:2922 None None None 2019-10-16 06:33:59 UTC

Description Qin Ping 2019-07-26 08:49:47 UTC
Description of problem:
Installing the kubefed operator fails with the downstream image.

Version-Release number of selected component (if applicable):
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-07-26-010334   True        False         5h8m    Cluster version is 4.2.0-0.nightly-2019-07-26-010334

kubefed-operator version: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift/ose-kubefed-operator:v4.2.0

How reproducible:
100%

Steps to Reproduce:
1. Pull kubefed-operator:v4.2.0 from brew registry and push it to the private registry
2. Create an OperatorSource named kubefed; its metadata references the downstream image.
$ cat operatorsource.yaml 
apiVersion: operators.coreos.com/v1
kind: OperatorSource
metadata:
  name: kubefed
  namespace: openshift-marketplace
spec:
  authorizationToken:
    secretName: kubefed
  displayName: kubefed downstream Operators
  endpoint: https://quay.io/cnr
  publisher: kubefed
  registryNamespace: piqin
  type: appregistry
3. Install kubefed-operator from operator hub
4. Check the installation
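Steps 1 and 2 above can be sketched roughly as follows. This is a hedged illustration, not taken verbatim from the report: the private registry hostname is a placeholder, and the exact mirroring tool (skopeo here) is an assumption.

```shell
# Step 1 (sketch): mirror the brew image to a private registry.
# registry.example.com is a placeholder hostname; credentials and
# repository paths will differ per environment.
skopeo copy \
  docker://brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift/ose-kubefed-operator:v4.2.0 \
  docker://registry.example.com/openshift/ose-kubefed-operator:v4.2.0

# Step 2 (sketch): create the OperatorSource from the manifest shown above.
oc apply -f operatorsource.yaml
```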

Actual results:
$ oc get pod
NAME                                READY   STATUS             RESTARTS   AGE
kubefed-operator-59c4865986-rtkz4   0/1     CrashLoopBackOff   6          7m44s



Expected results:
kubefed operator should be installed successfully.


Additional info:

$ oc get pod kubefed-operator-59c4865986-rtkz4 -o json| jq .spec.containers[].image
"image-registry.openshift-image-registry.svc:5000/openshift/ose-kubefed-operator:v4.2.0"


$ oc logs kubefed-operator-59c4865986-rtkz4
{"level":"info","ts":1564130342.887514,"logger":"cmd","msg":"Go Version: go1.12.6"}
{"level":"info","ts":1564130342.8875341,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1564130342.887538,"logger":"cmd","msg":"Version of operator-sdk: v0.7.0+git"}
{"level":"info","ts":1564130342.887541,"logger":"cmd","msg":"Operator Version: 0.1.0"}
{"level":"info","ts":1564130342.887553,"logger":"cmd","msg":"Starting dir: /"}
{"level":"info","ts":1564130342.8878043,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1564130343.018284,"logger":"leader","msg":"Found existing lock with my name. I was likely restarted."}
{"level":"info","ts":1564130343.018307,"logger":"leader","msg":"Continuing as the leader."}
{"level":"info","ts":1564130343.111181,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1564130343.1113234,"logger":"manifestival","msg":"Reading file","name":"deploy/resources"}
{"level":"info","ts":1564130343.114468,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kubefed-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1564130343.1146507,"logger":"manifestival","msg":"Reading file","name":"deploy/resources/webhook"}
{"level":"info","ts":1564130343.1186898,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kubefedwebhook-controller","source":"kind source: /, Kind="}
{"level":"error","ts":1564130343.198646,"logger":"kubebuilder.source","msg":"if kind is a CRD, it should be installed before calling Start","kind":"KubeFedWebHook.operator.kubefed.io","error":"no matches for kind \"KubeFedWebHook\" in version \"operator.kubefed.io/v1alpha1\"","stacktrace":"github.com/openshift/kubefed-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/kubefed-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/openshift/kubefed-operator/vendor/sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start\n\t/go/src/github.com/openshift/kubefed-operator/vendor/sigs.k8s.io/controller-runtime/pkg/source/source.go:89\ngithub.com/openshift/kubefed-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Watch\n\t/go/src/github.com/openshift/kubefed-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:122\ngithub.com/openshift/kubefed-operator/pkg/controller/kubefedwebhook.add\n\t/go/src/github.com/openshift/kubefed-operator/pkg/controller/kubefedwebhook/kubefedwebhook_controller.go:68\ngithub.com/openshift/kubefed-operator/pkg/controller/kubefedwebhook.Add\n\t/go/src/github.com/openshift/kubefed-operator/pkg/controller/kubefedwebhook/kubefedwebhook_controller.go:50\ngithub.com/openshift/kubefed-operator/pkg/controller.AddToManager\n\t/go/src/github.com/openshift/kubefed-operator/pkg/controller/controller.go:13\nmain.main\n\t/go/src/github.com/openshift/kubefed-operator/cmd/manager/main.go:114\nruntime.main\n\t/opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/runtime/proc.go:200"}
{"level":"error","ts":1564130343.1987147,"logger":"cmd","msg":"","error":"no matches for kind \"KubeFedWebHook\" in version \"operator.kubefed.io/v1alpha1\"","stacktrace":"github.com/openshift/kubefed-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/kubefed-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nmain.main\n\t/go/src/github.com/openshift/kubefed-operator/cmd/manager/main.go:115\nruntime.main\n\t/opt/rh/go-toolset-1.12/root/usr/lib/go-toolset-1.12-golang/src/runtime/proc.go:200"}
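The error above says there is no match for kind KubeFedWebHook in operator.kubefed.io/v1alpha1, i.e. the CRD was not installed before the controller started watching it. One way to confirm that (a diagnostic sketch, not part of the original report) is to ask the API server directly:

```shell
# Check whether the KubeFedWebHook CRD is registered; the CrashLoopBackOff
# above is consistent with it being absent.
oc get crd kubefedwebhooks.operator.kubefed.io

# List everything the operator.kubefed.io API group actually serves.
oc api-resources --api-group=operator.kubefed.io
```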

Comment 1 Aniket Bhat 2019-07-26 15:04:39 UTC
I think the CSV you are using to install is coming from the community-operators repo. I ran into this as well.
Before installing the operator from your catalog source, make sure the community-operators catalog source is deleted. The kubefed operator CSV from that catalog source happily steps on the custom one. Notice how the CSV that is applied doesn't have the KubeFedWebhook CRD:

MacBook-Pro:ping_config anbhat$ oc get csv kubefed-operator.v0.1.0 -n federation-system -o yaml | grep -i KubeFedWebHook
MacBook-Pro:ping_config anbhat$
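The workaround described in Comment 1 can be sketched as the following commands; the catalog source name community-operators and its namespace are assumptions based on a default OperatorHub setup, not quoted from the report.

```shell
# Remove the community-operators catalog source so its kubefed CSV
# cannot shadow the custom one, then reinstall the operator.
oc delete catalogsource community-operators -n openshift-marketplace

# Verify the CSV that is applied actually carries the KubeFedWebHook CRD.
oc get csv kubefed-operator.v0.1.0 -n federation-system -o yaml | grep -i kubefedwebhook
```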

Comment 2 Qin Ping 2019-07-29 08:04:48 UTC
Hi Aniket,

I tried with the latest kubefed-operator manifests (release-4.2, commit 3725e11bd13a90238a8b3084a3e10fd5699d7835).

It works well with the ose-kubefed-operator:v4.2.0 downstream image.

So, I'll mark the bug as verified.

Comment 3 errata-xmlrpc 2019-10-16 06:33:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

