Is there any chance there are two catalog operator pods running (anywhere in the cluster)? I have seen this issue when we accidentally had two catalog operators running at once. I moved the severity to low since InstallPlan application is idempotent (no errors will occur from having two created for you).
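For reference, the quick way to rule that out is just listing catalog-operator pods across every namespace (a sketch; the grep pattern assumes the pods keep the default "catalog-operator" name prefix):

# More than one line of output here would mean duplicate catalog operators,
# which is the situation that can produce two InstallPlans for one Subscription.
oc get pods --all-namespaces | grep catalog-operator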
Evan,

No, only one catalog operator was running at that time.

> I moved the severity to low since InstallPlan application is idempotent (no errors will occur from having two created for you)

Yes, no errors, but it's confusing for users. I think we should find the root cause and fix it, even though it's a minor problem.
I encountered this issue again, details:

[root@qe-jiazha-311-gce-1-master-etcd-1 ~]# oc get subscription
NAME                       PACKAGE              SOURCE                CHANNEL
mongodb-enterprise-qss5n   mongodb-enterprise   certified-operators   preview

[root@qe-jiazha-311-gce-1-master-etcd-1 ~]# oc get installplan
NAME                                   CSV                      SOURCE                APPROVAL    APPROVED
install-mongodboperator.v0.3.2-722mp   mongodboperator.v0.3.2   certified-operators   Automatic   false
install-mongodboperator.v0.3.2-hfzq6   mongodboperator.v0.3.2   certified-operators   Automatic   false

Only one catalog operator is running:

[root@qe-jiazha-311-gce-1-master-etcd-1 ~]# oc get pods -n operator-lifecycle-manager
NAME                                READY   STATUS    RESTARTS   AGE
catalog-operator-76c846684c-pm4n9   1/1     Running   0          1h
olm-operator-5b7f7c4556-qwfhm       1/1     Running   0          1h

[root@qe-jiazha-311-gce-1-master-etcd-1 ~]# oc get csv
NAME                     DISPLAY   VERSION   REPLACES   PHASE
mongodboperator.v0.3.2   MongoDB   0.3.2                Succeeded

[root@qe-jiazha-311-gce-1-master-etcd-1 ~]# oc get pods
NAME                                           READY   STATUS    RESTARTS   AGE
mongodb-enterprise-operator-7b7b8b9889-67trr   1/1     Running   0          1m
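One way to confirm both InstallPlans were generated by the same Subscription is to look at their owner references (a sketch; it assumes the catalog operator records the owning Subscription in .metadata.ownerReferences, and the [0] index may differ if other owners are present):

# Print each InstallPlan together with the object that owns it; both duplicates
# should point back to the same Subscription (mongodb-enterprise-qss5n here).
oc get installplan -o jsonpath='{range .items[*]}{.metadata.name}{" owned by "}{.metadata.ownerReferences[0].name}{"\n"}{end}'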
What about `oc get pods --all-namespaces`? I'm concerned there might be a catalog operator in another namespace somehow.
Evan,

No, only one catalog-operator is running in the cluster:

[root@share-wmenghaglb3116-mez-1 ~]# oc get pods --all-namespaces | grep catalog-operator
operator-lifecycle-manager   catalog-operator-76c846684c-r6h6m   1/1   Running   0   50m

[root@share-wmenghaglb3116-mez-1 ~]# oc get pods --all-namespaces | grep operator
jian                         etcd-operator-7b49974f5b-5j9lf                 3/3   Running   0   17m
jian3                        etcd-operator-7b49974f5b-k5k6w                 3/3   Running   0   42s
openshift-monitoring         cluster-monitoring-operator-75c6b544dd-npkxx   1/1   Running   0   1d
openshift-monitoring         prometheus-operator-564dd668b-8t6cf            1/1   Running   0   1d
operator-lifecycle-manager   catalog-operator-76c846684c-r6h6m              1/1   Running   0   47m
operator-lifecycle-manager   olm-operator-5b7f7c4556-wg89n                  1/1   Running   0   47m

[root@share-wmenghaglb3116-mez-1 ~]# oc get subscription -n jian3
NAME         PACKAGE   SOURCE         CHANNEL
etcd-v58f8   etcd      rh-operators   alpha

Two InstallPlans were created by the same Subscription:

[root@share-wmenghaglb3116-mez-1 ~]# oc get installplan -n jian3
NAME                                CSV                   SOURCE         APPROVAL    APPROVED
install-etcdoperator.v0.9.2-7w9mg   etcdoperator.v0.9.2   rh-operators   Automatic   false
install-etcdoperator.v0.9.2-gq2b9   etcdoperator.v0.9.2   rh-operators   Automatic   false

[root@share-wmenghaglb3116-mez-1 ~]# oc get csv -n jian3
NAME                  DISPLAY   VERSION   REPLACES              PHASE
etcdoperator.v0.9.2   etcd      0.9.2     etcdoperator.v0.9.0   Succeeded

[root@share-wmenghaglb3116-mez-1 ~]# oc get pods -n jian3
NAME                             READY   STATUS    RESTARTS   AGE
etcd-operator-7b49974f5b-k5k6w   3/3     Running   0          2m

[root@share-wmenghaglb3116-mez-1 ~]# oc get pods
NAME                                READY   STATUS    RESTARTS   AGE
catalog-operator-76c846684c-r6h6m   1/1     Running   0          51m
olm-operator-5b7f7c4556-wg89n       1/1     Running   0          51m

[root@share-wmenghaglb3116-mez-1 ~]# oc rsh olm-operator-5b7f7c4556-wg89n
sh-4.2$ olm -version
OLM version: 0.6.0
git commit: 3df6bea
image: registry.reg-aws.openshift.com:443/openshift3/ose-operator-lifecycle-manager:v3.11
imageID: docker-pullable://registry.reg-aws.openshift.com:443/openshift3/ose-operator-lifecycle-manager@sha256:a706b51ae87dc9faa98b484ab06bf8ad2f4c3283de0c024b253f273c981318cc
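If it helps with debugging, the two InstallPlans can be dumped and diffed to show they are identical apart from the generated name and metadata, which is why the duplicate has no functional effect (a sketch using the names from the output above):

# Compare the duplicate InstallPlans; only generated fields (name, uid,
# creationTimestamp, resourceVersion) should differ.
oc get installplan install-etcdoperator.v0.9.2-7w9mg -n jian3 -o yaml > plan-a.yaml
oc get installplan install-etcdoperator.v0.9.2-gq2b9 -n jian3 -o yaml > plan-b.yaml
diff plan-a.yaml plan-b.yaml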
It seems like a bug then, but I've moved it to 4.0 since it has no functional impact on users.
*** Bug 1633885 has been marked as a duplicate of this bug. ***
I haven't encountered this issue for a long time while using OLM in OCP 4.0, so I'm marking it verified. Please feel free to reopen it if you encounter this issue again.

Current version:

mac:aws-ocp jianzhang$ oc exec olm-operator-5fdc6d559f-2x7zp -- olm -version
OLM version: 0.8.0
git commit: c53c51a
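Verification was just re-running the earlier checks on the 4.0 cluster (a sketch of the commands used above):

# Each Subscription should now map to exactly one InstallPlan.
oc get subscription --all-namespaces
oc get installplan --all-namespaces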
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758