Description of problem:
Found during 4.1 testing on bare metal: various ClusterOperators report an empty value for the FAILING status type. As multiple operators are involved, I couldn't find the right bug component, so I am filing this under Networking for now. I believe empty FAILING values would block `oc adm upgrade` and other features. Opening this bug to confirm whether this is expected behaviour on bare metal. Observed the same behaviour on AWS as well.

$ oc get clusteroperators.config.openshift.io
NAME                           VERSION                             AVAILABLE   PROGRESSING   FAILING   SINCE
authentication                 4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
cloud-credential               4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
cluster-autoscaler             4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
console                        4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
dns                            4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
image-registry                 4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
ingress                        4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
kube-apiserver                 4.1.0-0.nightly-2019-04-22-005054   True        False                   12h
kube-controller-manager        4.1.0-0.nightly-2019-04-22-005054   True        False                   12h
kube-scheduler                 4.1.0-0.nightly-2019-04-22-005054   True        False                   12h
machine-api                    4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
machine-config                 4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
marketplace                    4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
monitoring                     4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
network                        4.1.0-0.nightly-2019-04-22-005054   True        False                   81m
node-tuning                    4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
openshift-apiserver            4.1.0-0.nightly-2019-04-22-005054   True        False                   12h
openshift-controller-manager   4.1.0-0.nightly-2019-04-22-005054   True        False                   12h
openshift-samples              4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h

An excerpt from the network operator pod logs shows a 'Degraded' type; I am not sure what that might mean:

2019/04/22 09:25:23 Reconciling update to openshift-sdn/sdn
2019/04/22 09:25:23 Updated ClusterOperator with status:
conditions:
- lastTransitionTime: "2019-04-22T09:14:36Z"
  status: "False"
  type: Degraded   <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
- lastTransitionTime: "2019-04-22T09:14:51Z"
  status: "False"
  type: Progressing
- lastTransitionTime: "2019-04-22T09:14:51Z"
  status: "True"
  type: Available
extension: null
versions:
- name: operator
  version: 4.1.0-0.nightly-2019-04-22-005054

Version-Release number of selected component (if applicable):
4.1.0-0.nightly-2019-04-22-005054

How reproducible:
Always

Steps to Reproduce:
1. Set up a 4.1 cluster on bare metal or on AWS.

Actual results:
ClusterOperators status does not report values for the FAILING type.

Expected results:
ClusterOperators status is expected to show values for all condition types.

Additional info:
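To pinpoint which operators are affected, one can inspect `oc get clusteroperators -o json` and list the operators whose `status.conditions` lack the condition type in question. A minimal sketch, assuming that JSON shape; the sample data and the `operators_missing` helper are hypothetical, trimmed to the fields the check needs:

```python
import json

# Hypothetical excerpt of `oc get clusteroperators -o json` output,
# reduced to the fields this sketch needs.
SAMPLE = json.loads("""
{
  "items": [
    {"metadata": {"name": "network"},
     "status": {"conditions": [
       {"type": "Degraded", "status": "False"},
       {"type": "Progressing", "status": "False"},
       {"type": "Available", "status": "True"}]}},
    {"metadata": {"name": "dns"},
     "status": {"conditions": [
       {"type": "Failing", "status": "False"},
       {"type": "Progressing", "status": "False"},
       {"type": "Available", "status": "True"}]}}
  ]
}
""")

def operators_missing(cond_type, clusteroperators):
    """Return names of ClusterOperators that do not report cond_type at all."""
    missing = []
    for co in clusteroperators["items"]:
        types = {c["type"] for c in co["status"].get("conditions", [])}
        if cond_type not in types:
            missing.append(co["metadata"]["name"])
    return missing

# The network operator reports Degraded instead of Failing, so it shows up here:
print(operators_missing("Failing", SAMPLE))  # -> ['network']
```

Against a live cluster, the same check could be run by piping `oc get clusteroperators -o json` into a script like this instead of using the embedded sample.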
Merged. Now operators that have not yet transitioned from Failing to Degraded will show blanks, but we can fix them on a per-operator basis.
This also happened on an IPI install, so it is not specific to bare-metal installs; changing the title to reflect that.

# oc get clusteroperator
NAME                                 VERSION                             AVAILABLE   PROGRESSING   FAILING   SINCE
authentication                       4.1.0-0.nightly-2019-04-22-005054   True        False         False     28m
cloud-credential                     4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
cluster-autoscaler                   4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
console                              4.1.0-0.nightly-2019-04-22-005054   True        False         False     29m
dns                                  4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
image-registry                       4.1.0-0.nightly-2019-04-22-005054   True        False         False     31m
ingress                              4.1.0-0.nightly-2019-04-22-005054   True        False         False     30m
kube-apiserver                       4.1.0-0.nightly-2019-04-22-005054   True        False                   36m
kube-controller-manager              4.1.0-0.nightly-2019-04-22-005054   True        False                   38m
kube-scheduler                       4.1.0-0.nightly-2019-04-22-005054   True        False                   35m
machine-api                          4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
machine-config                       4.1.0-0.nightly-2019-04-22-005054   True        False         False     39m
marketplace                          4.1.0-0.nightly-2019-04-22-005054   True        False         False     31m
monitoring                           4.1.0-0.nightly-2019-04-22-005054   True        False         False     28m
network                              4.1.0-0.nightly-2019-04-22-005054   True        False                   40m
node-tuning                          4.1.0-0.nightly-2019-04-22-005054   True        False         False     32m
openshift-apiserver                  4.1.0-0.nightly-2019-04-22-005054   True        False                   35m
openshift-controller-manager         4.1.0-0.nightly-2019-04-22-005054   True        False                   39m
openshift-samples                    4.1.0-0.nightly-2019-04-22-005054   True        False         False     33m
operator-lifecycle-manager           4.1.0-0.nightly-2019-04-22-005054   True        False         False     38m
operator-lifecycle-manager-catalog   4.1.0-0.nightly-2019-04-22-005054   True        False         False     38m
service-ca                           4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
service-catalog-apiserver            4.1.0-0.nightly-2019-04-22-005054   True        False         False     32m
service-catalog-controller-manager   4.1.0-0.nightly-2019-04-22-005054   True        False         False     32m
storage                              4.1.0-0.nightly-2019-04-22-005054   True        False         False     32m
*** Bug 1701460 has been marked as a duplicate of this bug. ***
@trking I am seeing "Degraded" now on the latest build 4.1.0-0.nightly-2019-04-23-100608, but a lot of operators report an empty value for it. Is that expected behaviour? I believe the operators with an empty value do not yet support Degraded?

$ oc get clusteroperators.config.openshift.io
NAME                                 VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                       4.1.0-0.nightly-2019-04-23-100608   True        False                    13m
cloud-credential                     4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
cluster-autoscaler                   4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
console                              4.1.0-0.nightly-2019-04-23-100608   True        False         False      13m
dns                                  4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
image-registry                       4.1.0-0.nightly-2019-04-23-100608   True        False                    15m
ingress                              4.1.0-0.nightly-2019-04-23-100608   True        False                    13m
kube-apiserver                       4.1.0-0.nightly-2019-04-23-100608   True        False         False      20m
kube-controller-manager              4.1.0-0.nightly-2019-04-23-100608   True        False         False      21m
kube-scheduler                       4.1.0-0.nightly-2019-04-23-100608   True        False         False      20m
machine-api                          4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
machine-config                       4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
marketplace                          4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
monitoring                           4.1.0-0.nightly-2019-04-23-100608   True        False                    11m
network                              4.1.0-0.nightly-2019-04-23-100608   True        False         False      23m
node-tuning                          4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
openshift-apiserver                  4.1.0-0.nightly-2019-04-23-100608   True        False         False      19m
openshift-controller-manager         4.1.0-0.nightly-2019-04-23-100608   True        False         False      21m
openshift-samples                    4.1.0-0.nightly-2019-04-23-100608   True        False                    12m
operator-lifecycle-manager           4.1.0-0.nightly-2019-04-23-100608   True        False                    21m
operator-lifecycle-manager-catalog   4.1.0-0.nightly-2019-04-23-100608   True        False                    21m
service-ca                           4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
service-catalog-apiserver            4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
service-catalog-controller-manager   4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
storage                              4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
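One way to read the blanks above: the DEGRADED column is filled in from the condition whose `type` matches the column, so an operator that does not yet report a Degraded condition at all renders an empty cell rather than "False". A minimal sketch of that lookup (the `column_value` helper is hypothetical, not the actual printer code):

```python
def column_value(conditions, cond_type):
    """Mimic a status-column lookup: return the matching condition's
    status, or '' when the operator does not report that type at all
    (which is rendered as a blank cell in `oc get` output)."""
    for c in conditions:
        if c["type"] == cond_type:
            return c["status"]
    return ""

# An operator that has already moved to Degraded:
conds = [
    {"type": "Degraded", "status": "False"},
    {"type": "Progressing", "status": "False"},
    {"type": "Available", "status": "True"},
]
print(column_value(conds, "Degraded"))  # -> False
print(column_value(conds, "Failing"))   # -> '' (blank cell, no such condition)
```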
All operators should support Degraded soon. For example, I'm fixing the registry as part of https://github.com/openshift/cluster-image-registry-operator/pull/260
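Conceptually, each per-operator fix amounts to reporting the Degraded condition type in place of the deprecated Failing one, while keeping the status value. A hedged sketch of such a migration (the function is illustrative only and is not taken from the linked PR):

```python
def migrate_failing_to_degraded(conditions):
    """Illustrative sketch: rename the deprecated Failing condition type
    to Degraded, preserving its status; all other conditions pass through
    unchanged."""
    migrated = []
    for cond in conditions:
        cond = dict(cond)  # avoid mutating the caller's data
        if cond["type"] == "Failing":
            cond["type"] = "Degraded"
        migrated.append(cond)
    return migrated

old = [
    {"type": "Failing", "status": "False"},
    {"type": "Available", "status": "True"},
]
print(migrate_failing_to_degraded(old))
# -> [{'type': 'Degraded', 'status': 'False'}, {'type': 'Available', 'status': 'True'}]
```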
(In reply to W. Trevor King from comment #6) > All operators should support Degraded soon. For example, I'm fixing the > registry as part of > https://github.com/openshift/cluster-image-registry-operator/pull/260 Understood. Thanks, Trevor!
Verifying based on the fact that FAILING has now been replaced with DEGRADED in 4.1.0-0.nightly-2019-04-23-100608, and that separate bugs need to be filed on a per-operator basis by component owners, per comment 1 and a discussion between @eparis, @trking, @pruan, and me. @pruan has sent an email to aos-qe asking teams to file migration bugs targeting 4.1 for any operators that show a blank value in the newest nightly build. Thanks!
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758