Bug 1702087
| Summary: | Various operators' status for FAILING type is reported empty | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Anurag saxena <anusaxen> |
| Component: | Installer | Assignee: | W. Trevor King <wking> |
| Status: | CLOSED ERRATA | QA Contact: | Johnny Liu <jialiu> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.1.0 | CC: | aos-bugs, evb, jokerman, mmccomas, wking, zzhao |
| Target Milestone: | --- | Keywords: | BetaBlocker, Regression |
| Target Release: | 4.1.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-06-04 10:47:50 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Anurag saxena, 2019-04-22 22:59:46 UTC
Merged. Now operators that have not yet transitioned from Failing to Degraded will get blanks, but we can fix them on a per-operator basis.

This also happened on an IPI install, so it is not specific to bare-metal installs; changing the title to reflect that.

```
# oc get clusteroperator
NAME                                 VERSION                             AVAILABLE   PROGRESSING   FAILING   SINCE
authentication                       4.1.0-0.nightly-2019-04-22-005054   True        False         False     28m
cloud-credential                     4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
cluster-autoscaler                   4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
console                              4.1.0-0.nightly-2019-04-22-005054   True        False         False     29m
dns                                  4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
image-registry                       4.1.0-0.nightly-2019-04-22-005054   True        False         False     31m
ingress                              4.1.0-0.nightly-2019-04-22-005054   True        False         False     30m
kube-apiserver                       4.1.0-0.nightly-2019-04-22-005054   True        False                   36m
kube-controller-manager              4.1.0-0.nightly-2019-04-22-005054   True        False                   38m
kube-scheduler                       4.1.0-0.nightly-2019-04-22-005054   True        False                   35m
machine-api                          4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
machine-config                       4.1.0-0.nightly-2019-04-22-005054   True        False         False     39m
marketplace                          4.1.0-0.nightly-2019-04-22-005054   True        False         False     31m
monitoring                           4.1.0-0.nightly-2019-04-22-005054   True        False         False     28m
network                              4.1.0-0.nightly-2019-04-22-005054   True        False                   40m
node-tuning                          4.1.0-0.nightly-2019-04-22-005054   True        False         False     32m
openshift-apiserver                  4.1.0-0.nightly-2019-04-22-005054   True        False                   35m
openshift-controller-manager         4.1.0-0.nightly-2019-04-22-005054   True        False                   39m
openshift-samples                    4.1.0-0.nightly-2019-04-22-005054   True        False         False     33m
operator-lifecycle-manager           4.1.0-0.nightly-2019-04-22-005054   True        False         False     38m
operator-lifecycle-manager-catalog   4.1.0-0.nightly-2019-04-22-005054   True        False         False     38m
service-ca                           4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
service-catalog-apiserver            4.1.0-0.nightly-2019-04-22-005054   True        False         False     32m
service-catalog-controller-manager   4.1.0-0.nightly-2019-04-22-005054   True        False         False     32m
storage                              4.1.0-0.nightly-2019-04-22-005054   True        False         False     32m
```

*** Bug 1701460 has been marked as a duplicate of this bug. ***
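The blank cells fall out of how `oc get` renders these columns: each condition column is filled from the matching type in the ClusterOperator's `.status.conditions`, so an operator that has not yet published a condition of that type shows an empty cell rather than False. One way to check what a given operator actually publishes, as a sketch (the choice of kube-apiserver as the example operator is mine, not from the thread; the JSONPath syntax is standard oc/kubectl):

```
# Print each condition type and status this operator actually publishes.
# A condition type that never appears here renders as a blank column in
# `oc get clusteroperator` output.
oc get clusteroperator kube-apiserver \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
```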
@trking I am seeing "Degraded" now on the latest build 4.1.0-0.nightly-2019-04-23-100608, but a lot of operators report an empty value for it. Is this expected behavior? I believe the operators with an empty value do not yet support Degraded?

```
$ oc get clusteroperators.config.openshift.io
NAME                                 VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                       4.1.0-0.nightly-2019-04-23-100608   True        False                    13m
cloud-credential                     4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
cluster-autoscaler                   4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
console                              4.1.0-0.nightly-2019-04-23-100608   True        False         False      13m
dns                                  4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
image-registry                       4.1.0-0.nightly-2019-04-23-100608   True        False                    15m
ingress                              4.1.0-0.nightly-2019-04-23-100608   True        False                    13m
kube-apiserver                       4.1.0-0.nightly-2019-04-23-100608   True        False         False      20m
kube-controller-manager              4.1.0-0.nightly-2019-04-23-100608   True        False         False      21m
kube-scheduler                       4.1.0-0.nightly-2019-04-23-100608   True        False         False      20m
machine-api                          4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
machine-config                       4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
marketplace                          4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
monitoring                           4.1.0-0.nightly-2019-04-23-100608   True        False                    11m
network                              4.1.0-0.nightly-2019-04-23-100608   True        False         False      23m
node-tuning                          4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
openshift-apiserver                  4.1.0-0.nightly-2019-04-23-100608   True        False         False      19m
openshift-controller-manager         4.1.0-0.nightly-2019-04-23-100608   True        False         False      21m
openshift-samples                    4.1.0-0.nightly-2019-04-23-100608   True        False                    12m
operator-lifecycle-manager           4.1.0-0.nightly-2019-04-23-100608   True        False                    21m
operator-lifecycle-manager-catalog   4.1.0-0.nightly-2019-04-23-100608   True        False                    21m
service-ca                           4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
service-catalog-apiserver            4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
service-catalog-controller-manager   4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
storage                              4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
```

All operators should support Degraded soon. For example, I'm fixing the registry as part of https://github.com/openshift/cluster-image-registry-operator/pull/260

(In reply to W. Trevor King from comment #6)
> All operators should support Degraded soon. For example, I'm fixing the
> registry as part of
> https://github.com/openshift/cluster-image-registry-operator/pull/260

Understood. Thanks, Trevor!

Verifying based on the fact that FAILING has now been replaced with DEGRADED in 4.1.0-0.nightly-2019-04-23-100608; separate bugs need to be reported on a per-operator basis by component owners, per comment 1 and the discussion between @eparis, @trking, @pruan, and me. @pruan has sent an email to aos-qe to file migration bugs targeting 4.1 for any operators that show up blank in the newest nightly build. Thanks!

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758
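For the per-operator migration bugs mentioned above, the operators that still lack the new condition can be enumerated mechanically. A sketch, assuming jq is available on the client (the jq filter is my illustration, not a command from the thread):

```
# List ClusterOperators whose .status.conditions contain no "Degraded"
# entry, i.e. the ones that render a blank DEGRADED column and still
# need a per-operator migration bug.
oc get clusteroperators.config.openshift.io -o json \
  | jq -r '.items[]
           | select(([.status.conditions[]?.type] | index("Degraded")) | not)
           | .metadata.name'
```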