Bug 1702087 - Various operators' status for FAILING type is reported empty
Summary: Various operators' status for FAILING type is reported empty
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: W. Trevor King
QA Contact: Johnny Liu
URL:
Whiteboard:
Duplicates: 1701460
Depends On:
Blocks:
 
Reported: 2019-04-22 22:59 UTC by Anurag saxena
Modified: 2019-06-04 10:47 UTC
CC List: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:47:50 UTC
Target Upstream Version:
Embargoed:




Links:
- GitHub: openshift/cluster-version-operator pull 172 (last updated 2019-04-22 23:13:51 UTC)
- Red Hat Product Errata: RHBA-2019:0758 (last updated 2019-06-04 10:47:58 UTC)

Description Anurag saxena 2019-04-22 22:59:46 UTC
Description of problem: Found during 4.1 testing on bare metal: various ClusterOperators report an empty value for the FAILING status type. Since multiple operators are involved, I couldn't find the right bug component, so I'm filing it under networking for now. I believe empty FAILING status values would block oc adm upgrade and other features. Opening this bug to confirm whether this is expected behaviour on bare metal. The same behaviour was observed on AWS as well.

$ oc get clusteroperators.config.openshift.io
NAME                                 VERSION                             AVAILABLE   PROGRESSING   FAILING   SINCE
authentication                       4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
cloud-credential                     4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
cluster-autoscaler                   4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
console                              4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
dns                                  4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
image-registry                       4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
ingress                              4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
kube-apiserver                       4.1.0-0.nightly-2019-04-22-005054   True        False                   12h
kube-controller-manager              4.1.0-0.nightly-2019-04-22-005054   True        False                   12h
kube-scheduler                       4.1.0-0.nightly-2019-04-22-005054   True        False                   12h
machine-api                          4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
machine-config                       4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
marketplace                          4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
monitoring                           4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
network                              4.1.0-0.nightly-2019-04-22-005054   True        False                   81m
node-tuning                          4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h
openshift-apiserver                  4.1.0-0.nightly-2019-04-22-005054   True        False                   12h
openshift-controller-manager         4.1.0-0.nightly-2019-04-22-005054   True        False                   12h
openshift-samples                    4.1.0-0.nightly-2019-04-22-005054   True        False         False     12h

An excerpt from the network operator pod logs shows a 'Degraded' condition type; I'm not sure what that might mean:

2019/04/22 09:25:23 Reconciling update to openshift-sdn/sdn
2019/04/22 09:25:23 Updated ClusterOperator with status:
conditions:
- lastTransitionTime: "2019-04-22T09:14:36Z"
  status: "False"
  type: Degraded   <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
- lastTransitionTime: "2019-04-22T09:14:51Z"
  status: "False"
  type: Progressing
- lastTransitionTime: "2019-04-22T09:14:51Z"
  status: "True"
  type: Available
extension: null
versions:
- name: operator
  version: 4.1.0-0.nightly-2019-04-22-005054
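
[Editorial note] The blank FAILING cells line up with the log above: presumably the FAILING column in 'oc get clusteroperators' is rendered from a status condition literally named Failing, and the network operator (like the other blank ones) has already switched to reporting Degraded instead, so there is nothing for the column to show. As a minimal sketch (not a command from the original report), the condition types can be dumped directly to confirm:

$ oc get clusteroperator network \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
Degraded=False
Progressing=False
Available=True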


Version-Release number of selected component (if applicable): 4.1.0-0.nightly-2019-04-22-005054

How reproducible: Always

Steps to Reproduce:
1. Set up a 4.1 cluster on bare metal or on AWS.
2. Run 'oc get clusteroperators' and check the FAILING column.

Actual results: ClusterOperator status does not report a value for the FAILING type on several operators.

Expected results: ClusterOperator status shows values for all condition types.

Additional info:

Comment 1 W. Trevor King 2019-04-23 04:10:47 UTC
Merged.  Now operators that have not yet transitioned from Failing to Degraded will get blanks, but we can fix them on a per-operator basis.
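
[Editorial note] For anyone tracking the per-operator fixes, a quick way to list the ClusterOperators that do not yet report a Degraded condition (and will therefore show a blank in the new column) is the sketch below; it assumes jq is available and is not a command from this report:

$ oc get clusteroperators -o json \
    | jq -r '.items[] | select([.status.conditions[]?.type] | index("Degraded") | not) | .metadata.name'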

Comment 2 Johnny Liu 2019-04-23 07:26:26 UTC
This also happens on an IPI install, so it is not specific to bare-metal installs; changing the title to reflect that.

# oc get clusteroperator
NAME                                 VERSION                             AVAILABLE   PROGRESSING   FAILING   SINCE
authentication                       4.1.0-0.nightly-2019-04-22-005054   True        False         False     28m
cloud-credential                     4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
cluster-autoscaler                   4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
console                              4.1.0-0.nightly-2019-04-22-005054   True        False         False     29m
dns                                  4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
image-registry                       4.1.0-0.nightly-2019-04-22-005054   True        False         False     31m
ingress                              4.1.0-0.nightly-2019-04-22-005054   True        False         False     30m
kube-apiserver                       4.1.0-0.nightly-2019-04-22-005054   True        False                   36m
kube-controller-manager              4.1.0-0.nightly-2019-04-22-005054   True        False                   38m
kube-scheduler                       4.1.0-0.nightly-2019-04-22-005054   True        False                   35m
machine-api                          4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
machine-config                       4.1.0-0.nightly-2019-04-22-005054   True        False         False     39m
marketplace                          4.1.0-0.nightly-2019-04-22-005054   True        False         False     31m
monitoring                           4.1.0-0.nightly-2019-04-22-005054   True        False         False     28m
network                              4.1.0-0.nightly-2019-04-22-005054   True        False                   40m
node-tuning                          4.1.0-0.nightly-2019-04-22-005054   True        False         False     32m
openshift-apiserver                  4.1.0-0.nightly-2019-04-22-005054   True        False                   35m
openshift-controller-manager         4.1.0-0.nightly-2019-04-22-005054   True        False                   39m
openshift-samples                    4.1.0-0.nightly-2019-04-22-005054   True        False         False     33m
operator-lifecycle-manager           4.1.0-0.nightly-2019-04-22-005054   True        False         False     38m
operator-lifecycle-manager-catalog   4.1.0-0.nightly-2019-04-22-005054   True        False         False     38m
service-ca                           4.1.0-0.nightly-2019-04-22-005054   True        False         False     40m
service-catalog-apiserver            4.1.0-0.nightly-2019-04-22-005054   True        False         False     32m
service-catalog-controller-manager   4.1.0-0.nightly-2019-04-22-005054   True        False         False     32m
storage                              4.1.0-0.nightly-2019-04-22-005054   True        False         False     32m

Comment 4 Casey Callendrello 2019-04-23 09:04:35 UTC
*** Bug 1701460 has been marked as a duplicate of this bug. ***

Comment 5 Anurag saxena 2019-04-23 14:38:19 UTC
@trking I am seeing "Degraded" now on the latest build 4.1.0-0.nightly-2019-04-23-100608, but a lot of operators report an empty value for it. Is that expected behavior? I assume the operators with an empty value do not yet support Degraded?

$ oc get clusteroperators.config.openshift.io 
NAME                                 VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                       4.1.0-0.nightly-2019-04-23-100608   True        False                    13m
cloud-credential                     4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
cluster-autoscaler                   4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
console                              4.1.0-0.nightly-2019-04-23-100608   True        False         False      13m
dns                                  4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
image-registry                       4.1.0-0.nightly-2019-04-23-100608   True        False                    15m
ingress                              4.1.0-0.nightly-2019-04-23-100608   True        False                    13m
kube-apiserver                       4.1.0-0.nightly-2019-04-23-100608   True        False         False      20m
kube-controller-manager              4.1.0-0.nightly-2019-04-23-100608   True        False         False      21m
kube-scheduler                       4.1.0-0.nightly-2019-04-23-100608   True        False         False      20m
machine-api                          4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
machine-config                       4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
marketplace                          4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
monitoring                           4.1.0-0.nightly-2019-04-23-100608   True        False                    11m
network                              4.1.0-0.nightly-2019-04-23-100608   True        False         False      23m
node-tuning                          4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
openshift-apiserver                  4.1.0-0.nightly-2019-04-23-100608   True        False         False      19m
openshift-controller-manager         4.1.0-0.nightly-2019-04-23-100608   True        False         False      21m
openshift-samples                    4.1.0-0.nightly-2019-04-23-100608   True        False                    12m
operator-lifecycle-manager           4.1.0-0.nightly-2019-04-23-100608   True        False                    21m
operator-lifecycle-manager-catalog   4.1.0-0.nightly-2019-04-23-100608   True        False                    21m
service-ca                           4.1.0-0.nightly-2019-04-23-100608   True        False                    22m
service-catalog-apiserver            4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
service-catalog-controller-manager   4.1.0-0.nightly-2019-04-23-100608   True        False                    18m
storage                              4.1.0-0.nightly-2019-04-23-100608   True        False                    18m

Comment 6 W. Trevor King 2019-04-23 14:52:46 UTC
All operators should support Degraded soon.  For example, I'm fixing the registry as part of https://github.com/openshift/cluster-image-registry-operator/pull/260

Comment 7 Anurag saxena 2019-04-23 15:29:03 UTC
(In reply to W. Trevor King from comment #6)
> All operators should support Degraded soon.  For example, I'm fixing the
> registry as part of
> https://github.com/openshift/cluster-image-registry-operator/pull/260

Understood. Thanks, Trevor!

Comment 8 Anurag saxena 2019-04-23 20:01:21 UTC
Verifying based on the fact that FAILING has now been replaced with DEGRADED in 4.1.0-0.nightly-2019-04-23-100608, and that separate bugs need to be filed on a per-operator basis by component owners, per comment 1 and discussion between @eparis, @trking, @pruan, and me.

@pruan has sent an email to aos-qe asking teams to file migration bugs targeting 4.1 for any operators that show up blank in the newest nightly build. Thanks!

Comment 10 errata-xmlrpc 2019-06-04 10:47:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

