Bug 2092168

Summary: clusteroperator status filter doesn't match all values in Status column
Product: OpenShift Container Platform
Reporter: OpenShift BugZilla Robot <openshift-bugzilla-robot>
Component: Management Console
Assignee: Jakub Hadvig <jhadvig>
Status: CLOSED ERRATA
QA Contact: Yadan Pei <yapei>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 4.11
CC: yapei
Target Milestone: ---
Target Release: 4.10.z
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-06-13 14:38:56 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2091854
Bug Blocks: 2094244

Description OpenShift BugZilla Robot 2022-06-01 01:50:52 UTC
+++ This bug was initially created as a clone of Bug #2091854 +++

Created attachment 1885427 [details]
Status value 'Unavailable' not found in status filter

Description of problem:
The clusteroperator Status column shows some status values that cannot be found in the Status filter dropdown, so those clusteroperators cannot be filtered on.

Version-Release number of selected component (if applicable):
4.11.0-0.nightly-2022-05-25-193227

How reproducible:
Always

Steps to Reproduce:
1. Put the monitoring clusteroperator into Degraded: True, Available: False, Progressing: False status (one possible way is sketched after these steps)
$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-05-25-193227   True        False         5h39m   Error while reconciling 4.11.0-0.nightly-2022-05-25-193227: the cluster operator monitoring has not yet successfully rolled out

$ oc get co
NAME                                       VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h40m   
baremetal                                  4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h54m   
cloud-controller-manager                   4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h56m   
cloud-credential                           4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h56m   
cluster-autoscaler                         4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h54m   
config-operator                            4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h55m   
console                                    4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h45m   
csi-snapshot-controller                    4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h55m   
dns                                        4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h54m   
etcd                                       4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h53m   
image-registry                             4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h49m   
ingress                                    4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h50m   
insights                                   4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h48m   
kube-apiserver                             4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h51m   
kube-controller-manager                    4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h53m   
kube-scheduler                             4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h52m   
kube-storage-version-migrator              4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h55m   
machine-api                                4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h51m   
machine-approver                           4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h55m   
machine-config                             4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h53m   
marketplace                                4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h54m   
monitoring                                 4.11.0-0.nightly-2022-05-25-193227   False       False         True       3h41m   Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error.
network                                    4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h56m   
node-tuning                                4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h54m   
openshift-apiserver                        4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h50m   
openshift-controller-manager               4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h52m   
openshift-samples                          4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h50m   
operator-lifecycle-manager                 4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h55m   
operator-lifecycle-manager-catalog         4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h55m   
operator-lifecycle-manager-packageserver   4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h50m   
service-ca                                 4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h55m   
storage                                    4.11.0-0.nightly-2022-05-25-193227   True        False         False      5h55m   

$ oc get co monitoring -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
  creationTimestamp: "2022-05-31T00:39:58Z"
  generation: 1
  name: monitoring
  ownerReferences:
  - apiVersion: config.openshift.io/v1
    kind: ClusterVersion
    name: version
    uid: 61056d18-5a91-4772-b6d8-49c6dfc3ca54
  resourceVersion: "144271"
  uid: 7c720035-e75e-4a0c-a91f-369c0ca19b35
spec: {}
status:
  conditions:
  - lastTransitionTime: "2022-05-31T02:56:14Z"
    message: 'Failed to rollout the stack. Error: updating prometheus-k8s: reconciling
      Prometheus object failed: updating Prometheus object failed: Prometheus.monitoring.coreos.com
      "k8s" is invalid: spec.retentionSize: Invalid value: "10": spec.retentionSize
      in body should match ''(^0|([0-9]*[.])?[0-9]+((K|M|G|T|E|P)i?)?B)$'''
    reason: UpdatingPrometheusK8SFailed
    status: "True"
    type: Degraded
  - lastTransitionTime: "2022-05-31T00:49:29Z"
    status: "True"
    type: Upgradeable
  - lastTransitionTime: "2022-05-31T02:56:14Z"
    message: Rollout of the monitoring stack failed and is degraded. Please investigate
      the degraded status error.
    reason: UpdatingPrometheusK8SFailed
    status: "False"
    type: Available
  - lastTransitionTime: "2022-05-31T06:37:19Z"
    message: Rollout of the monitoring stack failed and is degraded. Please investigate
      the degraded status error.
    reason: UpdatingPrometheusK8SFailed
    status: "False"
    type: Progressing
  extension: null
  relatedObjects:
  - group: ""
    name: openshift-monitoring
    resource: namespaces
  - group: ""
    name: openshift-user-workload-monitoring
    resource: namespaces
  - group: monitoring.coreos.com
    name: ""
    resource: servicemonitors
  - group: monitoring.coreos.com
    name: ""
    resource: podmonitors
  - group: monitoring.coreos.com
    name: ""
    resource: prometheusrules
  - group: monitoring.coreos.com
    name: ""
    resource: alertmanagers
  - group: monitoring.coreos.com
    name: ""
    resource: prometheuses
  - group: monitoring.coreos.com
    name: ""
    resource: thanosrulers
  - group: monitoring.coreos.com
    name: ""
    resource: alertmanagerconfigs
  versions:
  - name: operator
    version: 4.11.0-0.nightly-2022-05-25-193227
2. View the cluster operator statuses on the Cluster Settings -> ClusterOperators page
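
One possible way to reach the state in step 1 (an assumption based on the UpdatingPrometheusK8SFailed reason and the retentionSize validation error in the YAML above, not necessarily how this cluster got there) is to set an invalid retentionSize, such as "10" with no unit suffix, in the cluster-monitoring-config ConfigMap:

$ cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      retentionSize: "10"
EOF

A value such as "10GiB" satisfies the pattern (^0|([0-9]*[.])?[0-9]+((K|M|G|T|E|P)i?)?B)$ from the error message; a bare "10" is rejected by the Prometheus CRD, which drives the monitoring operator to Available=False and Degraded=True.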

Actual results:
2. co/monitoring is shown with the Unavailable status, but the Status filter dropdown only offers 'Available', 'Progressing', 'Degraded', 'Cannot update' and 'Unknown', so co/monitoring cannot be filtered by Status.

Expected results:
2. The ClusterOperators Status filter should offer every value that can appear in the Status column.

Additional info:
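A minimal sketch of the suspected mismatch, using hypothetical names (getStatus, statusFilterItems) that may not match the actual console source: the Status column value is derived from the operator's conditions, while the row filter is built from a fixed list that does not include 'Unavailable'.

// Hypothetical TypeScript sketch; names and ordering are assumptions,
// chosen to reproduce the behaviour observed in this bug.
type Condition = { type: string; status: 'True' | 'False' | 'Unknown' };

// Status column: Available=False renders as 'Unavailable', which is why
// co/monitoring (Available=False, Degraded=True) shows that value.
const getStatus = (conditions: Condition[]): string => {
  const get = (t: string) => conditions.find((c) => c.type === t)?.status;
  if (get('Available') === 'False') return 'Unavailable';
  if (get('Degraded') === 'True') return 'Degraded';
  if (get('Progressing') === 'True') return 'Progressing';
  if (get('Available') === 'True') return 'Available';
  return 'Unknown';
};

// Row filter: 'Unavailable' is missing, so rows rendering that status can
// never be selected through the Status filter.
const statusFilterItems = [
  'Available',
  'Progressing',
  'Degraded',
  'Cannot update',
  'Unknown',
  // 'Unavailable' <- the value missing in this bug
];

Deriving the filter list from (or keeping it in sync with) every value getStatus can return would prevent this class of mismatch.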

Comment 5 errata-xmlrpc 2022-06-13 14:38:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.10.18 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:4944