Bug 2001268 - openshift-console operator incorrectly reports Available=false
Summary: openshift-console operator incorrectly reports Available=false
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Management Console
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.8.z
Assignee: Jakub Hadvig
QA Contact: Yadan Pei
URL:
Whiteboard:
Depends On: 1994443
Blocks:
 
Reported: 2021-09-05 02:25 UTC by OpenShift BugZilla Robot
Modified: 2021-09-21 08:02 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-09-21 08:02:39 UTC
Target Upstream Version:
Embargoed:


Attachments: (none)


Links:
Github openshift console-operator pull 583 (last updated 2021-09-06 09:56:20 UTC)
Red Hat Product Errata RHBA-2021:3511 (last updated 2021-09-21 08:02:58 UTC)

Description OpenShift BugZilla Robot 2021-09-05 02:25:18 UTC
+++ This bug was initially created as a clone of Bug #1994443 +++

From https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.9-e2e-aws-upgrade/1427185491152736256:

```
2 unexpected clusteroperator state transitions during e2e test run

Aug 16 09:39:13.626 - 14s   E clusteroperator/console condition/Available status/False reason/DeploymentAvailable: 1 replicas ready at version 4.9.0-0.nightly-2021-08-16-082143
1 tests failed during this blip (2021-08-16 09:39:13.626741252 +0000 UTC to 2021-08-16 09:39:13.626741252 +0000 UTC): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial]
```

Checking build.log:
```
Aug 16 09:23:54.000 I ns/openshift-console-operator deployment/console-operator reason/OperatorStatusChanged Status for clusteroperator/console changed: status.versions changed from [{"operator" "4.9.0-0.nightly-2021-08-14-065522"}] to [{"operator" "4.9.0-0.nightly-2021-08-16-082143"}]
Aug 16 09:23:54.000 I ns/openshift-console-operator deployment/console-operator reason/OperatorVersionChanged clusteroperator/console version "operator" changed from "4.9.0-0.nightly-2021-08-14-065522" to "4.9.0-0.nightly-2021-08-16-082143"
Aug 16 09:23:55.009 I clusteroperator/console versions: operator 4.9.0-0.nightly-2021-08-14-065522 -> 4.9.0-0.nightly-2021-08-16-082143
Aug 16 09:32:30.581 E ns/openshift-console-operator pod/console-operator-7779b75d9c-xshrk node/ip-10-0-130-236.us-west-1.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error 4.980443       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"b2c8ac7a-74ec-4844-8a1a-5d72605535b6", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: status.versions changed from [{"operator" "4.9.0-0.nightly-2021-08-14-065522"}] to [{"operator" "4.9.0-0.nightly-2021-08-16-082143"}]\nI0816 09:32:26.311529       1 cmd.go:88] Received SIGTERM or SIGINT signal, shutting down controller.\nI0816 09:32:26.315043       1 reflector.go:225] Stopping reflector *v1.ClusterOperator (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0816 09:32:26.315149       1 reflector.go:225] Stopping reflector *v1.Infrastructure (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0816 09:32:26.315214       1 reflector.go:225] Stopping reflector *v1.Proxy (10m0s) from github.com/openshift/client-go/config/informers/externalversions/factory.go:101\nI0816 09:32:26.315279       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0816 09:32:26.315355       1 reflector.go:225] Stopping reflector *v1.ConfigMap (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0816 09:32:26.315352       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0816 09:32:26.315479       1 reflector.go:225] Stopping reflector *v1.Route (10m0s) from github.com/openshift/client-go/route/informers/externalversions/factory.go:101\nI0816 09:32:26.315559       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nI0816 09:32:26.315723       1 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134\nW0816 09:32:26.315743       1 builder.go:99] graceful termination failed, controllers failed with error: stopped\n
Aug 16 09:39:13.000 I ns/openshift-console-operator deployment/console-operator reason/OperatorStatusChanged Status for clusteroperator/console changed: Available changed from True to False ("DeploymentAvailable: 1 replicas ready at version 4.9.0-0.nightly-2021-08-16-082143")
Aug 16 09:39:13.626 E clusteroperator/console condition/Available status/False reason/Deployment_FailedUpdate changed: DeploymentAvailable: 1 replicas ready at version 4.9.0-0.nightly-2021-08-16-082143
Aug 16 09:39:13.626 - 14s   E clusteroperator/console condition/Available status/False reason/DeploymentAvailable: 1 replicas ready at version 4.9.0-0.nightly-2021-08-16-082143
Aug 16 09:39:28.000 I ns/openshift-console-operator deployment/console-operator reason/OperatorStatusChanged Status for clusteroperator/console changed: Available changed from False to True ("All is well")
Aug 16 09:39:28.166 W clusteroperator/console condition/Available status/True reason/AsExpected changed: All is well
```

Important log line:
```
Aug 16 09:39:13.626 E clusteroperator/console condition/Available status/False reason/Deployment_FailedUpdate changed: DeploymentAvailable: 1 replicas ready at version 4.9.0-0.nightly-2021-08-16-082143
```

From loki logs (https://grafana-loki.ci.openshift.org/explore?orgId=1&left=%5B%221629064800000%22,%221629151199000%22,%22Grafana%20Cloud%22,%7B%22expr%22:%22%7Binvoker%3D%5C%22openshift-internal-ci%2Fperiodic-ci-openshift-release-master-nightly-4.9-e2e-aws-upgrade%2F1427185491152736256%5C%22%7D%20%7C%20unpack%20%7C%20namespace%3D%5C%22openshift-console-operator%5C%22%22%7D%5D):

```
I0816 09:39:28.135226 1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"b2c8ac7a-74ec-4844-8a1a-5d72605535b6", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from False to True ("All is well")
I0816 09:39:28.119547 1 status_controller.go:211] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2021-08-16T09:03:38Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2021-08-16T09:04:17Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2021-08-16T09:39:28Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2021-08-16T09:03:47Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}

E0816 09:39:15.008547 1 status.go:78] DeploymentAvailable FailedUpdate 1 replicas ready at version 4.9.0-0.nightly-2021-08-16-082143
I0816 09:39:13.600090 1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"b2c8ac7a-74ec-4844-8a1a-5d72605535b6", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: Available changed from True to False ("DeploymentAvailable: 1 replicas ready at version 4.9.0-0.nightly-2021-08-16-082143")

I0816 09:39:13.561850 1 status_controller.go:211] clusteroperator/console diff {"status":{"conditions":[{"lastTransitionTime":"2021-08-16T09:03:38Z","message":"All is well","reason":"AsExpected","status":"False","type":"Degraded"},{"lastTransitionTime":"2021-08-16T09:04:17Z","message":"All is well","reason":"AsExpected","status":"False","type":"Progressing"},{"lastTransitionTime":"2021-08-16T09:39:13Z","message":"DeploymentAvailable: 1 replicas ready at version 4.9.0-0.nightly-2021-08-16-082143","reason":"Deployment_FailedUpdate","status":"False","type":"Available"},{"lastTransitionTime":"2021-08-16T09:03:47Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}

E0816 09:39:13.461120 1 status.go:78] DeploymentAvailable FailedUpdate 1 replicas ready at version 4.9.0-0.nightly-2021-08-16-082143

I0816 09:23:54.980443 1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-console-operator", Name:"console-operator", UID:"b2c8ac7a-74ec-4844-8a1a-5d72605535b6", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/console changed: status.versions changed from [{"operator" "4.9.0-0.nightly-2021-08-14-065522"}] to [{"operator" "4.9.0-0.nightly-2021-08-16-082143"}]
```

Important log line (more precise time of the FailedUpdate log line):
```
E0816 09:39:13.461120 1 status.go:78] DeploymentAvailable FailedUpdate 1 replicas ready at version 4.9.0-0.nightly-2021-08-16-082143
```

Meaning only a single replica set pod was running and ready, but not yet at the expected version 4.9.0-0.nightly-2021-08-16-082143. See https://github.com/openshift/console-operator/blob/293fc1630197ff8afa54e86a7c5feb53fa62a3a2/pkg/console/operator/sync_v400.go#L158-L160 for more details. In short, IsReadyAndUpdated (https://github.com/openshift/console-operator/blob/293fc1630197ff8afa54e86a7c5feb53fa62a3a2/pkg/console/subresource/deployment/deployment.go#L537) checks whether the console pod is updated and ready, and it requires the new replica set to be fully rolled out (i.e. exactly one pod from the new RS running and ready), which was not yet the case at that point.
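For illustration, here is a minimal Go sketch of the kind of gate described above; this is not the console-operator's actual code, and the helper name and exact status-field comparisons are assumptions:

```go
// Illustrative sketch only; not the console-operator's implementation.
// The helper name and the exact status-field comparisons are assumptions.
package deploymentcheck

import appsv1 "k8s.io/api/apps/v1"

// readyAndUpdated returns true only when every desired replica has been
// rolled to the new pod template and is ready. During a rolling update a
// still-ready pod from the old ReplicaSet does not satisfy this, which is
// why a gate like this can fail even though the console keeps serving.
func readyAndUpdated(d *appsv1.Deployment) bool {
	if d.Spec.Replicas == nil {
		return false
	}
	desired := *d.Spec.Replicas
	return d.Status.ObservedGeneration >= d.Generation &&
		d.Status.UpdatedReplicas == desired &&
		d.Status.ReadyReplicas == desired
}
```

In the rollout window observed above, the single ready pod belonged to the old ReplicaSet, so a gate of this shape failed and the operator translated that into Available=False.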

Checking kubelet logs for running console instances of any console RS:
```
ip-10-0-160-242.us-west-1.compute.internal:12077:Aug 16 09:03:45.192057 ip-10-0-160-242 hyperkube[1368]: I0816 09:03:45.192019    1368 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-7ccf696fc8-x2qvc]
ip-10-0-160-242.us-west-1.compute.internal:25665:Aug 16 09:45:54.011355 ip-10-0-160-242 hyperkube[1458]: I0816 09:45:54.011327    1458 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-7ccf696fc8-57czc]
ip-10-0-252-206.us-west-1.compute.internal:9394:Aug 16 09:03:41.393224 ip-10-0-252-206 hyperkube[1368]: I0816 09:03:41.393154    1368 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-6499869746-6qrmm]
ip-10-0-252-206.us-west-1.compute.internal:12240:Aug 16 09:32:26.441870 ip-10-0-252-206 hyperkube[1368]: I0816 09:32:26.441844    1368 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-7ccf696fc8-wg49v]
ip-10-0-130-236.us-west-1.compute.internal:8160:Aug 16 09:03:41.365724 ip-10-0-130-236 hyperkube[1366]: I0816 09:03:41.365692    1366 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-6499869746-bmrp8]
ip-10-0-130-236.us-west-1.compute.internal:8275:Aug 16 09:04:15.570701 ip-10-0-130-236 hyperkube[1366]: I0816 09:04:15.570675    1366 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-7ccf696fc8-nsw68]
ip-10-0-130-236.us-west-1.compute.internal:17895:Aug 16 09:39:13.123526 ip-10-0-130-236 hyperkube[1454]: I0816 09:39:13.123510    1454 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-7ccf696fc8-rvql7]
```

Sorted by ADD timestamp:
```
Aug 16 09:03:41.365724 ip-10-0-130-236 hyperkube[1366]: I0816 09:03:41.365692    1366 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-6499869746-bmrp8]
Aug 16 09:03:41.393224 ip-10-0-252-206 hyperkube[1368]: I0816 09:03:41.393154    1368 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-6499869746-6qrmm]
Aug 16 09:03:45.192057 ip-10-0-160-242 hyperkube[1368]: I0816 09:03:45.192019    1368 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-7ccf696fc8-x2qvc]
Aug 16 09:04:15.570701 ip-10-0-130-236 hyperkube[1366]: I0816 09:04:15.570675    1366 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-7ccf696fc8-nsw68]
Aug 16 09:32:26.441870 ip-10-0-252-206 hyperkube[1368]: I0816 09:32:26.441844    1368 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-7ccf696fc8-wg49v]
Aug 16 09:39:13.123526 ip-10-0-130-236 hyperkube[1454]: I0816 09:39:13.123510    1454 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-7ccf696fc8-rvql7]
Aug 16 09:45:54.011355 ip-10-0-160-242 hyperkube[1458]: I0816 09:45:54.011327    1458 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-7ccf696fc8-57czc]
```

Checking which pods are running at which intervals:
```
Aug 16 09:32:26.000 I ns/openshift-console replicaset/console-7ccf696fc8 reason/SuccessfulCreate Created pod: console-7ccf696fc8-wg49v
Aug 16 09:32:26.410 I ns/openshift-console pod/console-7ccf696fc8-wg49v node/ reason/Created
Aug 16 09:32:26.420 I ns/openshift-console pod/console-7ccf696fc8-wg49v node/ip-10-0-252-206.us-west-1.compute.internal reason/Scheduled
Aug 16 09:32:28.000 I ns/openshift-console pod/console-7ccf696fc8-wg49v node/ip-10-0-252-206.us-west-1.compute.internal container/console reason/Created
Aug 16 09:32:28.000 I ns/openshift-console pod/console-7ccf696fc8-wg49v node/ip-10-0-252-206.us-west-1.compute.internal container/console reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c565063ed198885b27d0a226c67cdbfbd14981172f7f9e06632b186dfe54189a
Aug 16 09:32:28.000 I ns/openshift-console pod/console-7ccf696fc8-wg49v reason/AddedInterface Add eth0 [10.130.0.56/23] from openshift-sdn
Aug 16 09:32:29.000 I ns/openshift-console pod/console-7ccf696fc8-wg49v node/ip-10-0-252-206.us-west-1.compute.internal container/console reason/Started
Aug 16 09:32:29.406 I ns/openshift-console pod/console-7ccf696fc8-wg49v node/ip-10-0-252-206.us-west-1.compute.internal container/console reason/ContainerStart duration/3.00s
Aug 16 09:32:29.555 I ns/openshift-console pod/console-7ccf696fc8-wg49v node/ip-10-0-252-206.us-west-1.compute.internal container/console reason/Ready
Aug 16 09:45:53.000 I ns/openshift-console pod/console-7ccf696fc8-wg49v node/ip-10-0-252-206.us-west-1.compute.internal container/console reason/Killing
Aug 16 09:45:53.959 I ns/openshift-console pod/console-7ccf696fc8-wg49v node/ip-10-0-252-206.us-west-1.compute.internal reason/GracefulDelete duration/40s
Aug 16 09:46:20.106 E ns/openshift-console pod/console-7ccf696fc8-wg49v node/ip-10-0-252-206.us-west-1.compute.internal container/console reason/ContainerExit code/2 cause/Error W0816 09:32:29.078663       1 main.go:206] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0816 09:32:29.078769       1 main.go:278] cookies are secure!\nI0816 09:32:29.226020       1 main.go:660] Binding to [::]:8443...\nI0816 09:32:29.226114       1 main.go:662] using TLS\n
Aug 16 09:46:20.148 I ns/openshift-console pod/console-7ccf696fc8-wg49v node/ip-10-0-252-206.us-west-1.compute.internal reason/Deleted
```

```
Aug 16 09:39:13.000 I ns/openshift-console replicaset/console-7ccf696fc8 reason/SuccessfulCreate Created pod: console-7ccf696fc8-rvql7
Aug 16 09:39:13.083 I ns/openshift-console pod/console-7ccf696fc8-rvql7 node/ip-10-0-130-236.us-west-1.compute.internal reason/Scheduled
Aug 16 09:39:13.099 I ns/openshift-console pod/console-7ccf696fc8-rvql7 node/ reason/Created
Aug 16 09:39:16.000 I ns/openshift-console pod/console-7ccf696fc8-rvql7 reason/AddedInterface Add eth0 [10.129.0.21/23] from openshift-sdn
Aug 16 09:39:18.000 I ns/openshift-console pod/console-7ccf696fc8-rvql7 node/ip-10-0-130-236.us-west-1.compute.internal container/console reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c565063ed198885b27d0a226c67cdbfbd14981172f7f9e06632b186dfe54189a
Aug 16 09:39:24.000 I ns/openshift-console pod/console-7ccf696fc8-rvql7 node/ip-10-0-130-236.us-west-1.compute.internal container/console reason/Created
Aug 16 09:39:24.000 I ns/openshift-console pod/console-7ccf696fc8-rvql7 node/ip-10-0-130-236.us-west-1.compute.internal container/console reason/Started
Aug 16 09:39:27.513 I ns/openshift-console pod/console-7ccf696fc8-rvql7 node/ip-10-0-130-236.us-west-1.compute.internal container/console reason/ContainerStart duration/11.00s
Aug 16 09:39:27.902 I ns/openshift-console pod/console-7ccf696fc8-rvql7 node/ip-10-0-130-236.us-west-1.compute.internal container/console reason/Ready
```

```
Aug 16 09:03:52.573 W ns/openshift-console pod/console-7ccf696fc8-nsw68 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Aug 16 09:32:25.000 I ns/openshift-console pod/console-7ccf696fc8-nsw68 node/ip-10-0-130-236.us-west-1.compute.internal container/console reason/Killing
Aug 16 09:32:25.892 I ns/openshift-console pod/console-7ccf696fc8-nsw68 node/ip-10-0-130-236.us-west-1.compute.internal reason/GracefulDelete duration/40s
Aug 16 09:32:52.588 E ns/openshift-console pod/console-7ccf696fc8-nsw68 node/ip-10-0-130-236.us-west-1.compute.internal container/console reason/ContainerExit code/2 cause/Error W0816 09:04:17.732932       1 main.go:206] Flag inactivity-timeout is set to less then 300 seconds and will be ignored!\nI0816 09:04:17.733007       1 main.go:278] cookies are secure!\nI0816 09:04:17.783515       1 main.go:660] Binding to [::]:8443...\nI0816 09:04:17.783535       1 main.go:662] using TLS\n
Aug 16 09:32:52.652 I ns/openshift-console pod/console-7ccf696fc8-nsw68 node/ip-10-0-130-236.us-west-1.compute.internal reason/Deleted
```
Plus the corresponding kubelet logs for console-7ccf696fc8-nsw68:
```
ip-10-0-130-236.us-west-1.compute.internal:8275:Aug 16 09:04:15.570701 ip-10-0-130-236 hyperkube[1366]: I0816 09:04:15.570675    1366 kubelet.go:2065] "SyncLoop ADD" source="api" pods=[openshift-console/console-7ccf696fc8-nsw68]
ip-10-0-130-236.us-west-1.compute.internal:8335:Aug 16 09:04:17.692046 ip-10-0-130-236 crio[1326]: time="2021-08-16 09:04:17.692004836Z" level=info msg="Started container d07e7f817d221af35df7c029c7bec6ccda30834f8e7a2580b4aebf79819de047: openshift-console/console-7ccf696fc8-nsw68/console" id=7c939d30-36fa-4814-8f85-019faf12df59 name=/runtime.v1alpha2.RuntimeService/StartContainer
ip-10-0-130-236.us-west-1.compute.internal:8336:Aug 16 09:04:18.500966 ip-10-0-130-236 hyperkube[1366]: I0816 09:04:18.500935    1366 kubelet.go:2103] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7ccf696fc8-nsw68" event=&{ID:f16c3f18-f6be-42a2-baba-4fb470d8df93 Type:ContainerStarted Data:d07e7f817d221af35df7c029c7bec6ccda30834f8e7a2580b4aebf79819de047}
ip-10-0-130-236.us-west-1.compute.internal:8337:Aug 16 09:04:18.500966 ip-10-0-130-236 hyperkube[1366]: I0816 09:04:18.500981    1366 kubelet.go:2103] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7ccf696fc8-nsw68" event=&{ID:f16c3f18-f6be-42a2-baba-4fb470d8df93 Type:ContainerStarted Data:e41790cf8a7a353c58db642ce948912a3bfbec5019cdc07d86b39e364c3f0301}
ip-10-0-130-236.us-west-1.compute.internal:8338:Aug 16 09:04:18.501514 ip-10-0-130-236 hyperkube[1366]: I0816 09:04:18.501290    1366 kubelet.go:2175] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7ccf696fc8-nsw68"
ip-10-0-130-236.us-west-1.compute.internal:8339:Aug 16 09:04:18.504954 ip-10-0-130-236 hyperkube[1366]: I0816 09:04:18.504932    1366 kubelet.go:2175] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7ccf696fc8-nsw68"
```

Based on the logs one can deduce:
- console-7ccf696fc8-nsw68 was running and ready in <09:04:18.504954; 09:32:52.588>
- console-7ccf696fc8-wg49v was running and ready in <09:32:29.555; 09:46:20.106>
- console-7ccf696fc8-rvql7 was running and ready in <09:39:27.902; ???>

Summarizing the important timestamps:
- 09:23:54.980443: operator version changed from 4.9.0-0.nightly-2021-08-14-065522 to 4.9.0-0.nightly-2021-08-16-082143
- <09:32:29.555; 09:46:20.106>: console-7ccf696fc8-wg49v was running and ready
- 09:39:13.000: created console-7ccf696fc8-rvql7
- <09:39:27.902; ???>: console-7ccf696fc8-rvql7 was running and ready
- 09:39:13.461120: reported "DeploymentAvailable FailedUpdate 1 replicas ready at version 4.9.0-0.nightly-2021-08-16-082143"

So between 09:04:18.504954 and 09:39:27.902 there was always at least one console pod running and ready.

From https://github.com/openshift/api/blob/a6156965faae5ce117e3cd3735981a3fc0e27e27/config/v1/types_cluster_operator.go#L146:
```
// Available indicates that the operand (eg: openshift-apiserver for the
// openshift-apiserver-operator), is functional and available in the cluster.
// Available=False means at least part of the component is non-functional,
// and that the condition requires immediate administrator intervention.
```

So flipping condition/Available to False in this situation is incorrect.

--- Additional comment from jchaloup on 2021-08-17 09:42:44 UTC ---

Other instances of the flake: https://search.ci.openshift.org/?search=%5C%5Bbz-Management+Console%5C%5D+clusteroperator%2Fconsole+should+not+change+condition%2FAvailable&maxAge=24h&context=1&type=junit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job

--- Additional comment from jchaloup on 2021-08-17 09:51:20 UTC ---

Checking whether the operand has been upgraded, or is still in the process of upgrading, belongs under condition/Progressing. From https://github.com/openshift/api/blob/a6156965faae5ce117e3cd3735981a3fc0e27e27/config/v1/types_cluster_operator.go#L152-L159:
```
// Progressing indicates that the operator is actively rolling out new code,
// propagating config changes, or otherwise moving from one steady state to
// another.
```
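To make the distinction concrete, here is a hedged Go sketch (the helper and condition names are mine, not the operator's actual API) of how the two conditions could be derived separately, matching the definitions quoted above: Available stays True as long as at least one console replica is ready, while an unfinished rollout only drives Progressing:

```go
// Illustrative sketch only; the helper and condition names are assumptions,
// not the console-operator's actual API.
package conditions

import (
	operatorv1 "github.com/openshift/api/operator/v1"
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// computeConditions maps Deployment status onto the two conditions per the
// definitions quoted above: Available reflects "is the operand serving at
// all", Progressing reflects "is a rollout still in flight".
func computeConditions(d *appsv1.Deployment) []operatorv1.OperatorCondition {
	desired := int32(1)
	if d.Spec.Replicas != nil {
		desired = *d.Spec.Replicas
	}

	available := operatorv1.OperatorCondition{
		Type:               "DeploymentAvailable",
		Status:             operatorv1.ConditionFalse,
		LastTransitionTime: metav1.Now(),
	}
	if d.Status.ReadyReplicas > 0 {
		// At least one replica is serving, whether from the old or new RS.
		available.Status = operatorv1.ConditionTrue
	}

	progressing := operatorv1.OperatorCondition{
		Type:               "DeploymentProgressing",
		Status:             operatorv1.ConditionFalse,
		LastTransitionTime: metav1.Now(),
	}
	if d.Status.UpdatedReplicas < desired || d.Status.ReadyReplicas < desired {
		// Rollout not finished yet: report it here, not via Available=False.
		progressing.Status = operatorv1.ConditionTrue
	}

	return []operatorv1.OperatorCondition{available, progressing}
}
```

Whether these exact thresholds are right is a separate question; the point, per the API documentation quoted in this comment and the previous one, is that an in-flight rollout should surface through Progressing rather than by dropping Available.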

Comment 5 errata-xmlrpc 2021-09-21 08:02:39 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.8.12 bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3511
