Bug 2208962

Summary: [UI] ODF Topology. Degraded cluster does not show red canvas at the cluster level
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Daniel Osypenko <dosypenk>
Component: management-console
Assignee: Bipul Adhikari <badhikar>
Status: CLOSED ERRATA
QA Contact: Daniel Osypenko <dosypenk>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 4.13
CC: kramdoss, muagarwa, ocs-bugs, odf-bz-bot, skatiyar
Target Milestone: ---
Target Release: ODF 4.13.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: 4.13.0-207
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-06-21 15:25:39 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Daniel Osypenko 2023-05-22 08:14:18 UTC
Created attachment 1966114 [details]
overview

Description of problem (please be as detailed as possible and provide log
snippets):

The degraded cluster does not show a red canvas at the cluster level. The node displays an alert message, but the cluster canvas looks falsely healthy.
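One way to confirm the mismatch (an illustrative check, not taken from this report, assuming the rook-ceph toolbox is enabled in the openshift-storage namespace) is to compare the health Rook records against what the Topology canvas renders:

# Health as recorded on the CephCluster CR (HEALTH_WARN/HEALTH_ERR when degraded)
oc -n openshift-storage get cephcluster -o jsonpath='{.items[0].status.ceph.health}'

# Detailed reason for the degraded state, via the toolbox pod
oc -n openshift-storage rsh deploy/rook-ceph-tools ceph health detail

If these report a warning while the cluster-level canvas stays green, the degraded status is not being propagated to the top level of the Topology view.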

Version of all relevant components (if applicable):

OC version:
Client Version: 4.12.0-202208031327
Kustomize Version: v4.5.4
Server Version: 4.13.0-0.nightly-2023-05-20-014943
Kubernetes Version: v1.26.3+b404935

OCS version:
ocs-operator.v4.13.0-203.stable              OpenShift Container Storage   4.13.0-203.stable              Succeeded

Cluster version
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.13.0-0.nightly-2023-05-20-014943   True        False         6h30m   Cluster version is 4.13.0-0.nightly-2023-05-20-014943

Rook version:
rook: v4.13.0-0.e5648f0a2577b9bfd2aa256d4853dc3e8d94862a
go: go1.19.6

Ceph version:
ceph version 17.2.6-50.el9cp (c202ddb5589554af0ce43432ff07cd7ce8f35243) quincy (stable)

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
no

Is there any workaround available to the best of your knowledge?
no

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
It has happened twice; no specific trigger was found.

Can this issue be reproduced from the UI?
UI-only

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy cluster
2. Reach 82.6% storage capacity to put the cluster into a degraded state (see attachment 'overview'; a capacity check is sketched after these steps)
3. Open Storage/ Data Foundation / Topology
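For step 2, a minimal way to watch utilization approach the degraded threshold (illustrative, assuming the rook-ceph toolbox deployment exists in openshift-storage):

# Report raw and per-pool usage; repeat while filling a PVC with data
oc -n openshift-storage rsh deploy/rook-ceph-tools ceph df

Once raw usage crosses the near-full alert threshold, Ceph reports HEALTH_WARN and the storage cluster shows as degraded, as captured in attachment 'overview'.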


Actual results:

Cluster is rendered as healthy; the cluster canvas shows no alert.

Expected results:

Cluster canvas should show an alert '!' and be red-colored.

Additional info:
- The cluster is alive and will be destroyed automatically on 25.05. A kubeconfig is provided for investigation.

Comment 7 Daniel Osypenko 2023-06-06 11:38:55 UTC
Issue fixed; screen recording: https://drive.google.com/file/d/1VwIg6ri33pHY6HvgwSyShsTzFwmSo6TA/view?usp=sharing

Versions:
OC version:
Client Version: 4.12.0-202208031327
Kustomize Version: v4.5.4
Server Version: 4.13.0-0.nightly-2023-06-03-031200
Kubernetes Version: v1.26.5+7a891f0

OCS version:
ocs-operator.v4.13.0-207.stable              OpenShift Container Storage   4.13.0-207.stable              Succeeded

Cluster version
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.13.0-0.nightly-2023-06-03-031200   True        False         3d      Cluster version is 4.13.0-0.nightly-2023-06-03-031200

Rook version:
rook: v4.13.0-0.e5648f0a2577b9bfd2aa256d4853dc3e8d94862a
go: go1.19.6

Ceph version:
ceph version 17.2.6-50.el9cp (c202ddb5589554af0ce43432ff07cd7ce8f35243) quincy (stable)

Comment 9 errata-xmlrpc 2023-06-21 15:25:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742