Bug 2208962 - [UI] ODF Topology. Degraded cluster doesn't show red canvas at cluster level
Summary: [UI] ODF Topology. Degraded cluster doesn't show red canvas at cluster level
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: management-console
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.13.0
Assignee: Bipul Adhikari
QA Contact: Daniel Osypenko
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-05-22 08:14 UTC by Daniel Osypenko
Modified: 2023-08-09 16:46 UTC
CC List: 5 users

Fixed In Version: 4.13.0-207
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-06-21 15:25:39 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github red-hat-storage odf-console pull 859 0 None Merged Fixes storageCluster status when Alerts are firing 2023-05-29 06:20:56 UTC
Github red-hat-storage odf-console pull 875 0 None Merged [release-4.13] Bug 2208962: Fixes storageCluster status when Alerts are firing 2023-05-29 06:21:11 UTC
Github red-hat-storage odf-console pull 876 0 None Merged [release-4.13-compatibility] Bug 2208962: Fixes storageCluster status when Alerts are firing 2023-05-29 06:21:11 UTC
Red Hat Product Errata RHBA-2023:3742 0 None None None 2023-06-21 15:25:52 UTC

Description Daniel Osypenko 2023-05-22 08:14:18 UTC
Created attachment 1966114: overview

Description of problem (please be as detailed as possible and provide log
snippets):

A degraded cluster does not show a red canvas at the cluster level. The node carries an alert message, but the cluster canvas looks falsely healthy.
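
To confirm that alerts are actually firing while the Topology canvas still looks healthy, the in-cluster Alertmanager can be queried directly. A minimal sketch, assuming the stock openshift-monitoring stack (route name alertmanager-main) and a logged-in oc session:

# List the names of currently firing alerts via the Alertmanager v2 API
TOKEN=$(oc whoami -t)
HOST=$(oc -n openshift-monitoring get route alertmanager-main -o jsonpath='{.spec.host}')
curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v2/alerts" \
  | jq -r '.[] | select(.status.state == "active") | .labels.alertname'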

Version of all relevant components (if applicable):

OC version:
Client Version: 4.12.0-202208031327
Kustomize Version: v4.5.4
Server Version: 4.13.0-0.nightly-2023-05-20-014943
Kubernetes Version: v1.26.3+b404935

OCS version:
ocs-operator.v4.13.0-203.stable              OpenShift Container Storage   4.13.0-203.stable              Succeeded

Cluster version
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.13.0-0.nightly-2023-05-20-014943   True        False         6h30m   Cluster version is 4.13.0-0.nightly-2023-05-20-014943

Rook version:
rook: v4.13.0-0.e5648f0a2577b9bfd2aa256d4853dc3e8d94862a
go: go1.19.6

Ceph version:
ceph version 17.2.6-50.el9cp (c202ddb5589554af0ce43432ff07cd7ce8f35243) quincy (stable)
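
For reference, the versions above can be collected with commands along these lines (a sketch; the rook and ceph queries assume the operator image ships the rook binary on its PATH and that the rook-ceph-tools toolbox is enabled in openshift-storage):

oc version
oc get clusterversion
oc get csv -n openshift-storage
oc -n openshift-storage exec deploy/rook-ceph-operator -- rook version
oc -n openshift-storage exec deploy/rook-ceph-tools -- ceph version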

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
no

Is there any workaround available to the best of your knowledge?
no

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
It has happened twice; no specific trigger was found.

Can this issue be reproduced from the UI?
UI-only

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy a cluster
2. Reach 82.6% storage capacity so the cluster enters a degraded state (see the 'overview' attachment)
3. Open Storage / Data Foundation / Topology (the degraded state can also be cross-checked from the CLI, as sketched below)
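
A minimal sketch for cross-checking the degraded state from the CLI before opening the Topology view, assuming the openshift-storage namespace and an enabled rook-ceph-tools toolbox:

# StorageCluster phase as reported by the operator
oc get storagecluster -n openshift-storage
# Ceph health as recorded on the CephCluster resource
oc get cephcluster -n openshift-storage -o jsonpath='{.items[0].status.ceph.health}'
# Raw usage and health detail straight from Ceph
oc -n openshift-storage exec deploy/rook-ceph-tools -- ceph df
oc -n openshift-storage exec deploy/rook-ceph-tools -- ceph health detail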


Actual results:

Cluster is rendered as healthy; the canvas shows no alert '!' and is not red-colored.

Expected results:

Cluster canvas should show an alert '!' and be red-colored.

Additional info:
The cluster is alive and will be destroyed automatically on 25.05. A kubeconfig is provided for investigation; a sketch for using it follows.
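
A minimal sketch for connecting with the provided kubeconfig (the path below is hypothetical) and jumping to the affected view:

export KUBECONFIG=/path/to/provided/kubeconfig  # hypothetical path
oc whoami --show-console  # prints the console URL; then navigate to Storage / Data Foundation / Topology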

Comment 7 Daniel Osypenko 2023-06-06 11:38:55 UTC
Issue fixed; screen recording: https://drive.google.com/file/d/1VwIg6ri33pHY6HvgwSyShsTzFwmSo6TA/view?usp=sharing

Versions:
OC version:
Client Version: 4.12.0-202208031327
Kustomize Version: v4.5.4
Server Version: 4.13.0-0.nightly-2023-06-03-031200
Kubernetes Version: v1.26.5+7a891f0

OCS version:
ocs-operator.v4.13.0-207.stable              OpenShift Container Storage   4.13.0-207.stable              Succeeded

Cluster version
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.13.0-0.nightly-2023-06-03-031200   True        False         3d      Cluster version is 4.13.0-0.nightly-2023-06-03-031200

Rook version:
rook: v4.13.0-0.e5648f0a2577b9bfd2aa256d4853dc3e8d94862a
go: go1.19.6

Ceph version:
ceph version 17.2.6-50.el9cp (c202ddb5589554af0ce43432ff07cd7ce8f35243) quincy (stable)

Comment 9 errata-xmlrpc 2023-06-21 15:25:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.0 enhancement and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3742

