Bug 2001539 - [UI] ODF Overview showing two different status for the same storage system
Summary: [UI] ODF Overview showing two different status for the same storage system
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-operator
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ODF 4.9.0
Assignee: umanga
QA Contact: Mugdha Soni
URL:
Whiteboard:
Duplicates: 2012722
Depends On:
Blocks: 2019652
 
Reported: 2021-09-06 10:37 UTC by Jilju Joy
Modified: 2023-08-09 17:00 UTC
CC List: 13 users

Fixed In Version: v4.9.0-230.ci
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-12-13 17:46:04 UTC
Embargoed:


Attachments
Screen recording which shows the issue (3.16 MB, video/webm)
2021-09-06 10:37 UTC, Jilju Joy


Links
Github red-hat-storage ocs-operator pull 1386 (open): add managedBy label only for mcg-standalone (last updated 2021-10-26 14:01:03 UTC)
Github red-hat-storage ocs-operator pull 1400 (merged): Bug 2019652: [release-4.9] add managedBy label only for mcg-standalone (last updated 2021-11-09 11:59:05 UTC)
Github red-hat-storage odf-console pull 29 (last updated 2021-09-08 16:45:10 UTC)
Red Hat Product Errata RHSA-2021:5086 (last updated 2021-12-13 17:46:23 UTC)
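
The first two links point to the fix; per its title, the managedBy label is now added only for MCG-standalone systems. A minimal sketch of how the label can be inspected on a cluster (the openshift-storage namespace is an assumption, and the expected label behaviour is inferred from the PR title only):

$ oc get storagesystem -n openshift-storage --show-labels
# lists the StorageSystem resources with their labels; for a full (non MCG-standalone)
# deployment, the managedBy label would not be expected after the fix, per the PR title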

Description Jilju Joy 2021-09-06 10:37:11 UTC
Created attachment 1820806 [details]
Screen recording which shows the issue

Description of problem (please be as detailed as possible and provide log
snippets):
The OpenShift Data Foundation Overview page is showing two different statuses for the same storage system. Only one storage system, "ocs-storagecluster-storagesystem", is present in the cluster. One status on the page shows the storage system "ocs-storagecluster-storagesystem" as healthy, while another status on the same page shows it as degraded.
The storage system is actually healthy.

Testing was done on a VMware LSO configuration.

$ oc get storagesystem
NAME                               STORAGE-SYSTEM-KIND                  STORAGE-SYSTEM-NAME
ocs-storagecluster-storagesystem   storagecluster.ocs.openshift.io/v1   ocs-storagecluster

$ oc get storagecluster
NAME                 AGE   PHASE   EXTERNAL   CREATED AT             VERSION
ocs-storagecluster   10m   Ready              2021-09-06T10:12:31Z   4.9.0
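
The underlying resources can also be inspected from the CLI; a minimal sketch, assuming the default openshift-storage namespace used by ODF:

$ oc get storagesystem ocs-storagecluster-storagesystem -n openshift-storage -o yaml
# dumps the StorageSystem resource, including its status/conditions

$ oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.status.phase}{"\n"}'
# prints the backing StorageCluster phase ("Ready" in the output above)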


Screen recording is attached.

=======================================================================

Version of all relevant components (if applicable):
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2021-09-06-004132   True        False         3h47m   Cluster version is 4.9.0-0.nightly-2021-09-06-004132

$ oc get csv
NAME                            DISPLAY                       VERSION        REPLACES   PHASE
noobaa-operator.v4.9.0-125.ci   NooBaa Operator               4.9.0-125.ci              Succeeded
ocs-operator.v4.9.0-125.ci      OpenShift Container Storage   4.9.0-125.ci              Succeeded
odf-operator.v4.9.0-125.ci      OpenShift Data Foundation     4.9.0-125.ci              Succeeded

=======================================================================

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
The UI is showing the wrong status.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Reporting the first occurrence

Can this issue be reproduced from the UI?
UI issue

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Install the ODF operator and create a storage system from the UI using the LSO configuration.
2. Go to Storage --> OpenShift Data Foundation.
3. Check the status of the storage system.



Actual results:
Two different statuses are shown for one storage system.

Expected results:
Only one status should be shown on the Overview page.

Additional info:

Comment 4 Mudit Agarwal 2021-09-22 12:52:03 UTC
Fix is available in the latest builds

Comment 10 Shay Rozen 2021-10-18 11:26:16 UTC
On version 4.9.0-191.ci, two storage systems can still be seen on the ODF page. Please check the attachment above.

Comment 11 afrahman 2021-10-19 06:19:03 UTC
*** Bug 2012722 has been marked as a duplicate of this bug. ***

Comment 13 Elad 2021-10-19 08:39:27 UTC
Marking as blocker? so that this is not moved out of 4.9.0.

Comment 17 Mugdha Soni 2021-11-11 11:25:36 UTC
Hi,

** Tested with ODF "4.9.0-230.ci" and OCP "4.9.0-0.nightly-2021-11-10-215111".

** Steps to reproduce were performed as mentioned in comment#0.

** Scenarios: LSO and non-LSO clusters.

** Observations:

(a) One status is shown on the overview page for the storage system.
(b) One storage system is present in the overview.
(c) Ceph health was OK.

sh-4.4$ ceph status
  cluster:
    id:     946078ae-63fb-4e84-9d38-8bb0863fa73a
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,c,b (age 35m)
    mgr: a(active, since 35m)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 34m), 3 in (since 35m)
 
  data:
    volumes: 1/1 healthy
    pools:   4 pools, 97 pgs
    objects: 93 objects, 134 MiB
    usage:   275 MiB used, 1.5 TiB / 1.5 TiB avail
    pgs:     97 active+clean
 
  io:
    client:   1.2 KiB/s rd, 6.3 KiB/s wr, 2 op/s rd, 0 op/s wr
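
For reference, the same Ceph health can also be read from the CephCluster resource without running ceph status inside a pod; a minimal sketch (the openshift-storage namespace is the ODF default and an assumption here):

$ oc get cephcluster -n openshift-storage -o jsonpath='{.items[0].status.ceph.health}{"\n"}'
# prints HEALTH_OK / HEALTH_WARN / HEALTH_ERR as reported in the CephCluster status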


** Screenshots :- https://docs.google.com/document/d/11PGHiz9POZFTzXQ_pysFSYFS-61YtPOm46SMCqUsbdc/edit?usp=sharing

Moving the bug to Verified state.

Thanks 
Mugdha

Comment 19 errata-xmlrpc 2021-12-13 17:46:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.9.0 enhancement, security, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:5086

