Bug 2257236 - [UI][MDR] Application showing wrong status of protected PVC but main dashboard showing healthy
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: management-console
Version: 4.15
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: gowtham
QA Contact: Prasad Desala
URL:
Whiteboard:
Depends On: 2256633
Blocks:
 
Reported: 2024-01-08 07:38 UTC by gowtham
Modified: 2024-03-18 15:22 UTC
CC List: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 2256633
Environment:
Last Closed: 2024-03-18 15:22:40 UTC
Embargoed:


Attachments


Links:
GitHub red-hat-storage/odf-console pull 1156 (open): [MDR] Application view saying PVCs unhealthy but cluster view shows healthy (last updated 2024-01-09 08:39:55 UTC)

Description gowtham 2024-01-08 07:38:45 UTC
+++ This bug was initially created as a clone of Bug #2256633 +++

Description of problem (please be detailed as possible and provide log
snippets):
After the replace-cluster procedure, ACM observability was configured. The Overview tab of Data Policies shows all applications as healthy, but the applications section shows issues with the protected PVCs.

Please find the attached screenshot for more information.
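
To make the reported mismatch concrete: the two views appear to aggregate DR health at different granularities. The following TypeScript sketch is purely illustrative; the type and function names are hypothetical and are not taken from the odf-console codebase.

// Hypothetical illustration: two views computing "health" from the
// same DR data, but at different granularities.

type PVCStatus = { name: string; replicationHealthy: boolean };
type AppDRStatus = { appName: string; clusterHealthy: boolean; pvcs: PVCStatus[] };

// Overview-style aggregation: looks only at the cluster-level flag.
function overviewHealth(apps: AppDRStatus[]): boolean {
  return apps.every((a) => a.clusterHealthy);
}

// Application-view-style aggregation: drills into each protected PVC.
function appViewHealth(app: AppDRStatus): boolean {
  return app.pvcs.every((pvc) => pvc.replicationHealthy);
}

// The two can disagree when the cluster-level flag is derived from a
// coarser or staler signal than the per-PVC replication status:
const app: AppDRStatus = {
  appName: "busybox-sub",
  clusterHealthy: true, // cluster-level signal says healthy
  pvcs: [{ name: "busybox-pvc", replicationHealthy: false }], // PVC lagging
};

console.log(overviewHealth([app])); // true  -> overview dashboard shows healthy
console.log(appViewHealth(app));    // false -> application view flags PVC issues

If the linked PR unifies the two views onto a single source of truth, they would report the same state, which is what the Expected results below ask for.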


Version of all relevant components (if applicable):

OCP- 4.15
ODF- 4.15.0-98
ACM- 2.9.1 
RHCS- 6.1z3

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?

No
Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes
If this is a regression, please provide more details to justify this:


Steps to Reproduce:

1. Create an MDR environment using the component versions above.
2. Create subscription as well as ApplicationSet apps on the clust1 managed cluster.
3. Destroy clust1 by powering off all of its nodes.
4. Follow the replace-cluster steps from article [1] below.
5. After the cluster replacement, relocate all apps to the recovery cluster.
6. Configure ACM observability using doc [2].

[1] https://access.redhat.com/articles/7048922
[2] https://docs.google.com/document/d/1hl9BOShVpHQKZPXmMZJes3rrQdvy3IHBHmQSfqMSJr8/

7. After configuration succeeds, go to Data → Data policies → Overview tab.
8. Compare the application status on the main dashboard with the individual application statuses below it (a cross-check sketch follows this list).
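
To cross-check what the console shows against the DR resources directly, here is a minimal TypeScript sketch using @kubernetes/client-node (pre-1.0 API shape) to list DRPlacementControl phases. The ramendr.openshift.io/v1alpha1 group/version, the drplacementcontrols plural, and the status.phase field are assumptions based on the RamenDR CRDs; verify them with 'oc api-resources' on the actual cluster.

import * as k8s from "@kubernetes/client-node";

// Minimal sketch: list DRPlacementControl resources cluster-wide and print
// the phase each one reports, to compare with what the console displays.
// Assumes the RamenDR CRD group/version "ramendr.openshift.io/v1alpha1";
// confirm with 'oc api-resources | grep ramendr' before relying on it.
async function listDRPCPhases(): Promise<void> {
  const kc = new k8s.KubeConfig();
  kc.loadFromDefault(); // uses the current kubeconfig context

  const api = kc.makeApiClient(k8s.CustomObjectsApi);
  // Pre-1.0 client-node signature; newer releases take one options object.
  const res = await api.listClusterCustomObject(
    "ramendr.openshift.io",
    "v1alpha1",
    "drplacementcontrols"
  );

  const items = (res.body as { items: any[] }).items ?? [];
  for (const drpc of items) {
    const ns = drpc.metadata?.namespace;
    const name = drpc.metadata?.name;
    const phase = drpc.status?.phase ?? "unknown";
    console.log(`${ns}/${name}: phase=${phase}`);
  }
}

listDRPCPhases().catch((err) => console.error(err));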

Actual results:

The applications section of the dashboard shows protected PVCs with issues, while the main dashboard shows all applications as healthy.

Expected results:

The dashboard should show the same status in the apps section as on the main dashboard.

Additional info:

--- Additional comment from RHEL Program Management on 2024-01-03 14:20:47 UTC ---

This bug, having no release flag set previously, is now set with release flag 'odf-4.15.0' to '?', and so is being proposed to be fixed in the ODF 4.15.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset, since the Acks are to be set against a release flag.

--- Additional comment from RHEL Program Management on 2024-01-03 14:53:44 UTC ---

This BZ is being approved for the ODF 4.15.0 release, upon receipt of the 3 ACKs (PM, Devel, QA) for the release flag 'odf-4.15.0'.

--- Additional comment from RHEL Program Management on 2024-01-03 14:53:44 UTC ---

Since this bug has been approved for the ODF 4.15.0 release through release flag 'odf-4.15.0+', the Target Release is being set to 'ODF 4.15.0'.

--- Additional comment from RHEL Program Management on 2024-01-08 07:37:39 UTC ---

The 'Target Release' is not to be set manually for the Red Hat OpenShift Data Foundation product.

The 'Target Release' will be set automatically after the 3 Acks (pm, devel, qa) are set to "+" for a specific release flag and that release flag is auto-set to "+".

Comment 4 krishnaram Karthick 2024-02-22 11:50:41 UTC
Moving the bug to 4.14.7.
We have exhausted the quota for fixes in 4.14.6.

