Bug 2247518

Summary: [RDR] [Hub recovery] If the primary managed cluster along with active hub goes down, VolumeSynchronizationDelay alert is not fired on Passive Hub even when monitoring label is applied
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Component: odf-dr
Sub Component: ramen
Version: 4.14
Reporter: Aman Agrawal <amagrawa>
Assignee: rakesh-gm <rgowdege>
QA Contact: Aman Agrawal <amagrawa>
CC: ebenahar, egershko, kseeger, muagarwa, rgowdege, rtalur, srangana
Status: CLOSED ERRATA
Severity: urgent
Priority: unspecified
Target Release: ODF 4.15.0
Fixed In Version: 4.15.0-136
Doc Type: No Doc Update
Type: Bug
Last Closed: 2024-03-19 15:28:14 UTC

Description Aman Agrawal 2023-11-01 18:50:43 UTC
Description of problem (please be as detailed as possible and provide log
snippets):


Version of all relevant components (if applicable):
OCP 4.14.0-0.nightly-2023-10-30-170011
advanced-cluster-management.v2.9.0-188 
ODF 4.14.0-157
ceph version 17.2.6-148.el9cp (badc1d27cb07762bea48f6554ad4f92b9d3fbb6b) quincy (stable)
ACM 2.9.0-DOWNSTREAM-2023-10-18-17-59-25
Submariner brew.registry.redhat.io/rh-osbs/iib:607438
Volsync 0.7


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate the complexity of the scenario that caused this bug from 1 to 5
(1 - very simple, 5 - very complex):


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. On a hub recovery RDR setup, ensure backups are being created on both the active and passive hub clusters.
2. Bring the primary managed cluster down, and then bring the active hub down.
3. Ensure the secondary managed cluster is properly imported and that the DRPolicy is validated.
4. Run the command below on the passive hub:
oc label namespace openshift-operators openshift.io/cluster-monitoring='true'
5. Since the primary managed cluster is down, sync stops for all DR-protected workloads, yet VolumeSynchronizationDelay does not fire in the passive hub's OCP Alerting UI, nor on the DR monitoring dashboard (a verification sketch follows these steps).
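
A minimal verification sketch for steps 3-5. The jsonpath check assumes the DRPolicy exposes a "Validated" condition (as in the Ramen CRD), the Alertmanager query assumes the default alertmanager-main route in openshift-monitoring, and <policy-name> is a placeholder:

# Confirm the DRPolicy reports Validated=True on the passive hub:
oc get drpolicy <policy-name> -o jsonpath='{.status.conditions[?(@.type=="Validated")].status}'

# Confirm the monitoring label is present on the namespace:
oc get namespace openshift-operators --show-labels

# Query the platform Alertmanager for the alert:
TOKEN=$(oc whoami -t)
HOST=$(oc get route alertmanager-main -n openshift-monitoring -o jsonpath='{.spec.host}')
curl -sk -H "Authorization: Bearer $TOKEN" "https://$HOST/api/v2/alerts" | grep -i VolumeSynchronizationDelay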


Actual results: If the primary managed cluster goes down along with the active hub, the VolumeSynchronizationDelay alert is not fired on the passive hub.


Expected results: If the primary managed cluster goes down along with the active hub, the VolumeSynchronizationDelay alert should fire on the passive hub once the monitoring label is applied there.
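
As a first triage step when the alert does not fire, it can help to confirm the alert rule even exists on the passive hub. A sketch, assuming the DR alerts ship as a PrometheusRule object in openshift-operators (the same namespace the monitoring label is applied to):

# List alert rules in the operator namespace and look for the alert definition:
oc get prometheusrule -n openshift-operators
oc get prometheusrule -n openshift-operators -o yaml | grep -B2 -A6 VolumeSynchronizationDelay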


Additional info:

Comment 4 Mudit Agarwal 2023-11-07 11:42:16 UTC
Moving hub recovery issues out to 4.15 based on offline discussion.

Comment 18 errata-xmlrpc 2024-03-19 15:28:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383