Bug 2247518 - [RDR] [Hub recovery] If the primary managed cluster along with active hub goes down, VolumeSynchronizationDelay alert is not fired on Passive Hub even when monitoring label is applied
Summary: [RDR] [Hub recovery] If the primary managed cluster along with active hub goes down, VolumeSynchronizationDelay alert is not fired on Passive Hub even when monitoring label is applied
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.14
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.15.0
Assignee: rakesh-gm
QA Contact: Aman Agrawal
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-11-01 18:50 UTC by Aman Agrawal
Modified: 2024-03-19 15:28 UTC (History)
7 users

Fixed In Version: 4.15.0-136
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-03-19 15:28:14 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github RamenDR ramen pull 1117 0 None open init DRPolicy metrics before secrets propagation 2023-11-03 06:58:40 UTC
Github red-hat-storage ramen pull 186 0 None open Bug 2247518: refactor set-metrics 2024-02-08 17:25:49 UTC
Red Hat Product Errata RHSA-2024:1383 0 None None None 2024-03-19 15:28:15 UTC

Description Aman Agrawal 2023-11-01 18:50:43 UTC
Description of problem (please be as detailed as possible and provide log
snippets):


Version of all relevant components (if applicable):
OCP 4.14.0-0.nightly-2023-10-30-170011
advanced-cluster-management.v2.9.0-188 
ODF 4.14.0-157
ceph version 17.2.6-148.el9cp (badc1d27cb07762bea48f6554ad4f92b9d3fbb6b) quincy (stable)
ACM 2.9.0-DOWNSTREAM-2023-10-18-17-59-25
Submariner brew.registry.redhat.io/rh-osbs/iib:607438
Volsync 0.7


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate the complexity of the scenario that caused this bug from 1 to 5
(1 = very simple, 5 = very complex):


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. On a hub recovery RDR setup, ensure backups are being created on the active and passive hub clusters.
2. Bring the primary managed cluster down, then bring the active hub down.
3. Ensure the secondary managed cluster is properly imported and the DRPolicy is validated.
4. Run the command below on the passive hub:
oc label namespace openshift-operators openshift.io/cluster-monitoring='true'
5. Since the primary managed cluster is down, sync stops for all DR-protected workloads, but VolumeSynchronizationDelay does not fire in the passive hub's OCP alerting UI or on the DR monitoring dashboard.
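One hedged way to confirm the symptom in step 5 from the CLI is to query the passive hub's in-cluster monitoring stack for the alert. This is a sketch, not part of the reported reproducer: it assumes the standard OpenShift thanos-querier route in the openshift-monitoring namespace, and that you are logged in to the passive hub with `oc`.

```shell
# PromQL query for the alert we expect (but fail) to see firing.
QUERY='ALERTS{alertname="VolumeSynchronizationDelay",alertstate="firing"}'

# Standard OpenShift monitoring entry point; adjust names if your
# environment differs.
TOKEN=$(oc whoami -t)
HOST=$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')

# An empty result list means the alert is not firing, which is the
# symptom this bug describes.
curl -skG -H "Authorization: Bearer $TOKEN" \
  --data-urlencode "query=$QUERY" \
  "https://$HOST/api/v1/query" | jq '.data.result'
```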


Actual results: If the primary managed cluster along with the active hub goes down, the VolumeSynchronizationDelay alert is not fired on the Passive Hub.


Expected results: If the primary managed cluster along with the active hub goes down, the VolumeSynchronizationDelay alert should fire on the Passive Hub once the monitoring label is applied there.


Additional info:

Comment 4 Mudit Agarwal 2023-11-07 11:42:16 UTC
Moving hub recovery issues out to 4.15 based on offline discussion.

Comment 18 errata-xmlrpc 2024-03-19 15:28:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383

