Bug 2175201

Summary: [MDR]: After hub restore Ramen wasn't reconciling and so was not able to initiate failover from UI
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation Reporter: akarsha <akrai>
Component: odf-dr Assignee: Benamar Mekhissi <bmekhiss>
odf-dr sub component: ramen QA Contact: Parikshith <pbyregow>
Status: ASSIGNED --- Docs Contact:
Severity: medium    
Priority: unspecified CC: bmekhiss, hnallurv, muagarwa, odf-bz-bot, olakra, rgowdege, rtalur, srangana
Version: 4.12   
Target Milestone: ---   
Target Release: ODF 4.14.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Known Issue
Doc Text:
While working with an active/passive Hub Metro-DR setup, you might come across a rare scenario where the Ramen reconciler stops running after exceeding its allowed rate-limiting parameters. As reconciliation is specific to each workload, only that workload is impacted. In such an event, all disaster recovery orchestration activities related to that workload stop until the Ramen pod is restarted. Workaround: Restart the Ramen pod on the Hub cluster. $ oc delete pods <ramen-pod-name> -n openshift-operators
Story Points: ---
Clone Of: Environment:
Last Closed: Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Attachments:
Description Flags
[1]: Cannot initiate failover of app from UI none

Description akarsha 2023-03-03 14:34:57 UTC
Created attachment 1947701 [details]
[1]: Cannot initiate failover of app from UI

Description of problem (please be as detailed as possible and provide log
snippets):

Created an active/passive MDR setup and brought zone b down (where the c1 managed cluster, the active hub, and 3 Ceph nodes were running).
Then restored the data on the passive hub, which was running in another zone.

After restoring to the passive hub, failover of the c1 apps cannot be initiated, as shown in the attached screenshot [1].

Version of all relevant components (if applicable):
OCP: 4.12.0-0.nightly-2023-03-02-051935
ODF: 4.12.1-19
ACM: 2.7.1
CEPH: 16.2.10-138.el8cp (a63ae467c8e1f7503ea3855893f1e5ca189a71b9) pacific (stable)

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?
Yes. Restart the Ramen pod; once the pod is restarted, you should be able to initiate failover from the UI.
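A minimal sketch of the workaround, assuming the Ramen hub operator runs in the openshift-operators namespace on the hub cluster (as in the Doc Text); the pod name is illustrative:

$ oc get pods -n openshift-operators | grep ramen          # find the Ramen hub operator pod
$ oc delete pod <ramen-pod-name> -n openshift-operators    # delete it; the deployment recreates the pod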

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Can this issue be reproduced?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Create 4 OCP clusters: 2 hubs and 2 managed clusters, plus one stretched RHCS cluster.
   Deploy the clusters such that:
	zone a: arbiter Ceph node
	zone b: c1, active hub, 3 Ceph nodes
	zone c: c2, passive hub, 3 Ceph nodes
2. Configure MDR and deploy an application on each managed cluster
3. Initiate a backup process so that the active and passive hubs are in sync
4. Bring zone b down
5. Initiate the restore process on the passive hub
6. Initiate failover of the application


Actual results:
After hub restore, Ramen was not reconciling, so a failover could not be initiated from the UI.
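A hedged sketch of how one might confirm that Ramen stopped reconciling after the restore (the deployment name ramen-hub-operator and the openshift-operators namespace are assumptions based on a typical MDR hub install and may differ):

$ oc get drpc -A                                                           # DRPlacementControl status stays stuck instead of progressing
$ oc logs deployment/ramen-hub-operator -n openshift-operators --tail=50   # no new reconcile entries for the affected workload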

Expected results:
It should be possible to initiate failover from the UI without restarting the Ramen pod.

Additional info: