Back to bug 2175201

Who When What Removed Added
Benamar Mekhissi 2023-03-03 14:54:46 UTC CC bmekhiss
Karolin Seeger 2023-03-03 15:51:07 UTC Assignee kseeger rtalur
Harish NV Rao 2023-03-07 11:19:33 UTC CC hnallurv
Doc Type Removed: If docs needed, set a value; Added: Known Issue
Olive Lakra 2023-03-08 07:27:02 UTC Doc Text Added: In an active/passive Metro-DR setup, when one data center zone is down while the managed cluster, active hub, and the Ceph nodes are still running, it may not be possible to restore the data in the passive hub which was running in another zone.

Workaround: Restart the ramen pod to initiate the failover from the RHACM console.

$ oc delete pods <ramen-pod-name> -n openshift-operators
CC olakra
akarsha 2023-03-08 07:51:02 UTC CC rgowdege, rtalur
Flags needinfo?(rtalur) needinfo?(rgowdege) needinfo?(bmekhiss)
Benamar Mekhissi 2023-03-08 12:55:35 UTC Flags needinfo?(bmekhiss)
Olive Lakra 2023-03-08 13:10:55 UTC Doc Text Removed: In an active/passive Metro-DR setup, when one data center zone is down while the managed cluster, active hub, and the Ceph nodes are still running, it may not be possible to restore the data in the passive hub which was running in another zone.

Workaround: Restart the ramen pod to initiate the failover from the RHACM console.

$ oc delete pods <ramen-pod-name> -n openshift-operators
Added: In an active/passive hub and metro-dr setup, when the ramen reconciler stops running after exceeding its allowed rate-limiting parameters, all disaster recovery orchestration activities will halt.

Workaround: Restart the ramen pod on the hub cluster.

$ oc delete pods <ramen-pod-name> -n openshift-operators
Olive Lakra 2023-03-08 14:14:55 UTC Doc Text Removed: In an active/passive hub and metro-dr setup, when the ramen reconciler stops running after exceeding its allowed rate-limiting parameters, all disaster recovery orchestration activities will halt.

Workaround: Restart the ramen pod on the hub cluster.

$ oc delete pods <ramen-pod-name> -n openshift-operators
Added: While working with an active/passive Hub-Metro-DR setup, you might come across a rare scenario where the Ramen reconciler stops running after exceeding its allowed rate-limiting parameters. Because reconciliation is specific to each workload, only that workload will be impacted. In such an event, all disaster recovery orchestration activities related to that workload will stop until the Ramen pod is restarted.

Workaround: Restart the Ramen pod on the Hub cluster.

$ oc delete pods <ramen-pod-name> -n openshift-operators
Olive Lakra 2023-03-08 14:16:12 UTC Doc Text Removed: While working with an active/passive Hub-Metro-DR setup, you might come across a rare scenario where the Ramen reconciler stops running after exceeding its allowed rate-limiting parameters. Because reconciliation is specific to each workload, only that workload will be impacted. In such an event, all disaster recovery orchestration activities related to that workload will stop until the Ramen pod is restarted.

Workaround: Restart the Ramen pod on the Hub cluster.

$ oc delete pods <ramen-pod-name> -n openshift-operators
Added: While working with an active/passive Hub-Metro-DR setup, you might come across a rare scenario where the Ramen reconciler stops running after exceeding its allowed rate-limiting parameters. As reconciliation is specific to each workload, only that workload will be impacted. In such an event, all disaster recovery orchestration activities related to that workload will stop until the Ramen pod is restarted.

Workaround: Restart the Ramen pod on the Hub cluster.

$ oc delete pods <ramen-pod-name> -n openshift-operators
Olive Lakra 2023-03-08 14:17:28 UTC Doc Text Removed: While working with an active/passive Hub-Metro-DR setup, you might come across a rare scenario where the Ramen reconciler stops running after exceeding its allowed rate-limiting parameters. As reconciliation is specific to each workload, only that workload will be impacted. In such an event, all disaster recovery orchestration activities related to that workload will stop until the Ramen pod is restarted.

Workaround: Restart the Ramen pod on the Hub cluster.

$ oc delete pods <ramen-pod-name> -n openshift-operators
Added: While working with an active/passive Hub-Metro-DR setup, you might come across a rare scenario where the Ramen reconciler stops running after exceeding its allowed rate-limiting parameters. As reconciliation is specific to each workload, only that workload is impacted. In such an event, all disaster recovery orchestration activities related to that workload stop until the Ramen pod is restarted.

Workaround: Restart the Ramen pod on the Hub cluster.

$ oc delete pods <ramen-pod-name> -n openshift-operators
Olive Lakra 2023-03-08 14:34:02 UTC Doc Text Removed: While working with an active/passive Hub-Metro-DR setup, you might come across a rare scenario where the Ramen reconciler stops running after exceeding its allowed rate-limiting parameters. As reconciliation is specific to each workload, only that workload is impacted. In such an event, all disaster recovery orchestration activities related to that workload stop until the Ramen pod is restarted.

Workaround: Restart the Ramen pod on the Hub cluster.

$ oc delete pods <ramen-pod-name> -n openshift-operators
Added: While working with an active/passive Hub Metro-DR setup, you might come across a rare scenario where the Ramen reconciler stops running after exceeding its allowed rate-limiting parameters. As reconciliation is specific to each workload, only that workload is impacted. In such an event, all disaster recovery orchestration activities related to that workload stop until the Ramen pod is restarted.

Workaround: Restart the Ramen pod on the Hub cluster.

$ oc delete pods <ramen-pod-name> -n openshift-operators
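The workaround command above assumes you already know the Ramen pod's name. A minimal sketch of the full sequence on the hub cluster, assuming the Ramen operator runs in the openshift-operators namespace (the pod name prefix may differ between ODF versions, so the placeholder stays a placeholder):

```shell
# List pods in the operator namespace to find the exact Ramen pod name.
oc get pods -n openshift-operators | grep ramen

# Delete the pod; its Deployment recreates a replacement automatically,
# which restarts the stalled reconciler.
oc delete pods <ramen-pod-name> -n openshift-operators

# Confirm the replacement pod reaches the Running state.
oc get pods -n openshift-operators | grep ramen
```

Deleting the pod rather than editing rate-limiter settings works here because the reconciler's backoff state is in-memory and is discarded when the pod restarts.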
Sunil Kumar Acharya 2023-03-19 17:58:34 UTC Flags needinfo?(rtalur)
Benamar Mekhissi 2023-03-21 01:25:33 UTC Status NEW ASSIGNED
Benamar Mekhissi 2023-03-21 01:25:50 UTC Assignee rtalur bmekhiss
Harish NV Rao 2023-04-06 09:57:29 UTC Severity low medium
QA Contact kramdoss pbyregow
rakesh 2023-04-07 09:41:32 UTC Flags needinfo?(rgowdege)
Shyamsundar 2023-04-12 12:28:26 UTC CC srangana
Flags needinfo?(rtalur) needinfo?(rtalur)
RHEL Program Management 2023-04-12 12:28:35 UTC Target Release --- ODF 4.13.0
RHEL Program Management 2023-05-02 23:04:09 UTC Target Release ODF 4.13.0 ---
RHEL Program Management 2023-06-17 07:28:07 UTC Target Release --- ODF 4.14.0
Red Hat Bugzilla 2023-08-03 08:29:21 UTC CC ocs-bugs
Elad 2023-08-09 17:00:43 UTC CC odf-bz-bot
