The MDR team also hits this issue.

Versions:
---------
RHCS - 7.0
OCP - 4.15.0-0.nightly-2024-01-10-101042
ODF - 4.15.0-113
ACM - 2.9.1

Steps to reproduce:
--------------------
1. On an MDR hub recovery setup, deploy Subscription apps and ApplicationSet apps, and ensure that a few apps are moved to the FailedOver and Relocated states.
2. Ensure that a few apps are just installed, without assigning any DRPolicy to them.
3. Ensure that a backup is taken on both the active and the passive hub.
4. Bring zone b down (ceph nodes 0, 1, 2, the C1 cluster, and the active hub cluster).
5. Restore the passive hub so that it becomes the active hub.
6. After importing the secrets of the C2 cluster, check that the DRPolicy is in the Validated state (a command sketch for this check follows the outputs below).
7. Now check the DRPC status:

$ oc get drpc --all-namespaces -o wide
NAMESPACE          NAME                                AGE   PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE   PROGRESSION   START TIME   DURATION   PEER READY
cephfs-sub1        cephfs-sub1-placement-1-drpc        67m   sraghave-c1-jan    sraghave-c2-jan   Failover                                                            True
openshift-gitops   cephfs-appset1-placement-drpc       67m   sraghave-c1-jan    sraghave-c2-jan   Relocate                                                            True
openshift-gitops   helloworld-appset1-placement-drpc   67m   sraghave-c1-jan    sraghave-c2-jan   Failover                                                            True
openshift-gitops   rbd-appset1-placement-drpc          67m   sraghave-c1-jan    sraghave-c2-jan   Relocate                                                            True
rbd-sub1           rbd-sub1-placement-1-drpc           67m   sraghave-c1-jan    sraghave-c2-jan   Relocate                                                            True

8. Now assign a DRPolicy to the apps that are already installed on the clusters (an illustrative DRPlacementControl sketch also follows the outputs below) and check the DRPC statuses:

$ oc get drpc --all-namespaces -o wide
NAMESPACE          NAME                                   AGE     PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE   PROGRESSION   START TIME             DURATION        PEER READY
cephfs-sub1        cephfs-sub1-placement-1-drpc           70m     sraghave-c1-jan    sraghave-c2-jan   Failover       FailedOver     Cleaning Up                                          False
cephfs-sub2        cephfs-sub2-placement-1-drpc           101s    sraghave-c2-jan                                     Deployed       Completed     2024-01-16T16:35:06Z   22.043807167s   True
openshift-gitops   cephfs-appset1-placement-drpc          70m     sraghave-c1-jan    sraghave-c2-jan   Relocate                      Paused                                               True
openshift-gitops   cephfs-appset2-placement-drpc          3m      sraghave-c2-jan                                     Deployed       Completed     2024-01-16T16:34:08Z   1.043287787s    True
openshift-gitops   helloworld-appset1-placement-drpc      70m     sraghave-c1-jan    sraghave-c2-jan   Failover       FailedOver     Cleaning Up                                          False
openshift-gitops   helloworld-appset2-placement-drpc      2m35s   sraghave-c2-jan                                     Deployed       Completed     2024-01-16T16:34:33Z   1.048046697s    True
openshift-gitops   rbd-appset1-placement-drpc             70m     sraghave-c1-jan    sraghave-c2-jan   Relocate                      Paused                                               True
openshift-gitops   rbd-appset2-placement-drpc             2m14s   sraghave-c2-jan                                     Deployed       Completed     2024-01-16T16:34:41Z   15.040241914s   True
rbd-sub1           rbd-sub1-placement-1-drpc              70m     sraghave-c1-jan    sraghave-c2-jan   Relocate                      Paused                                               True
rbd-sub2           rbd-sub2-placement-1-drpc              62s     sraghave-c2-jan                                     Deployed       Completed     2024-01-16T16:36:06Z   1.04207343s     True

Please note: zone b has not been recovered yet, and no failover/relocate has been performed post hub recovery.
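For reference, the DRPolicy validation check in step 6 can be scripted roughly as follows. This is only a sketch: it assumes a DRPolicy named drpolicy-5m (hypothetical; this report does not name the policy) and that validation is surfaced through a status condition of type Validated; adjust the name and condition type to the actual environment.

$ oc get drpolicy
$ oc get drpolicy drpolicy-5m -o jsonpath='{.status.conditions}'                                   # inspect all reported conditions
$ oc get drpolicy drpolicy-5m -o jsonpath='{.status.conditions[?(@.type=="Validated")].status}'    # assumed condition type; expected: True

Step 8 is normally driven from the console, but what gets created for each newly protected app is a DRPlacementControl in the application namespace. The sketch below is illustrative only: the policy name, placement kind/name, and PVC label are placeholders inferred from the names in this report, not YAML captured from the failing setup.

apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: rbd-sub2-placement-1-drpc
  namespace: rbd-sub2
spec:
  drPolicyRef:
    name: drpolicy-5m                 # hypothetical DRPolicy name
  placementRef:
    kind: PlacementRule               # or Placement, depending on how the app was deployed
    name: rbd-sub2-placement-1
    namespace: rbd-sub2
  preferredCluster: sraghave-c2-jan   # surviving cluster in this scenario
  pvcSelector:
    matchLabels:
      appname: rbd-sub2               # placeholder label selecting the app's PVCs

Once such a DRPC is created, it should progress to CURRENTSTATE Deployed / PROGRESSION Completed, as the *-sub2 and *-appset2 rows in the second output above show.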
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:1383
The needinfo request[s] on this closed bug have been removed, as they have been unresolved for 120 days.