Description of problem (please be as detailed as possible and provide log snippets):

Version of all relevant components (if applicable):
OCP 4.14.0-0.nightly-2023-09-24-044110
ACM 2.9.0-DOWNSTREAM-2023-09-27-04-43-48
advanced-cluster-management.v2.9.0-163
ODF 4.14.0-137.stable
Submariner image: brew.registry.redhat.io/rh-osbs/iib:580786
ceph version 17.2.6-138.el9cp (b488c8dad42b2ecffcd96f3d76eeeecce48b8590) quincy (stable)

Does this issue impact your ability to continue to work with the product (please explain in detail what the user impact is)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible?

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. On a Regional DR setup, perform failover and relocate operations on different RBD and CephFS workloads and ensure a new backup is **not** taken, so that the workloads are in different CURRENTSTATE values, as shown below. Then perform hub recovery without a new backup being created for the new states of the various workloads, and restore the backup on the passive hub after the active hub goes down.

amagrawa:hub$ drpc
NAMESPACE             NAME                                   AGE   PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE   PROGRESSION        START TIME             DURATION      PEER READY
busybox-workloads-2   busybox-drpc                           22h   amagrawa-c1        amagrawa-c2       Failover       FailedOver     WaitForReadiness                                        True
busybox-workloads-3   busybox-workloads-3-placement-1-drpc   22h   amagrawa-c2                                         Deployed       Completed                                               True
openshift-gitops      busybox-3-placement-drpc               22h   amagrawa-c1        amagrawa-c2       Relocate       Relocated      Completed          2023-09-28T11:21:05Z   587.08349ms   True
openshift-gitops      busybox-workloads-4-placement-drpc     22h   amagrawa-c1                                         Deployed       Completed                                               True

2. Ensure that the backup is properly restored and the managed clusters are successfully imported on the passive hub.
3. Ensure Submariner is healthy (it wasn't in this case, but connectivity was restored after applying the WA; refer to https://issues.redhat.com/browse/ACM-7757. Subctl verify logs after the WA was applied: http://pastebin.test.redhat.com/1110040).
4. When Submariner connectivity is restored, or if it is already fine, check drpc and drpolicy status, mirroring status for RBD-based workloads, src and dst pod status for CephFS-based workloads, lastGroupSyncTime for all workloads, Ceph health on both managed clusters, the status of all pods in the openshift-storage namespace and their container status on both managed clusters, etc. (A sketch of these checks with plain oc commands follows the Actual results output below.)

Actual results:
With the passive hub, sync stops for all RBD and CephFS workloads, and RGW on one of the managed clusters goes down.

Hub-
amagrawa:hub$ drpc
NAMESPACE             NAME                                   AGE   PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE   PROGRESSION        START TIME             DURATION      PEER READY
busybox-workloads-2   busybox-drpc                           22h   amagrawa-c1        amagrawa-c2       Failover       FailedOver     WaitForReadiness                                        True
busybox-workloads-3   busybox-workloads-3-placement-1-drpc   22h   amagrawa-c2                                         Deployed       Completed                                               True
openshift-gitops      busybox-3-placement-drpc               22h   amagrawa-c1        amagrawa-c2       Relocate       Relocated      Completed          2023-09-28T11:21:05Z   587.08349ms   True
openshift-gitops      busybox-workloads-4-placement-drpc     22h   amagrawa-c1                                         Deployed       Completed                                               True

Here, the first two are RBD-based and the last two are CephFS-based workloads.
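Note that `drpc`, `drpcyaml`, `mirror`, `busybox-N`, and `appset-N` in the pasted outputs are local shell aliases used during testing; their exact definitions are not part of this report. A minimal sketch of the step-4 health checks using plain `oc` commands, where the kubeconfig variable names and the CephBlockPool name/field paths are illustrative assumptions rather than values taken from this report:

```sh
# Hedged sketch of the step-4 checks; HUB_KUBECONFIG / C1_KUBECONFIG are hypothetical
# variables pointing at the hub and one managed cluster.

# DRPC status across all namespaces (roughly what the `drpc` alias prints)
oc --kubeconfig "$HUB_KUBECONFIG" get drpc -A -o wide

# DRPolicy validation state
oc --kubeconfig "$HUB_KUBECONFIG" get drpolicy -o yaml

# RBD mirroring summary on a managed cluster (assumes the default ODF pool name;
# roughly what the `mirror` alias prints)
oc --kubeconfig "$C1_KUBECONFIG" -n openshift-storage get cephblockpool \
  ocs-storagecluster-cephblockpool -o jsonpath='{.status.mirroringStatus.summary}{"\n"}'

# Ceph health and pod/container status in openshift-storage on each managed cluster
oc --kubeconfig "$C1_KUBECONFIG" -n openshift-storage get cephcluster \
  -o jsonpath='{.items[0].status.ceph.health}{"\n"}'
oc --kubeconfig "$C1_KUBECONFIG" -n openshift-storage get pods -o wide
```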
amagrawa:hub$ drpcyaml apiVersion: v1 items: - apiVersion: ramendr.openshift.io/v1alpha1 kind: DRPlacementControl metadata: annotations: drplacementcontrol.ramendr.openshift.io/last-app-deployment-cluster: amagrawa-c2 creationTimestamp: "2023-09-28T10:59:12Z" finalizers: - drpc.ramendr.openshift.io/finalizer generation: 1 labels: app: busybox-sample cluster.open-cluster-management.io/backup: resource velero.io/backup-name: acm-resources-generic-schedule-20230928100034 velero.io/restore-name: restore-acm-acm-resources-generic-schedule-20230928100034 name: busybox-drpc namespace: busybox-workloads-2 ownerReferences: - apiVersion: apps.open-cluster-management.io/v1 blockOwnerDeletion: true controller: true kind: PlacementRule name: busybox-placement uid: 507d2d11-546e-4db5-90db-52b11ee52ec6 resourceVersion: "3024467" uid: 6d27e1c7-b048-4e9c-b0cf-b34bc8e27bae spec: action: Failover drPolicyRef: name: my-drpolicy-20 failoverCluster: amagrawa-c2 placementRef: kind: PlacementRule name: busybox-placement preferredCluster: amagrawa-c1 pvcSelector: matchLabels: appname: busybox_app2 status: conditions: - lastTransitionTime: "2023-09-28T11:21:03Z" message: Completed observedGeneration: 1 reason: FailedOver status: "True" type: Available - lastTransitionTime: "2023-09-28T10:59:12Z" message: Ready observedGeneration: 1 reason: Success status: "True" type: PeerReady lastUpdateTime: "2023-09-29T05:53:45Z" phase: FailedOver preferredDecision: clusterName: amagrawa-c1 clusterNamespace: amagrawa-c1 progression: WaitForReadiness resourceConditions: conditions: - lastTransitionTime: "2023-09-28T12:16:59Z" message: Not all VolSync PVCs are ready observedGeneration: 1 reason: Ready status: "False" type: DataReady - lastTransitionTime: "2023-09-27T12:27:31Z" message: Not all VolSync PVCs are protected observedGeneration: 1 reason: DataProtected status: "False" type: DataProtected - lastTransitionTime: "2023-09-27T12:27:30Z" message: Restored cluster data observedGeneration: 1 reason: Restored status: "True" type: ClusterDataReady - lastTransitionTime: "2023-09-28T12:16:58Z" message: Not all VolSync PVCs are protected observedGeneration: 1 reason: DataProtected status: "False" type: ClusterDataProtected resourceMeta: generation: 1 kind: VolumeReplicationGroup name: busybox-drpc namespace: busybox-workloads-2 protectedpvcs: - busybox-pvc-21 - busybox-pvc-22 - busybox-pvc-23 - busybox-pvc-24 - busybox-pvc-25 - apiVersion: ramendr.openshift.io/v1alpha1 kind: DRPlacementControl metadata: annotations: drplacementcontrol.ramendr.openshift.io/last-app-deployment-cluster: amagrawa-c2 creationTimestamp: "2023-09-28T10:59:12Z" finalizers: - drpc.ramendr.openshift.io/finalizer generation: 1 labels: cluster.open-cluster-management.io/backup: resource velero.io/backup-name: acm-resources-generic-schedule-20230928100034 velero.io/restore-name: restore-acm-acm-resources-generic-schedule-20230928100034 name: busybox-workloads-3-placement-1-drpc namespace: busybox-workloads-3 ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1beta1 blockOwnerDeletion: true controller: true kind: Placement name: busybox-workloads-3-placement-1 uid: 619a8e64-d2bd-431e-86b9-670371420b64 resourceVersion: "3168727" uid: 63d50b22-0c63-431b-8662-cde469cd23a6 spec: drPolicyRef: apiVersion: ramendr.openshift.io/v1alpha1 kind: DRPolicy name: my-drpolicy-5 placementRef: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement name: busybox-workloads-3-placement-1 namespace: busybox-workloads-3 preferredCluster: amagrawa-c2 
pvcSelector: matchLabels: appname: busybox-cephfs status: conditions: - lastTransitionTime: "2023-09-28T11:21:05Z" message: Initial deployment completed observedGeneration: 1 reason: Deployed status: "True" type: Available - lastTransitionTime: "2023-09-28T10:59:13Z" message: Ready observedGeneration: 1 reason: Success status: "True" type: PeerReady lastGroupSyncDuration: 50.504730915s lastGroupSyncTime: "2023-09-28T11:10:27Z" lastUpdateTime: "2023-09-29T08:23:57Z" phase: Deployed preferredDecision: clusterName: amagrawa-c2 clusterNamespace: amagrawa-c2 progression: Completed resourceConditions: conditions: - lastTransitionTime: "2023-09-27T14:44:30Z" message: All VolSync PVCs are ready observedGeneration: 1 reason: Ready status: "True" type: DataReady - lastTransitionTime: "2023-09-27T14:45:13Z" message: All VolSync PVCs are protected observedGeneration: 1 reason: DataProtected status: "True" type: DataProtected - lastTransitionTime: "2023-09-27T14:42:26Z" message: Restored cluster data observedGeneration: 1 reason: Restored status: "True" type: ClusterDataReady - lastTransitionTime: "2023-09-29T08:23:28Z" message: All VolSync PVCs are protected observedGeneration: 1 reason: DataProtected status: "True" type: ClusterDataProtected resourceMeta: generation: 1 kind: VolumeReplicationGroup name: busybox-workloads-3-placement-1-drpc namespace: busybox-workloads-3 protectedpvcs: - dd-io-pvc-5 - dd-io-pvc-6 - dd-io-pvc-7 - dd-io-pvc-1 - dd-io-pvc-2 - dd-io-pvc-3 - dd-io-pvc-4 - apiVersion: ramendr.openshift.io/v1alpha1 kind: DRPlacementControl metadata: annotations: drplacementcontrol.ramendr.openshift.io/last-app-deployment-cluster: amagrawa-c1 creationTimestamp: "2023-09-28T10:59:12Z" finalizers: - drpc.ramendr.openshift.io/finalizer generation: 1 labels: cluster.open-cluster-management.io/backup: resource velero.io/backup-name: acm-resources-generic-schedule-20230928100034 velero.io/restore-name: restore-acm-acm-resources-generic-schedule-20230928100034 name: busybox-3-placement-drpc namespace: openshift-gitops ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1beta1 blockOwnerDeletion: true controller: true kind: Placement name: busybox-3-placement uid: 9c206463-236b-40a1-bd84-e83090ccc48e resourceVersion: "3024472" uid: 53610f7e-8846-4e28-b844-74b3967efebc spec: action: Relocate drPolicyRef: apiVersion: ramendr.openshift.io/v1alpha1 kind: DRPolicy name: my-drpolicy-20 failoverCluster: amagrawa-c2 placementRef: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement name: busybox-3-placement namespace: openshift-gitops preferredCluster: amagrawa-c1 pvcSelector: matchLabels: appname: busybox_app3 status: actionDuration: 587.08349ms actionStartTime: "2023-09-28T11:21:05Z" conditions: - lastTransitionTime: "2023-09-28T11:21:05Z" message: Completed observedGeneration: 1 reason: Relocated status: "True" type: Available - lastTransitionTime: "2023-09-28T11:21:05Z" message: Cleaned observedGeneration: 1 reason: Success status: "True" type: PeerReady lastGroupSyncBytes: 829313024 lastGroupSyncDuration: 28s lastGroupSyncTime: "2023-09-28T11:00:00Z" lastUpdateTime: "2023-09-29T05:53:45Z" phase: Relocated preferredDecision: clusterName: amagrawa-c1 clusterNamespace: amagrawa-c1 progression: Completed resourceConditions: conditions: - lastTransitionTime: "2023-09-28T12:15:48Z" message: All VolSync PVCs are ready observedGeneration: 1 reason: Ready status: "True" type: DataReady - lastTransitionTime: "2023-09-27T16:27:23Z" message: Not all VolSync PVCs are protected 
observedGeneration: 1 reason: DataProtected status: "False" type: DataProtected - lastTransitionTime: "2023-09-27T16:27:17Z" message: Restored cluster data observedGeneration: 1 reason: Restored status: "True" type: ClusterDataReady - lastTransitionTime: "2023-09-28T12:13:16Z" message: Not all VolSync PVCs are protected observedGeneration: 1 reason: DataProtected status: "False" type: ClusterDataProtected resourceMeta: generation: 1 kind: VolumeReplicationGroup name: busybox-3-placement-drpc namespace: appset-busybox-3 protectedpvcs: - busybox-pvc-41 - busybox-pvc-42 - busybox-pvc-43 - busybox-pvc-44 - busybox-pvc-45 - busybox-pvc-46 - busybox-pvc-47 - busybox-pvc-48 - busybox-pvc-49 - busybox-pvc-50 - busybox-pvc-51 - busybox-pvc-52 - busybox-pvc-53 - busybox-pvc-54 - busybox-pvc-55 - busybox-pvc-56 - busybox-pvc-57 - busybox-pvc-58 - busybox-pvc-59 - busybox-pvc-60 - apiVersion: ramendr.openshift.io/v1alpha1 kind: DRPlacementControl metadata: annotations: drplacementcontrol.ramendr.openshift.io/last-app-deployment-cluster: amagrawa-c1 creationTimestamp: "2023-09-28T10:59:13Z" finalizers: - drpc.ramendr.openshift.io/finalizer generation: 1 labels: cluster.open-cluster-management.io/backup: resource velero.io/backup-name: acm-resources-generic-schedule-20230928100034 velero.io/restore-name: restore-acm-acm-resources-generic-schedule-20230928100034 name: busybox-workloads-4-placement-drpc namespace: openshift-gitops ownerReferences: - apiVersion: cluster.open-cluster-management.io/v1beta1 blockOwnerDeletion: true controller: true kind: Placement name: busybox-workloads-4-placement uid: a862ee42-76fc-493e-a301-b75757a3c79b resourceVersion: "3269315" uid: 8e03eb2d-a200-4c9e-8971-a73cefa165c8 spec: drPolicyRef: apiVersion: ramendr.openshift.io/v1alpha1 kind: DRPolicy name: my-drpolicy-5 placementRef: apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement name: busybox-workloads-4-placement namespace: openshift-gitops preferredCluster: amagrawa-c1 pvcSelector: matchLabels: appname: mongodb status: conditions: - lastTransitionTime: "2023-09-28T11:21:05Z" message: Initial deployment completed observedGeneration: 1 reason: Deployed status: "True" type: Available - lastTransitionTime: "2023-09-28T10:59:13Z" message: Ready observedGeneration: 1 reason: Success status: "True" type: PeerReady lastGroupSyncDuration: 29.20645197s lastGroupSyncTime: "2023-09-28T11:10:29Z" lastUpdateTime: "2023-09-29T10:08:55Z" phase: Deployed preferredDecision: clusterName: amagrawa-c1 clusterNamespace: amagrawa-c1 progression: Completed resourceConditions: conditions: - lastTransitionTime: "2023-09-27T15:14:20Z" message: All VolSync PVCs are ready observedGeneration: 1 reason: Ready status: "True" type: DataReady - lastTransitionTime: "2023-09-27T15:14:49Z" message: All VolSync PVCs are protected observedGeneration: 1 reason: DataProtected status: "True" type: DataProtected - lastTransitionTime: "2023-09-27T15:12:18Z" message: Restored cluster data observedGeneration: 1 reason: Restored status: "True" type: ClusterDataReady - lastTransitionTime: "2023-09-29T10:08:48Z" message: All VolSync PVCs are protected observedGeneration: 1 reason: DataProtected status: "True" type: ClusterDataProtected resourceMeta: generation: 1 kind: VolumeReplicationGroup name: busybox-workloads-4-placement-drpc namespace: busybox-workloads-4 protectedpvcs: - mongo-data kind: List metadata: resourceVersion: "" C1- amagrawa:c1$ mirror { "lastChecked": "2023-09-29T10:11:16Z", "summary": { "daemon_health": "OK", "health": 
"WARNING", "image_health": "WARNING", "states": { "replaying": 5, "starting_replay": 20 } } } amagrawa:c1$ busybox-3 Now using project "busybox-workloads-3" on server "https://api.amagrawa-c1.qe.rh-ocs.com:6443". NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE persistentvolumeclaim/dd-io-pvc-1 Bound pvc-b85d69a7-3aa9-4304-bdce-665408ecfb41 117Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/dd-io-pvc-2 Bound pvc-67b28902-3029-4cf5-90e7-d1e82728ec98 143Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/dd-io-pvc-3 Bound pvc-27e6a211-2df1-473a-8b4b-fe282156d2f6 134Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/dd-io-pvc-4 Bound pvc-7ee21926-5502-4b7e-9f23-7ac98b884dec 106Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/dd-io-pvc-5 Bound pvc-c8f26e9e-ecd3-4515-888e-b7d653d0b56c 115Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/dd-io-pvc-6 Bound pvc-7df67757-6225-4275-bf5c-37128764cdc6 129Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/dd-io-pvc-7 Bound pvc-dd184e28-bbe3-4281-b19e-d9998b30585c 149Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-1-dst Bound pvc-e435c751-944c-413c-b7bd-1185d9c619e0 117Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-1-dst-20230928111025 Bound pvc-a2244689-021e-43d5-a0f5-b29fff906f11 117Gi ROX ocs-storagecluster-cephfs 23h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-2-dst Bound pvc-7103a64d-3a2b-4c5f-8f14-9a3fc841eb13 143Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-2-dst-20230928111047 Bound pvc-fffd657c-9671-407a-967c-c278f9853db0 143Gi ROX ocs-storagecluster-cephfs 23h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-3-dst Bound pvc-28aa4856-162d-4383-9ff4-955d3361eeff 134Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-3-dst-20230928111039 Bound pvc-60696d96-ccee-4595-9ca5-216b087ec1ad 134Gi ROX ocs-storagecluster-cephfs 23h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-4-dst Bound pvc-4c9a586d-ba40-481e-a10b-4ff81fe26ac4 106Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-4-dst-20230928111036 Bound pvc-7c0be149-a8b6-472f-a08e-474680212794 106Gi ROX ocs-storagecluster-cephfs 23h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-5-dst Bound pvc-1dd1f7ca-b58f-407c-bed1-f295b572fb1a 115Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-5-dst-20230928111051 Bound pvc-c956be23-e27a-4b43-9555-da05e0f83200 115Gi ROX ocs-storagecluster-cephfs 23h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-6-dst Bound pvc-80dffdd8-3299-42d5-8560-4e9f747ba160 129Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-6-dst-20230928111039 Bound pvc-55fa7186-1ca9-4ad6-9009-e0038404a21b 129Gi ROX ocs-storagecluster-cephfs 23h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-7-dst Bound pvc-78e292f0-5285-4c09-a309-ac4b6a5cba9b 149Gi RWO ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-7-dst-20230928111027 Bound pvc-a311784e-bfa2-4f79-bed4-022f02498a43 149Gi ROX ocs-storagecluster-cephfs 23h Filesystem NAME DESIREDSTATE CURRENTSTATE volumereplicationgroup.ramendr.openshift.io/busybox-workloads-3-placement-1-drpc secondary NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 
pod/volsync-rsync-tls-dst-dd-io-pvc-1-local-xsw2q 1/1 Running 0 9h 10.128.2.78 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-1-mf2rq 1/1 Running 0 9h 10.128.2.74 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-2-dv9mn 1/1 Running 0 9h 10.128.2.80 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-2-local-h9x6t 1/1 Running 0 9h 10.128.2.53 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-3-4z8sd 1/1 Running 0 9h 10.128.2.61 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-3-local-h54fx 1/1 Running 0 9h 10.128.2.83 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-4-local-n86ll 1/1 Running 0 9h 10.128.2.41 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-4-t8c82 1/1 Running 0 9h 10.128.2.89 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-5-5475c 1/1 Running 0 9h 10.128.2.60 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-5-local-hfgtw 1/1 Running 0 9h 10.128.2.88 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-6-9zj5d 1/1 Running 0 9h 10.128.2.64 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-6-local-986m9 1/1 Running 0 9h 10.128.2.76 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-7-g7xfz 1/1 Running 0 9h 10.128.2.84 compute-1 <none> <none> pod/volsync-rsync-tls-dst-dd-io-pvc-7-local-86qj8 1/1 Running 0 9h 10.128.2.86 compute-1 <none> <none> amagrawa:c1$ appset-3 NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE persistentvolumeclaim/busybox-pvc-41 Bound pvc-6e1efa8b-eb94-46d1-aa86-eb6d85a7c550 42Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-42 Bound pvc-d92672fd-0a4f-40b0-8a65-483e378b2e92 81Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-43 Bound pvc-36910268-df4d-4a0a-8223-a64d022ca7af 28Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-44 Bound pvc-8b42cf82-b5ba-47a9-b181-b0220d35c4fe 118Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-45 Bound pvc-3c0a945c-9090-43a1-abab-3e98566a1140 19Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-46 Bound pvc-52c38088-5828-4c4a-bc26-32b9415ccef2 129Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-47 Bound pvc-ab8bb971-e63e-41a4-816b-0019e63a0de3 43Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-48 Bound pvc-7066e56d-c105-4ee9-8b6c-a2641f4b466e 57Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-49 Bound pvc-cb72dcc2-73e1-4795-ba72-80dbe762b863 89Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-50 Bound pvc-88c3980a-55f1-4375-9e26-66b8e00873ed 124Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-51 Bound pvc-b0863097-ce7a-4d6d-acdd-bdc17bd31462 95Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-52 Bound pvc-595fc4ee-60c9-40d4-a973-5a36cd9e5e32 129Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-53 Bound pvc-37fe47a7-a270-484b-8e94-094d1d59fc86 51Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-54 Bound pvc-e8e59eec-de94-4b2a-b100-5d2990a968f5 30Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-55 Bound pvc-a6f829e9-5df0-4c7a-a3bc-11cd48a3184f 102Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem 
persistentvolumeclaim/busybox-pvc-56 Bound pvc-300cb343-6fea-4a44-8535-c071b9755a96 40Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-57 Bound pvc-a6f2caf9-3f08-4c44-a7a4-b32ec091f6d7 146Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-58 Bound pvc-f2eef286-c700-484f-9394-ea557ea91b57 63Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-59 Bound pvc-aadbbc4a-ed91-4275-9de6-f828999400dd 118Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/busybox-pvc-60 Bound pvc-c03fa1de-05dc-4128-9fe8-f8a63705216f 25Gi RWO ocs-storagecluster-ceph-rbd 41h Filesystem persistentvolumeclaim/volsync-busybox-pvc-41-src Bound pvc-c8b29273-4ef5-4b9a-938d-ee1377f19de4 42Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-42-src Bound pvc-a5e9f3a6-4c75-4b88-9698-21449ab84890 81Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-43-src Bound pvc-a4a8cefd-456a-4198-b84d-7e282699863d 28Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-44-src Bound pvc-3d2a80fd-1549-4c7c-b5ed-fa82da17fc72 118Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-45-src Bound pvc-e9fe3518-d817-4b74-a7f9-3708c33b4e8c 19Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-46-src Bound pvc-2d4d8a19-39f2-47ad-af71-bd198976bea8 129Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-47-src Bound pvc-8f715f6d-c524-44e8-bfcb-5138d06f238e 43Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-48-src Bound pvc-7059c780-ae60-4029-a4b1-547e58b93fa7 57Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-49-src Bound pvc-acfe2d68-0133-4fae-9525-c55387eff341 89Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-50-src Bound pvc-1c5cd4bc-abfe-41a8-a658-4b0e137f0fa8 124Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-51-src Bound pvc-03c58992-178f-4cad-9bd2-c18078085bb3 95Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-52-src Bound pvc-c2bac311-eb51-4122-82ff-8f2dee7f88e4 129Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-53-src Bound pvc-56d8b95b-1bec-4802-bf02-0e7b5563061b 51Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-54-src Bound pvc-1364589a-a04d-442d-b919-e1d45dcbf198 30Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-55-src Bound pvc-992d3c0b-1329-4355-839f-54a5b83d9602 102Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-56-src Bound pvc-c01c2675-861e-44d6-b29b-02333e9ab99e 40Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-57-src Bound pvc-d73f95ee-0cf2-4ca8-9395-d5d5b36b866d 146Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-58-src Bound pvc-0b026dd8-9676-4bcf-aa07-f7f92a412161 63Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-59-src Bound pvc-655bbb3f-5811-4e12-9671-20dc24ab2640 118Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem persistentvolumeclaim/volsync-busybox-pvc-60-src 
Bound pvc-d80f59cb-2b4c-4ab9-83f1-50817083d712 25Gi RWO ocs-storagecluster-ceph-rbd 21h Filesystem NAME AGE VOLUMEREPLICATIONCLASS PVCNAME DESIREDSTATE CURRENTSTATE volumereplication.replication.storage.openshift.io/busybox-pvc-41 41h rbd-volumereplicationclass-2263283542 busybox-pvc-41 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-42 41h rbd-volumereplicationclass-2263283542 busybox-pvc-42 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-43 41h rbd-volumereplicationclass-2263283542 busybox-pvc-43 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-44 41h rbd-volumereplicationclass-2263283542 busybox-pvc-44 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-45 41h rbd-volumereplicationclass-2263283542 busybox-pvc-45 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-46 41h rbd-volumereplicationclass-2263283542 busybox-pvc-46 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-47 41h rbd-volumereplicationclass-2263283542 busybox-pvc-47 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-48 41h rbd-volumereplicationclass-2263283542 busybox-pvc-48 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-49 41h rbd-volumereplicationclass-2263283542 busybox-pvc-49 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-50 41h rbd-volumereplicationclass-2263283542 busybox-pvc-50 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-51 41h rbd-volumereplicationclass-2263283542 busybox-pvc-51 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-52 41h rbd-volumereplicationclass-2263283542 busybox-pvc-52 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-53 41h rbd-volumereplicationclass-2263283542 busybox-pvc-53 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-54 41h rbd-volumereplicationclass-2263283542 busybox-pvc-54 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-55 41h rbd-volumereplicationclass-2263283542 busybox-pvc-55 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-56 41h rbd-volumereplicationclass-2263283542 busybox-pvc-56 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-57 41h rbd-volumereplicationclass-2263283542 busybox-pvc-57 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-58 41h rbd-volumereplicationclass-2263283542 busybox-pvc-58 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-59 41h rbd-volumereplicationclass-2263283542 busybox-pvc-59 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-60 41h rbd-volumereplicationclass-2263283542 busybox-pvc-60 primary Primary NAME DESIREDSTATE CURRENTSTATE volumereplicationgroup.ramendr.openshift.io/busybox-3-placement-drpc primary Primary more pods*************** pod/busybox-58-66c56c5799-w2mv2 1/1 Running 6 41h 10.128.2.57 compute-1 <none> <none> pod/busybox-59-54dfd4c9df-dv6tg 1/1 Running 6 41h 10.128.2.54 compute-1 <none> <none> pod/busybox-60-fd7dbc9c6-k9bt5 1/1 Running 6 41h 10.128.2.77 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-41-bzv26 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-41-hmdmg 0/1 Error 0 9h 10.128.2.94 compute-1 <none> <none> 
pod/volsync-rsync-tls-src-busybox-pvc-42-7xzhz 0/1 ContainerCreating 0 8h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-43-kmrvd 0/1 Error 0 9h 10.128.2.102 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-43-w7jgd 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-44-jrds8 0/1 Error 0 9h 10.128.2.101 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-44-ldqh5 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-45-7m8cj 0/1 Error 0 9h 10.128.2.68 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-45-hcsjp 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-46-47jdk 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-46-fjsqx 0/1 Error 0 9h 10.128.2.99 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-47-hl7n7 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-47-xzv5t 0/1 Error 0 9h 10.128.2.98 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-48-cg4rv 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-48-ww9k7 0/1 Error 0 9h 10.128.2.92 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-49-clmpk 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-50-75gm2 0/1 Error 0 9h 10.128.2.100 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-50-b78f7 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-51-rn8dg 0/1 ContainerCreating 0 8h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-52-tx8wv 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-52-xqsxh 0/1 Error 0 9h 10.128.2.62 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-53-stl4n 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-53-tmjpd 0/1 Error 0 9h 10.128.2.97 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-54-2gbrk 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-55-s4lts 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-56-5wqq8 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-56-jbn72 0/1 Error 0 9h 10.128.2.90 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-57-6zrvn 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-58-hkbsv 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-58-lb5zl 0/1 Error 0 9h 10.128.2.73 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-59-zxlk5 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-60-5wrdr 0/1 Error 0 9h 10.128.2.91 compute-1 <none> <none> pod/volsync-rsync-tls-src-busybox-pvc-60-k48t7 0/1 ContainerCreating 0 9h <none> compute-1 <none> <none> amagrawa:c1$ busybox-4 Now using project "busybox-workloads-4" on server "https://api.amagrawa-c1.qe.rh-ocs.com:6443". 
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE persistentvolumeclaim/mongo-data Bound pvc-236713ad-4eb2-4d1d-b888-dccf1f72d733 40Gi RWX ocs-storagecluster-cephfs 43h Filesystem persistentvolumeclaim/volsync-mongo-data-src Bound pvc-f9dd53c2-e50a-422d-959c-0497f82e5dc7 40Gi ROX ocs-storagecluster-cephfs-vrg 22h Filesystem NAME DESIREDSTATE CURRENTSTATE volumereplicationgroup.ramendr.openshift.io/busybox-workloads-4-placement-drpc primary Primary NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/io-writer-mongodb-f56c699b9-4mvpg 1/1 Running 12 (9h ago) 43h 10.131.0.46 compute-2 <none> <none> pod/io-writer-mongodb-f56c699b9-5mqgz 1/1 Running 6 43h 10.128.2.31 compute-1 <none> <none> pod/io-writer-mongodb-f56c699b9-fmbts 1/1 Running 6 43h 10.128.2.15 compute-1 <none> <none> pod/io-writer-mongodb-f56c699b9-k6d28 1/1 Running 10 (9h ago) 43h 10.129.2.36 compute-0 <none> <none> pod/io-writer-mongodb-f56c699b9-mpgxx 1/1 Running 12 (9h ago) 43h 10.129.2.37 compute-0 <none> <none> pod/mongo-7cdcd848f7-6gm8x 1/1 Running 6 43h 10.128.2.87 compute-1 <none> <none> pod/volsync-rsync-tls-src-mongo-data-tth5h 1/1 Running 0 107s 10.128.3.110 compute-1 <none> <none> C2- amagrawa:~$ busybox-2 Already on project "busybox-workloads-2" on server "https://api.amagrawa-c2.qe.rh-ocs.com:6443". NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE persistentvolumeclaim/busybox-pvc-21 Bound pvc-5e69c820-1098-49ca-ad0a-19f9fe05d817 43Gi RWO ocs-storagecluster-ceph-rbd 46h Filesystem persistentvolumeclaim/busybox-pvc-22 Bound pvc-8ed17c3c-255a-4bfa-aa4a-7f644b7eea3b 43Gi RWO ocs-storagecluster-ceph-rbd 46h Filesystem persistentvolumeclaim/busybox-pvc-23 Bound pvc-d2f8cb8a-482d-4e73-a52f-61a82f10cc80 52Gi RWO ocs-storagecluster-ceph-rbd 46h Filesystem persistentvolumeclaim/busybox-pvc-24 Bound pvc-566c9783-0523-4c26-8ce6-730e89dce8d5 20Gi RWO ocs-storagecluster-ceph-rbd 46h Filesystem persistentvolumeclaim/busybox-pvc-25 Bound pvc-77554947-8700-4d79-9063-9b1afb127ad1 45Gi RWO ocs-storagecluster-ceph-rbd 46h Filesystem NAME AGE VOLUMEREPLICATIONCLASS PVCNAME DESIREDSTATE CURRENTSTATE volumereplication.replication.storage.openshift.io/busybox-pvc-21 46h rbd-volumereplicationclass-2263283542 busybox-pvc-21 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-22 46h rbd-volumereplicationclass-2263283542 busybox-pvc-22 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-23 46h rbd-volumereplicationclass-2263283542 busybox-pvc-23 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-24 46h rbd-volumereplicationclass-2263283542 busybox-pvc-24 primary Primary volumereplication.replication.storage.openshift.io/busybox-pvc-25 46h rbd-volumereplicationclass-2263283542 busybox-pvc-25 primary Primary NAME DESIREDSTATE CURRENTSTATE volumereplicationgroup.ramendr.openshift.io/busybox-drpc primary Primary NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/busybox-21-7d6dfb858-87sm8 1/1 Running 0 46h 10.128.2.149 compute-1 <none> <none> pod/busybox-22-6cf5dcc584-hxblm 1/1 Running 0 46h 10.128.2.150 compute-1 <none> <none> pod/busybox-23-5bf89b9cc8-fdrf4 1/1 Running 0 46h 10.128.2.151 compute-1 <none> <none> pod/busybox-24-6d5bc476dd-9z5c6 1/1 Running 0 46h 10.128.2.147 compute-1 <none> <none> pod/busybox-25-84d6dd6dc4-zchmj 1/1 Running 0 46h 10.128.2.148 compute-1 <none> <none> amagrawa:~$ busybox-3 Now using project "busybox-workloads-3" on server 
"https://api.amagrawa-c2.qe.rh-ocs.com:6443". NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE persistentvolumeclaim/dd-io-pvc-1 Bound pvc-d99d8bbd-ff8a-442b-a029-f6241323cf69 117Gi RWO ocs-storagecluster-cephfs 43h Filesystem persistentvolumeclaim/dd-io-pvc-2 Bound pvc-0d8773c9-320e-486c-8a05-3a88a2755da2 143Gi RWO ocs-storagecluster-cephfs 43h Filesystem persistentvolumeclaim/dd-io-pvc-3 Bound pvc-ce6ed28f-ee07-4025-998b-6a7286e56101 134Gi RWO ocs-storagecluster-cephfs 43h Filesystem persistentvolumeclaim/dd-io-pvc-4 Bound pvc-3b7fef49-8f29-4b6e-9a68-858f8a2efb7f 106Gi RWO ocs-storagecluster-cephfs 43h Filesystem persistentvolumeclaim/dd-io-pvc-5 Bound pvc-9ca3cf77-50b0-4aba-8b8d-8f843ea359d5 115Gi RWO ocs-storagecluster-cephfs 43h Filesystem persistentvolumeclaim/dd-io-pvc-6 Bound pvc-a87dff37-8e26-4ace-bb96-ff40b13ad5de 129Gi RWO ocs-storagecluster-cephfs 43h Filesystem persistentvolumeclaim/dd-io-pvc-7 Bound pvc-e981e670-1735-4afa-832b-3ab1a05610a9 149Gi RWO ocs-storagecluster-cephfs 43h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-1-src Bound pvc-f73a3607-d2c1-4527-bdbf-9c6fdfaf0b8d 117Gi ROX ocs-storagecluster-cephfs-vrg 23h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-2-src Bound pvc-9cd743ed-9b9b-48bc-9b06-d0c53e4c940d 143Gi ROX ocs-storagecluster-cephfs-vrg 23h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-3-src Bound pvc-a7b3a7ca-8882-45e5-82d9-ac4e96b91be8 134Gi ROX ocs-storagecluster-cephfs-vrg 23h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-4-src Bound pvc-9236d5ce-c211-458b-8bc2-dc79d9de2935 106Gi ROX ocs-storagecluster-cephfs-vrg 23h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-5-src Bound pvc-1d0e91a2-9c93-4342-946c-387c5494a9bd 115Gi ROX ocs-storagecluster-cephfs-vrg 23h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-6-src Bound pvc-bf4502d5-88be-4bf1-a059-36eeaf120995 129Gi ROX ocs-storagecluster-cephfs-vrg 23h Filesystem persistentvolumeclaim/volsync-dd-io-pvc-7-src Bound pvc-7ef9d504-69d2-4109-bc08-1b22eb56a94a 149Gi ROX ocs-storagecluster-cephfs-vrg 23h Filesystem NAME DESIREDSTATE CURRENTSTATE volumereplicationgroup.ramendr.openshift.io/busybox-workloads-3-placement-1-drpc primary Primary NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/dd-io-1-5dbcfccf76-sdm9z 1/1 Running 0 43h 10.131.0.104 compute-0 <none> <none> pod/dd-io-2-684fc84b64-gpt8n 1/1 Running 0 43h 10.128.2.165 compute-1 <none> <none> pod/dd-io-3-68bf99586d-rqk49 1/1 Running 0 43h 10.129.2.144 compute-2 <none> <none> pod/dd-io-4-757c8d8b7b-fks4m 1/1 Running 0 43h 10.128.2.167 compute-1 <none> <none> pod/dd-io-5-74768ccf84-nn9k7 1/1 Running 0 43h 10.129.2.145 compute-2 <none> <none> pod/dd-io-6-68d5769c76-4zmrq 1/1 Running 0 43h 10.128.2.166 compute-1 <none> <none> pod/dd-io-7-67d87688b4-7frf5 1/1 Running 0 43h 10.131.0.103 compute-0 <none> <none> pod/volsync-rsync-tls-src-dd-io-pvc-1-rz7h6 0/1 ContainerCreating 0 23h <none> compute-0 <none> <none> pod/volsync-rsync-tls-src-dd-io-pvc-2-xkdfb 0/1 ContainerCreating 0 23h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-dd-io-pvc-3-tvwwl 0/1 ContainerCreating 0 23h <none> compute-0 <none> <none> pod/volsync-rsync-tls-src-dd-io-pvc-4-fw7z8 0/1 ContainerCreating 0 23h <none> compute-0 <none> <none> pod/volsync-rsync-tls-src-dd-io-pvc-5-pcls2 0/1 ContainerCreating 0 23h <none> compute-1 <none> <none> pod/volsync-rsync-tls-src-dd-io-pvc-6-854x8 0/1 ContainerCreating 0 23h <none> compute-0 <none> <none> pod/volsync-rsync-tls-src-dd-io-pvc-7-g2n5l 0/1 ContainerCreating 0 23h 
<none> compute-1 <none> <none> amagrawa:~$ appset-3 NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE persistentvolumeclaim/busybox-pvc-41 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-42 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-43 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-44 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-45 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-46 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-47 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-48 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-49 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-50 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-51 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-52 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-53 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-54 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-55 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-56 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-57 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-58 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-59 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/busybox-pvc-60 Pending ocs-storagecluster-ceph-rbd 23h Filesystem persistentvolumeclaim/volsync-busybox-pvc-41-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-42-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-43-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-44-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-45-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-46-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-47-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-48-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-49-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-50-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-51-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-52-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-53-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-54-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-55-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-56-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-57-dst Pending 
ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-58-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-59-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem persistentvolumeclaim/volsync-busybox-pvc-60-dst Pending ocs-storagecluster-ceph-rbd 22h Filesystem NAME DESIREDSTATE CURRENTSTATE volumereplicationgroup.ramendr.openshift.io/busybox-3-placement-drpc secondary NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/busybox-41-58444d46d7-xf8lr 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-42-798d9688df-tj6fx 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-43-5d559df9c4-wddvs 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-44-67cf6fbbdd-xpvr4 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-45-666674f8d4-cdkj8 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-46-5957cbb6b6-lk2pj 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-47-698bcd985-pcm5d 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-48-74ddf9bf6d-cv6r6 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-49-868dfd9d9b-tfk4x 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-50-6fcd648dc6-trmbn 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-51-6ff466d784-2c8qf 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-52-6d79df6f4f-m49rk 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-53-69d9749cfb-t8tsz 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-54-f9c8ddc45-vd4l2 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-55-d8d95b5f6-cwlsl 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-56-7867f5cd66-hfgg6 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-57-659f85ccd5-qf6dv 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-58-66c56c5799-mjrqd 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-59-54dfd4c9df-x987t 0/1 Pending 0 23h <none> <none> <none> <none> pod/busybox-60-fd7dbc9c6-6lk7k 0/1 Pending 0 23h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-41-bjm4j 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-42-j2p4s 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-43-s9gm6 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-44-c22qr 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-45-l8j8j 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-46-gr6j5 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-47-v2nfr 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-48-jmck5 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-49-9qrd8 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-50-cdrk7 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-51-rzxjl 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-52-2vcp7 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-53-lgcnb 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-54-6tbvp 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-55-ccpkw 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-56-h54vc 0/1 Pending 0 22h <none> <none> 
<none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-57-j8dhx 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-58-zz866 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-59-f8f9p 0/1 Pending 0 22h <none> <none> <none> <none> pod/volsync-rsync-tls-dst-busybox-pvc-60-gmf6g 0/1 Pending 0 22h <none> <none> <none> <none> amagrawa:~$ busybox-4 Now using project "busybox-workloads-4" on server "https://api.amagrawa-c2.qe.rh-ocs.com:6443". NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE persistentvolumeclaim/mongo-data Terminating pvc-0e658363-9df5-4625-8084-4b1ec0650876 40Gi RWX ocs-storagecluster-cephfs 43h Filesystem persistentvolumeclaim/volsync-mongo-data-dst Pending ocs-storagecluster-cephfs 22h Filesystem persistentvolumeclaim/volsync-mongo-data-dst-20230928111030 Bound pvc-d9c7fe91-4fa3-4d3a-99c9-32bf7ea39374 40Gi ROX ocs-storagecluster-cephfs 23h Filesystem NAME DESIREDSTATE CURRENTSTATE volumereplicationgroup.ramendr.openshift.io/busybox-workloads-4-placement-drpc secondary NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/io-writer-mongodb-f56c699b9-7vbxj 0/1 Init:0/1 0 23h 10.129.2.255 compute-2 <none> <none> pod/io-writer-mongodb-f56c699b9-fhwkv 0/1 Init:0/1 0 23h 10.129.2.254 compute-2 <none> <none> pod/io-writer-mongodb-f56c699b9-pcpf5 0/1 Init:0/1 0 23h 10.128.2.221 compute-1 <none> <none> pod/io-writer-mongodb-f56c699b9-rq5kp 0/1 Init:0/1 0 23h 10.128.2.220 compute-1 <none> <none> pod/io-writer-mongodb-f56c699b9-vzgfg 0/1 Init:0/1 0 23h 10.131.1.116 compute-0 <none> <none> pod/mongo-7cdcd848f7-8gw25 0/1 ContainerCreating 0 23h <none> compute-2 <none>

Expected results:
All the workload apps should revert to their last backed-up state. For example, if workloads were in the Deployed state and were then failed over or relocated but no new backup was taken, drpc should report the status as Deployed (or whatever the last backed-up state was for each workload app), not the new state.
Sync for all RBD and CephFS workloads should progress as expected. All ODF pods should be up and running. Ceph should be healthy. DRPolicy should stay validated, and the mirroring status should be healthy. The DR monitoring dashboard should be restored as is (if it was configured and was backed up in the last stored backup).
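To make the "sync should progress" expectation concrete, a minimal sketch of how ongoing replication could be checked after hub recovery, using the same hypothetical kubeconfig variables as in the earlier sketch:

```sh
# lastGroupSyncTime per DRPC on the hub; it should keep advancing if sync is healthy
oc --kubeconfig "$HUB_KUBECONFIG" get drpc -A \
  -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,LASTSYNC:.status.lastGroupSyncTime'

# VolumeReplicationGroup desired vs. current state on each managed cluster
oc --kubeconfig "$C1_KUBECONFIG" get volumereplicationgroup -A
oc --kubeconfig "$C2_KUBECONFIG" get volumereplicationgroup -A

# VolSync src/dst pods for CephFS workloads should not sit in ContainerCreating/Error
oc --kubeconfig "$C1_KUBECONFIG" get pods -A -o wide | grep volsync-rsync-tls
```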
In the bug triage meeting held on 3rd Oct, it was decided that Aman would retest and share the backup info with Benammar. This will help engineering decide whether a fix is needed. The bug will continue to be proposed as a blocker.
Several issues need attention, but I'll focus on the situation where both workloads (specifically, the ApplicationSet workload) are running on both clusters. While I'm not entirely certain about my findings, it appears that the PlacementDecision was backed up and subsequently restored on the passive hub before it transitioned to being the active one.

```
oc get placementdecision -n openshift-gitops busybox-3-placement-decision-1 -o yaml

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: PlacementDecision
metadata:
  creationTimestamp: "2023-09-28T10:59:51Z"
  generation: 1
  labels:
    cluster.open-cluster-management.io/decision-group-index: "0"
    cluster.open-cluster-management.io/decision-group-name: ""
    cluster.open-cluster-management.io/placement: busybox-3-placement
    velero.io/backup-name: acm-resources-schedule-20230928100034
    velero.io/restore-name: restore-acm-acm-resources-schedule-20230928100034
  name: busybox-3-placement-decision-1
  namespace: openshift-gitops
  resourceVersion: "1960594"
  uid: c8878f28-2594-4c12-9495-30e86fd94cf4
status:
  decisions:
  - clusterName: amagrawa-c1
    reason: ""
```

The label `velero.io/restore-name: restore-acm-acm-resources-schedule-20230928100034` indicates that the decision was restored from that backup. Although we lack access to the backup for verification, the application log indicates that, following the hub restore, "amagrawa-c2" was initially chosen as the target cluster for the application. At a later stage, when the DRPC had the opportunity to select the correct cluster, the decision was changed to "amagrawa-c1", as observed above. However, the openshift-gitops operator failed to remove the application from "amagrawa-c2". We need to reproduce this issue in order to gather the logs from before and after the hub recovery is started.
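A small sketch of how restored PlacementDecisions can be spotted across namespaces using the same Velero labels observed above (assumes hub access; not a verified reproduction step):

```sh
# PlacementDecisions together with the backup/restore they came from
oc get placementdecision -A -L velero.io/backup-name,velero.io/restore-name

# Current cluster decision for the ApplicationSet placement in question
oc get placementdecision -n openshift-gitops busybox-3-placement-decision-1 \
  -o jsonpath='{.status.decisions[*].clusterName}{"\n"}'
```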
PR posted to fix a ramen issue with PlacementDecision being backed up by the HubRecovery backup routine: https://github.com/RamenDR/ramen/pull/1092
Tested with OCP 4.14.0-0.nightly-2023-10-18-004928 advanced-cluster-management.v2.9.0-188 ODF 4.14.0-156 ceph version 17.2.6-148.el9cp (badc1d27cb07762bea48f6554ad4f92b9d3fbb6b) quincy (stable) Submariner image: brew.registry.redhat.io/rh-osbs/iib:599799 ACM 2.9.0-DOWNSTREAM-2023-10-18-17-59-25 Latency 50ms RTT Outputs from active hub- amagrawa:acm$ drpc NAMESPACE NAME AGE PREFERREDCLUSTER FAILOVERCLUSTER DESIREDSTATE CURRENTSTATE PROGRESSION START TIME DURATION PEER READY busybox-workloads-1 busybox-workloads-1-placement-1-drpc 2d1h amagrawa-1st Deployed Completed True busybox-workloads-3 busybox-sub-cephfs-placement-1-drpc 47h amagrawa-2nd Relocate Relocated Completed 2023-10-23T17:35:35Z 4m40.644361827s True openshift-gitops busybox-appset-cephfs-placement-drpc 47h amagrawa-2nd Deployed Completed True openshift-gitops busybox-workloads-2-placement-drpc 47h amagrawa-1st amagrawa-2nd Failover FailedOver Cleaning Up 2023-10-23T17:35:45Z False Here, busybox-sub-cephfs-placement-1-drpc was relocated and busybox-workloads-2-placement-drpc was failedover which was still cleaning up but resources were successfully created on the failover cluster. It was ensured that the new backup is not created and active hub is completely brought down before next new backup creation schedule. amagrawa:acm$ group drplacementcontrol.ramendr.openshift.io/app-namespace: busybox-workloads-1 namespace: busybox-workloads-1 namespace: busybox-workloads-1 lastGroupSyncTime: "2023-10-23T17:40:00Z" namespace: busybox-workloads-1 drplacementcontrol.ramendr.openshift.io/app-namespace: busybox-workloads-3 namespace: busybox-workloads-3 namespace: busybox-workloads-3 lastGroupSyncTime: "2023-10-23T17:41:44Z" namespace: busybox-workloads-3 drplacementcontrol.ramendr.openshift.io/app-namespace: busybox-workloads-4 namespace: openshift-gitops namespace: openshift-gitops lastGroupSyncTime: "2023-10-23T17:40:40Z" namespace: busybox-workloads-4 drplacementcontrol.ramendr.openshift.io/app-namespace: busybox-workloads-2 namespace: openshift-gitops namespace: openshift-gitops namespace: busybox-workloads-2 amagrawa:acm$ date -u Monday 23 October 2023 05:49:25 PM UTC amagrawa:acm$ oc get backups -A NAMESPACE NAME AGE open-cluster-management-backup acm-credentials-schedule-20231023110057 6h48m open-cluster-management-backup acm-credentials-schedule-20231023120057 5h48m open-cluster-management-backup acm-credentials-schedule-20231023130057 4h48m open-cluster-management-backup acm-credentials-schedule-20231023140057 3h48m open-cluster-management-backup acm-credentials-schedule-20231023150057 168m open-cluster-management-backup acm-credentials-schedule-20231023160057 108m open-cluster-management-backup acm-credentials-schedule-20231023170057 48m open-cluster-management-backup acm-managed-clusters-schedule-20231023110057 6h48m open-cluster-management-backup acm-managed-clusters-schedule-20231023120057 5h48m open-cluster-management-backup acm-managed-clusters-schedule-20231023130057 4h48m open-cluster-management-backup acm-managed-clusters-schedule-20231023140057 3h48m open-cluster-management-backup acm-managed-clusters-schedule-20231023150057 168m open-cluster-management-backup acm-managed-clusters-schedule-20231023160057 108m open-cluster-management-backup acm-managed-clusters-schedule-20231023170057 48m open-cluster-management-backup acm-resources-generic-schedule-20231023110057 6h48m open-cluster-management-backup acm-resources-generic-schedule-20231023120057 5h48m open-cluster-management-backup 
acm-resources-generic-schedule-20231023130057 4h48m open-cluster-management-backup acm-resources-generic-schedule-20231023140057 3h48m open-cluster-management-backup acm-resources-generic-schedule-20231023150057 168m open-cluster-management-backup acm-resources-generic-schedule-20231023160057 108m open-cluster-management-backup acm-resources-generic-schedule-20231023170057 48m open-cluster-management-backup acm-resources-schedule-20231023110057 6h48m open-cluster-management-backup acm-resources-schedule-20231023120057 5h48m open-cluster-management-backup acm-resources-schedule-20231023130057 4h48m open-cluster-management-backup acm-resources-schedule-20231023140057 3h48m open-cluster-management-backup acm-resources-schedule-20231023150057 168m open-cluster-management-backup acm-resources-schedule-20231023160057 108m open-cluster-management-backup acm-resources-schedule-20231023170057 48m open-cluster-management-backup acm-validation-policy-schedule-20231023170057 48m amagrawa:acm$ date -u Monday 23 October 2023 05:50:07 PM UTC From passive hub- amagrawa:~$ oc get backups -A NAMESPACE NAME AGE open-cluster-management-backup acm-credentials-schedule-20231023110057 6h48m open-cluster-management-backup acm-credentials-schedule-20231023120057 5h47m open-cluster-management-backup acm-credentials-schedule-20231023130057 4h47m open-cluster-management-backup acm-credentials-schedule-20231023140057 3h48m open-cluster-management-backup acm-credentials-schedule-20231023150057 167m open-cluster-management-backup acm-credentials-schedule-20231023160057 108m open-cluster-management-backup acm-credentials-schedule-20231023170057 48m open-cluster-management-backup acm-managed-clusters-schedule-20231023110057 6h48m open-cluster-management-backup acm-managed-clusters-schedule-20231023120057 5h47m open-cluster-management-backup acm-managed-clusters-schedule-20231023130057 4h47m open-cluster-management-backup acm-managed-clusters-schedule-20231023140057 3h48m open-cluster-management-backup acm-managed-clusters-schedule-20231023150057 168m open-cluster-management-backup acm-managed-clusters-schedule-20231023160057 107m open-cluster-management-backup acm-managed-clusters-schedule-20231023170057 48m open-cluster-management-backup acm-resources-generic-schedule-20231023110057 6h46m open-cluster-management-backup acm-resources-generic-schedule-20231023120057 5h47m open-cluster-management-backup acm-resources-generic-schedule-20231023130057 4h47m open-cluster-management-backup acm-resources-generic-schedule-20231023140057 3h47m open-cluster-management-backup acm-resources-generic-schedule-20231023150057 167m open-cluster-management-backup acm-resources-generic-schedule-20231023160057 107m open-cluster-management-backup acm-resources-generic-schedule-20231023170057 47m open-cluster-management-backup acm-resources-schedule-20231023110057 6h46m open-cluster-management-backup acm-resources-schedule-20231023120057 5h47m open-cluster-management-backup acm-resources-schedule-20231023130057 4h47m open-cluster-management-backup acm-resources-schedule-20231023140057 3h47m open-cluster-management-backup acm-resources-schedule-20231023150057 167m open-cluster-management-backup acm-resources-schedule-20231023160057 108m open-cluster-management-backup acm-resources-schedule-20231023170057 47m open-cluster-management-backup acm-validation-policy-schedule-20231023170057 47m amagrawa:~$ date -u Monday 23 October 2023 05:50:02 PM UTC Then completely brought down active hub and waited for 5-10mins. 
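The `oc get backups -A` listings and `date -u` outputs above are meant to show that the newest backup predates the failover/relocate actions and that no further backup ran before the active hub was brought down. A minimal sketch of that check, assuming the `open-cluster-management-backup` namespace used above:

```sh
# Newest backups last; compare the most recent creation timestamp with the current UTC time
oc -n open-cluster-management-backup get backups \
  --sort-by=.metadata.creationTimestamp | tail -n 5
date -u
```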
Restored the backup on the passive hub and ensured that both managed clusters are up and running. DRPolicy gets validated. Checked drpc status-

amagrawa:~$ drpc
NAMESPACE NAME AGE PREFERREDCLUSTER FAILOVERCLUSTER DESIREDSTATE CURRENTSTATE PROGRESSION START TIME DURATION PEER READY
busybox-workloads-1 busybox-workloads-1-placement-1-drpc 10h amagrawa-1st Deployed Completed True
busybox-workloads-3 busybox-sub-cephfs-placement-1-drpc 10h amagrawa-1st Deployed Completed True
openshift-gitops busybox-appset-cephfs-placement-drpc 10h amagrawa-2nd Deployed Completed True
openshift-gitops busybox-workloads-2-placement-drpc 10h amagrawa-1st Deployed Completed True

All workloads moved to their last backed-up state, which is Deployed, but the status of the individual workload resources is completely messed up.

busybox-workloads-1-placement-1-drpc is fine.

==============================================================================================================================================================================

For busybox-workloads-2-placement-drpc, which was failed over to C2 (amagrawa-2nd):

From C1-

amagrawa:~$ busybox-2
Now using project "busybox-workloads-2" on server "https://api.amagrawa-1st.qe.rh-ocs.com:6443".
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/dd-io-pvc-1 Terminating pvc-e8ad08e3-a6ca-4232-81ba-31d31811b255 117Gi RWO ocs-storagecluster-ceph-rbd 2d21h Filesystem
persistentvolumeclaim/dd-io-pvc-2 Terminating pvc-d5cf5619-671f-4fd8-8415-82c6398d0ed4 143Gi RWO ocs-storagecluster-ceph-rbd 2d21h Filesystem
persistentvolumeclaim/dd-io-pvc-3 Terminating pvc-e9561849-5093-49b2-9a1b-4fe75fb91535 134Gi RWO ocs-storagecluster-ceph-rbd 2d21h Filesystem
persistentvolumeclaim/dd-io-pvc-4 Terminating pvc-970dff84-721b-4c6f-9829-42869812c523 106Gi RWO ocs-storagecluster-ceph-rbd 2d21h Filesystem
persistentvolumeclaim/dd-io-pvc-5 Terminating pvc-d50dc5b9-b1db-45f2-a60d-145eb34fcacb 115Gi RWO ocs-storagecluster-ceph-rbd 2d21h Filesystem
persistentvolumeclaim/dd-io-pvc-6 Terminating pvc-d80390eb-717a-4b25-8e42-dc6517be610d 129Gi RWO ocs-storagecluster-ceph-rbd 2d21h Filesystem
persistentvolumeclaim/dd-io-pvc-7 Terminating pvc-b3d544ea-1c03-4206-9383-816c611467df 149Gi RWO ocs-storagecluster-ceph-rbd 2d21h Filesystem

NAME AGE VOLUMEREPLICATIONCLASS PVCNAME DESIREDSTATE CURRENTSTATE
volumereplication.replication.storage.openshift.io/dd-io-pvc-1 2d21h rbd-volumereplicationclass-2263283542 dd-io-pvc-1 secondary Secondary
volumereplication.replication.storage.openshift.io/dd-io-pvc-2 2d21h rbd-volumereplicationclass-2263283542 dd-io-pvc-2 secondary Secondary
volumereplication.replication.storage.openshift.io/dd-io-pvc-3 2d21h rbd-volumereplicationclass-2263283542 dd-io-pvc-3 secondary Secondary
volumereplication.replication.storage.openshift.io/dd-io-pvc-4 2d21h rbd-volumereplicationclass-2263283542 dd-io-pvc-4 secondary Secondary
volumereplication.replication.storage.openshift.io/dd-io-pvc-5 2d21h rbd-volumereplicationclass-2263283542 dd-io-pvc-5 secondary Secondary
volumereplication.replication.storage.openshift.io/dd-io-pvc-6 2d21h rbd-volumereplicationclass-2263283542 dd-io-pvc-6 secondary Secondary
volumereplication.replication.storage.openshift.io/dd-io-pvc-7 2d21h rbd-volumereplicationclass-2263283542 dd-io-pvc-7 secondary Secondary

NAME DESIREDSTATE CURRENTSTATE
volumereplicationgroup.ramendr.openshift.io/busybox-workloads-2-placement-drpc primary Unknown

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/dd-io-1-854f867867-zp9hr 0/1 Pending 0 20h <none> <none> <none> <none>
pod/dd-io-2-56679fb667-gz8jf 0/1 Pending 0 21h <none> <none> <none> <none>
pod/dd-io-3-5757659b99-p6rjh 0/1 Pending 0 21h <none> <none> <none> <none>
pod/dd-io-4-75bd89888c-tslbd 0/1 Pending 0 21h <none> <none> <none> <none>
pod/dd-io-5-86c65fd579-kq68b 0/1 Pending 0 21h <none> <none> <none> <none>
pod/dd-io-6-fd8994467-9ln28 0/1 Pending 0 21h <none> <none> <none> <none>
pod/dd-io-7-685b4f6699-q97rs 0/1 Pending 0 21h <none> <none> <none> <none>

From C2-

amagrawa:~$ busybox-2
Now using project "busybox-workloads-2" on server "https://api.amagrawa-2nd.qe.rh-ocs.com:6443".
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/dd-io-pvc-1 Terminating pvc-e8ad08e3-a6ca-4232-81ba-31d31811b255 117Gi RWO ocs-storagecluster-ceph-rbd 22h Filesystem
persistentvolumeclaim/dd-io-pvc-2 Terminating pvc-d5cf5619-671f-4fd8-8415-82c6398d0ed4 143Gi RWO ocs-storagecluster-ceph-rbd 22h Filesystem
persistentvolumeclaim/dd-io-pvc-3 Terminating pvc-e9561849-5093-49b2-9a1b-4fe75fb91535 134Gi RWO ocs-storagecluster-ceph-rbd 22h Filesystem
persistentvolumeclaim/dd-io-pvc-4 Terminating pvc-970dff84-721b-4c6f-9829-42869812c523 106Gi RWO ocs-storagecluster-ceph-rbd 22h Filesystem
persistentvolumeclaim/dd-io-pvc-5 Terminating pvc-d50dc5b9-b1db-45f2-a60d-145eb34fcacb 115Gi RWO ocs-storagecluster-ceph-rbd 22h Filesystem
persistentvolumeclaim/dd-io-pvc-6 Terminating pvc-d80390eb-717a-4b25-8e42-dc6517be610d 129Gi RWO ocs-storagecluster-ceph-rbd 22h Filesystem
persistentvolumeclaim/dd-io-pvc-7 Terminating pvc-b3d544ea-1c03-4206-9383-816c611467df 149Gi RWO ocs-storagecluster-ceph-rbd 22h Filesystem

NAME AGE VOLUMEREPLICATIONCLASS PVCNAME DESIREDSTATE CURRENTSTATE
volumereplication.replication.storage.openshift.io/dd-io-pvc-1 22h rbd-volumereplicationclass-2263283542 dd-io-pvc-1 primary Primary
volumereplication.replication.storage.openshift.io/dd-io-pvc-2 22h rbd-volumereplicationclass-2263283542 dd-io-pvc-2 primary Primary
volumereplication.replication.storage.openshift.io/dd-io-pvc-3 22h rbd-volumereplicationclass-2263283542 dd-io-pvc-3 primary Primary
volumereplication.replication.storage.openshift.io/dd-io-pvc-4 22h rbd-volumereplicationclass-2263283542 dd-io-pvc-4 primary Primary
volumereplication.replication.storage.openshift.io/dd-io-pvc-5 22h rbd-volumereplicationclass-2263283542 dd-io-pvc-5 primary Primary
volumereplication.replication.storage.openshift.io/dd-io-pvc-6 22h rbd-volumereplicationclass-2263283542 dd-io-pvc-6 primary Primary
volumereplication.replication.storage.openshift.io/dd-io-pvc-7 22h rbd-volumereplicationclass-2263283542 dd-io-pvc-7 primary Primary

NAME DESIREDSTATE CURRENTSTATE
volumereplicationgroup.ramendr.openshift.io/busybox-workloads-2-placement-drpc primary Primary
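The busybox-2 prompt above is a local alias for switching projects; a rough equivalent with plain oc commands, useful for seeing why the PVCs stay in Terminating (the short resource names vrg and volumereplication are assumed from the CRDs visible in the listings):

# PVC, VolumeReplication, VolumeReplicationGroup and pod state for the failed-over workload
oc get pvc,volumereplication,vrg,pod -n busybox-workloads-2 -o wide

# A PVC stuck in Terminating usually still carries finalizers; show them together with the deletion timestamp
oc get pvc dd-io-pvc-1 -n busybox-workloads-2 -o jsonpath='{.metadata.deletionTimestamp} {.metadata.finalizers}{"\n"}'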
==============================================================================================================================================================================

busybox-sub-cephfs-placement-1-drpc, which was relocated to C2 but whose backup wasn't taken, initially had issues but recovered after a few hours.

From C1-

amagrawa:~$ busybox-3
Now using project "busybox-workloads-3" on server "https://api.amagrawa-1st.qe.rh-ocs.com:6443".
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/busybox-pvc-1 Bound pvc-caeb302f-c863-4646-b8ea-22e4dd57ac63 94Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-10 Bound pvc-b832628a-632c-4268-9195-5ae88e13eae0 87Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-11 Bound pvc-6ae0f2f7-ac8b-4a70-b090-fe52e40274a8 33Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-12 Bound pvc-1462dddf-3b95-4cc2-bd22-c85c20eafa51 147Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-13 Bound pvc-6e37fa31-ffd7-4b89-9d5b-9d349f1d4496 77Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-14 Bound pvc-039641c0-425c-4fa8-89af-6eac7da8cda6 70Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-15 Bound pvc-0a9dfa24-4476-4fb1-a99e-0a64cb0d7c56 131Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-16 Bound pvc-79875a2c-c924-4faf-8fb9-9b7b8ddd7dcf 127Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-17 Bound pvc-2a8d4574-1c08-4fdf-88d2-8c16b62ffd99 58Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-18 Bound pvc-d4b20c4d-b7a1-4943-b258-f8590e424aee 123Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-19 Bound pvc-e4597e84-d1ca-4853-a2b7-3174fbf18428 61Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-2 Bound pvc-cb019838-799b-46cc-852e-c5a6c3becbb5 44Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-20 Bound pvc-e1d05fff-eb05-4598-93b2-1985cfdbcaaa 33Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-3 Bound pvc-89a1de2a-bebc-4a04-859d-52b9ca8fd4c9 76Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-4 Bound pvc-406070d0-351b-4386-8dbc-649da2806533 144Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-5 Bound pvc-7f5ef6c5-8ea3-4e46-b943-7481fb5200c1 107Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-6 Bound pvc-1fb622b3-ba12-4210-8b23-c7fe570926d7 123Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-7 Bound pvc-825fefd8-3a21-495f-b91e-38972b3487cf 90Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-8 Bound pvc-1b691bde-e343-48f9-83a9-13f905a901d2 91Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem
persistentvolumeclaim/busybox-pvc-9 Bound pvc-535d2f02-905b-4307-a966-08e766768def 111Gi RWX ocs-storagecluster-cephfs 2d21h Filesystem

NAME DESIREDSTATE CURRENTSTATE
volumereplicationgroup.ramendr.openshift.io/busybox-sub-cephfs-placement-1-drpc primary Primary

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/busybox-1-7f7bf8c5d9-g5vh5 1/1 Running 0 21h 10.131.2.18 compute-4 <none> <none>
pod/busybox-10-7b7bddddf8-sjq6m 1/1 Running 0 21h 10.128.3.254 compute-3 <none> <none>
pod/busybox-11-6c4cf4bfb-8mwt4 1/1 Running 0 21h 10.128.2.12 compute-3 <none> <none>
pod/busybox-12-7968f7d4bb-9jbvw 1/1 Running 0 21h 10.131.2.27 compute-4 <none> <none>
pod/busybox-13-674b97b564-b66hr 1/1 Running 0 21h 10.128.5.218 compute-5 <none> <none>
pod/busybox-14-f59899658-bcdtg 1/1 Running 0 21h 10.128.5.219 compute-5 <none> <none>
pod/busybox-15-867dd79cbd-bgljp 1/1 Running 0 21h 10.128.5.220 compute-5 <none> <none>
pod/busybox-16-866d576d54-n22d6 1/1 Running 0 21h 10.128.2.14 compute-3 <none> <none>
pod/busybox-17-8d7df8b76-4cmrr 1/1 Running 0 21h 10.128.2.15 compute-3 <none> <none>
pod/busybox-18-75cdf6f4c4-scwq7 1/1 Running 0 21h 10.128.2.19 compute-3 <none> <none>
pod/busybox-19-6bcbc84d68-jsx22 1/1 Running 0 21h 10.128.5.221 compute-5 <none> <none>
pod/busybox-2-5cffb67686-z79cb 1/1 Running 0 21h 10.131.2.28 compute-4 <none> <none>
pod/busybox-20-fdbd78dbd-d8s57 1/1 Running 0 21h 10.131.2.29 compute-4 <none> <none>
pod/busybox-3-7ffc7c8fbb-dfjmv 1/1 Running 0 21h 10.131.2.30 compute-4 <none> <none>
pod/busybox-4-66688c494b-5pmh9 1/1 Running 0 21h 10.128.2.20 compute-3 <none> <none>
pod/busybox-5-56978ff94-rjm5d 1/1 Running 0 21h 10.131.2.31 compute-4 <none> <none>
pod/busybox-6-57544b458b-ccxm8 1/1 Running 0 21h 10.128.5.222 compute-5 <none> <none>
pod/busybox-7-77ff998b8b-qzh56 1/1 Running 0 21h 10.128.5.223 compute-5 <none> <none>
pod/busybox-8-6d5cdc5678-gmgbx 1/1 Running 0 21h 10.128.5.224 compute-5 <none> <none>
pod/busybox-9-79c789995d-nnw24 1/1 Running 0 21h 10.131.2.32 compute-4 <none> <none>

From C2-

amagrawa:~$ busybox-3
Now using project "busybox-workloads-3" on server "https://api.amagrawa-2nd.qe.rh-ocs.com:6443".
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE VOLUMEMODE
persistentvolumeclaim/busybox-pvc-1 Bound pvc-47521762-e4a6-4af5-b662-72e274db8b86 94Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-10 Bound pvc-b3f62011-3d44-4920-9dff-da20ae34af5f 87Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-11 Bound pvc-cd44860a-a275-480d-98d1-0919a1095b8a 33Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-12 Bound pvc-72e8f30c-ddd5-48f8-8202-76f7163c38a7 147Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-13 Bound pvc-2111d09c-6439-4503-9643-22cfaa792e02 77Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-14 Bound pvc-2acb5e8d-6f09-4acb-9e1a-dc2843ed8c2e 70Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-15 Bound pvc-1d5d4c70-7330-4cd5-a75e-d2202d05ef7c 131Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-16 Bound pvc-05101e71-18c6-42bd-858f-5c5bc4e5b5ad 127Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-17 Bound pvc-21aef07c-7856-454b-b15e-c16ad2f3737e 58Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-18 Bound pvc-6389abd3-cfdb-42e4-a2de-73ad8094b5c5 123Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-19 Bound pvc-d366cbdc-c245-4b4d-932b-dd87d17d260e 61Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-2 Bound pvc-703f3ec0-a2d7-450a-8149-3ed6c04d1ec6 44Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-20 Bound pvc-d06e06c9-83ea-488a-86c2-355ef516c30d 33Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-3 Bound pvc-0c502b44-622c-45b1-9d7b-38b1dabe9d1d 76Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-4 Bound pvc-d3e2be6b-f2a7-4207-81ce-0346de6e5cee 144Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-5 Bound pvc-939e1c17-6c05-4589-af01-c36096252c20 107Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-6 Bound pvc-b58508cf-0d81-4d2b-96df-8030816c8330 123Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-7 Bound pvc-76c12792-f94e-4f7a-9bf8-c8febc83ee80 90Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-8 Bound pvc-5a88c698-f7fa-4536-9103-4f085cbcaa8e 91Gi RWX ocs-storagecluster-cephfs 20h Filesystem
persistentvolumeclaim/busybox-pvc-9 Bound pvc-2687e03c-fdea-4594-b7f5-48080f06bac9 111Gi RWX ocs-storagecluster-cephfs 20h Filesystem

NAME DESIREDSTATE CURRENTSTATE
volumereplicationgroup.ramendr.openshift.io/busybox-sub-cephfs-placement-1-drpc secondary Secondary

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/volsync-rsync-tls-dst-busybox-pvc-1-9bh6q 1/1 Running 0 102s 10.131.0.37 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-10-2fm7r 1/1 Running 0 103s 10.131.0.36 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-11-c27rx 1/1 Running 0 2m1s 10.131.0.31 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-12-wp9zz 1/1 Running 0 116s 10.131.0.35 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-13-jwcpp 1/1 Running 0 2m38s 10.131.0.10 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-14-zrwqn 1/1 Running 0 94s 10.131.0.39 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-15-q884c 1/1 Running 0 2m33s 10.131.0.12 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-16-ng9xb 1/1 Running 0 94s 10.131.0.38 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-17-t947z 1/1 Running 0 2m18s 10.131.0.23 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-18-bnklh 1/1 Running 0 2m28s 10.131.0.19 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-19-ml7zl 1/1 Running 0 2m16s 10.131.0.27 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-2-6lkcb 1/1 Running 0 93s 10.131.0.40 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-20-wghxd 1/1 Running 0 2m6s 10.131.0.30 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-3-mrblc 1/1 Running 0 2m28s 10.131.0.16 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-4-vqz66 1/1 Running 0 119s 10.131.0.32 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-5-9zc6c 1/1 Running 0 2m14s 10.131.0.28 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-6-8hcsd 1/1 Running 0 117s 10.131.0.33 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-7-z6qsc 1/1 Running 0 93s 10.131.0.41 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-8-bsf7k 1/1 Running 0 2m22s 10.131.0.21 compute-1 <none> <none>
pod/volsync-rsync-tls-dst-busybox-pvc-9-5h8bn 1/1 Running 0 117s 10.131.0.34 compute-1 <none> <none>

==============================================================================================================================================================================

busybox-appset-cephfs-placement-drpc, which was in Deployed state and running on C2, is also fine.

==============================================================================================================================================================================

Everything else seems fine. Ceph is as it was earlier. All ODF pods are up and running. DRPC reflects the last backed-up state. DRPolicy is validated. Mirroring status reports healthy, which it shouldn't, but that will be fixed once the issue with busybox-workloads-2-placement-drpc is fixed. The DR monitoring dashboard was backed up as is.
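Because the cephfs workload is protected through VolSync, the dst pods above can be cross-checked against the ReplicationDestination objects on the secondary cluster; a minimal sketch, assuming the object names match the PVC names and using the standard VolSync lastSyncTime status field:

# VolSync destination objects for the relocated cephfs workload
oc get replicationdestination -n busybox-workloads-3

# Last successful sync reported for one PVC
oc get replicationdestination busybox-pvc-1 -n busybox-workloads-3 -o jsonpath='{.status.lastSyncTime}{"\n"}'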
Sync is working fine for the remaining 3 workloads-

amagrawa:acm$ group
drplacementcontrol.ramendr.openshift.io/app-namespace: busybox-workloads-1
namespace: busybox-workloads-1
namespace: busybox-workloads-1
lastGroupSyncTime: "2023-10-24T15:41:13Z"
namespace: busybox-workloads-1
drplacementcontrol.ramendr.openshift.io/app-namespace: busybox-workloads-3
namespace: busybox-workloads-3
namespace: busybox-workloads-3
lastGroupSyncTime: "2023-10-24T15:41:29Z"
namespace: busybox-workloads-3
drplacementcontrol.ramendr.openshift.io/app-namespace: busybox-workloads-4
namespace: openshift-gitops
namespace: openshift-gitops
lastGroupSyncTime: "2023-10-24T15:40:40Z"
namespace: busybox-workloads-4
drplacementcontrol.ramendr.openshift.io/app-namespace: busybox-workloads-2
namespace: openshift-gitops
namespace: openshift-gitops
namespace: busybox-workloads-2

amagrawa:acm$ date -u
Tuesday 24 October 2023 03:51:49 PM UTC

Failing_qa because of the issue with workload busybox-workloads-2-placement-drpc, which was failed over to C2 (amagrawa-2nd) but whose backup wasn't taken.
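The group output above looks like a grep over the DRPC YAML; the same information can be pulled directly, for example (assuming lastGroupSyncTime sits under .status of the DRPC, as in the Ramen API):

# lastGroupSyncTime per DRPC on the hub; an empty value points at a workload that is not syncing
oc get drpc -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,LASTGROUPSYNCTIME:.status.lastGroupSyncTime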
So, definitely not a blocker for 4.14.0?

Best case, we can add this as a known issue.
(In reply to Mudit Agarwal from comment #12)
> So, definitely not a blocker for 4.14.0?
>
> Best case, we can add this as a known issue.

Please note that this could also happen on a production setup, even though the chances are lower. If failover was performed because the primary cluster went down, there is a high chance that the hub goes down after that as well (or maybe at the same time, if it is in the same data centre/zone) and no new backups are taken, which would eventually end up in the same situation.

I agree with it not being a 4.14 blocker, but it's a real use case and we should still consider it.
Benamar, let us add this as a known issue for now.
I can live with not a blocker at this point of time :)
As Aman highlighted, we should anticipate that some operations may occur on the cluster after a backup has been taken. Could we please document the steps on how to recover the cluster in such cases?
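Purely as an illustration of the kind of step such a document would need (not the official procedure): after the restore, the DRPC of the failed-over workload comes back as Deployed, so the admin would likely have to re-declare the failover on the restored hub so that the desired state matches where the workload actually runs. The spec fields below are the ones visible in the DRPC YAML earlier in this bug; the exact recovery sequence is an assumption:

# Illustrative only - re-issue the failover for the workload that had been failed over before the hub was lost
oc patch drpc busybox-workloads-2-placement-drpc -n openshift-gitops --type=merge -p '{"spec":{"action":"Failover","failoverCluster":"amagrawa-2nd"}}'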
Yes
@sheggodu do we need to move this to ON_QA manually?
Verification of this bug will be done as part of hub recovery testing post-4.15. No regression has been seen so far. Moving the bug to 4.16.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.17.0 Security, Enhancement, & Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:8676
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days