Description of problem (please be as detailed as possible and provide log snippets):

[DR] When a DRPolicy is created, DRCluster resources are created under the default namespace

Version of all relevant components (if applicable):

OCP version: 4.11.0-0.nightly-2022-05-11-054135
ACM version: 2.5
ODF version: 4.11.0-69

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?

Yes

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

1

Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Deploy the DR cluster
2. Deploy the DRPolicy and a workload
3. Check the DRCluster resources and the DRPC status (see the check-command sketch under Additional info)

Actual results:

oc get drclusters.ramendr.openshift.io -A
NAMESPACE   NAME              AGE
default     prsurve-vm-dev    78m
default     vmware-dccp-one   78m

DRPC status:

status:
  conditions:
  - lastTransitionTime: "2022-05-13T06:09:44Z"
    message: failed to get DRCluster (prsurve-vm-dev) DRCluster.ramendr.openshift.io
      "prsurve-vm-dev" not found
    observedGeneration: 2
    reason: Error
    status: "False"
    type: Available
  lastUpdateTime: "2022-05-13T06:09:44Z"
  preferredDecision: {}
  resourceConditions:
    resourceMeta:
      generation: 0
      kind: ""
      name: ""
      namespace: ""

Expected results:

DRCluster resources should be created in the respective namespace.

Additional info:
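A minimal sketch of the check commands from step 3, using the full resource names; the workload namespace is a placeholder to fill in for your setup:

  # List DRCluster resources across all namespaces; they should not land in "default"
  oc get drclusters.ramendr.openshift.io -A
  # Dump the DRPC status for the protected workload
  oc get drplacementcontrols.ramendr.openshift.io -n <workload-namespace> -o yaml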
DRCluster is a namespaced resource, and MCO is currently creating it under the "default" namespace. We are changing it to a cluster-scoped resource, which should overcome the issue at hand. Until the PR is in, as a workaround, create/move the DRCluster from the default namespace to the ramen namespace.
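A minimal sketch of that workaround, using the DRCluster name from this report and a <ramen-namespace> placeholder for wherever the ramen hub operator is deployed in your installation:

  # Export the misplaced DRCluster and drop namespace-bound metadata
  oc get drclusters.ramendr.openshift.io prsurve-vm-dev -n default -o yaml > drcluster.yaml
  # Edit drcluster.yaml: remove metadata.namespace, metadata.resourceVersion,
  # metadata.uid, and the status stanza before re-applying
  oc apply -n <ramen-namespace> -f drcluster.yaml
  # Once the copy is confirmed, delete the original from the default namespace
  oc delete drclusters.ramendr.openshift.io prsurve-vm-dev -n default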
Upstream PR posted for review: https://github.com/RamenDR/ramen/pull/444

@vbadrina This should require no fix/change in the MCO code base; tagging you to confirm whether that is true.
@srangana We have the namespace hardcoded to default. Even though it will still work with no changes, I would prefer to have it changed there. Also, there are a couple of other fixes we need to make, so we will make those changes alongside it.
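Once the scope change lands, it can be sanity-checked against the CRD itself (a sketch; the jsonpath should print Cluster rather than Namespaced):

  # Inspect the declared scope of the DRCluster CRD
  oc get crd drclusters.ramendr.openshift.io -o jsonpath='{.spec.scope}'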
Verification comments:

Verified on 4.11-113. DRCluster is now cluster-scoped, so no NAMESPACE column appears:

oc get drclusters.ramendr.openshift.io -A
NAME            AGE
kmanohar-clu1   26h
kmanohar-clu2   26h

DRPC status:

apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  creationTimestamp: "2022-07-19T08:51:58Z"
  finalizers:
  - drpc.ramendr.openshift.io/finalizer
  generation: 2
  labels:
    app: busybox-sample
    cluster.open-cluster-management.io/backup: resource
  name: busybox-drpc
  namespace: busybox-workloads-1
  resourceVersion: "5400391"
  uid: e9f4a82a-a13f-49a5-8dbc-c15778009893
spec:
  drPolicyRef:
    name: drpolicy-5m
  placementRef:
    kind: PlacementRule
    name: busybox-placement
    namespace: busybox-workloads-1
  preferredCluster: kmanohar-clu1
  pvcSelector:
    matchLabels:
      appname: busybox
status:
  actionDuration: 30.365655808s
  actionStartTime: "2022-07-19T09:16:50Z"
  conditions:
  - lastTransitionTime: "2022-07-19T09:16:50Z"
    message: Initial deployment completed
    observedGeneration: 2
    reason: Deployed
    status: "True"
    type: Available
  - lastTransitionTime: "2022-07-19T09:16:50Z"
    message: Ready
    observedGeneration: 2
    reason: Success
    status: "True"
    type: PeerReady
  lastUpdateTime: "2022-07-19T10:39:24Z"
  phase: Deployed
  preferredDecision:
    clusterName: kmanohar-clu1
    clusterNamespace: kmanohar-clu1
  progression: Completed
  resourceConditions:
    conditions:
    - lastTransitionTime: "2022-07-19T09:16:52Z"
      message: Restored PV cluster data
      observedGeneration: 1
      reason: Restored
      status: "True"
      type: ClusterDataReady
    - lastTransitionTime: "2022-07-19T09:16:52Z"
      message: VolumeReplicationGroup is replicating
      observedGeneration: 1
      reason: Replicating
      status: "False"
      type: DataProtected
    - lastTransitionTime: "2022-07-19T10:39:24Z"
      message: Cluster data of all PVs are protected
      observedGeneration: 1
      reason: Uploaded
      status: "True"
      type: ClusterDataProtected
    - lastTransitionTime: "2022-07-19T09:17:54Z"
      message: PVCs in the VolumeReplicationGroup are ready for use
      observedGeneration: 1
      reason: Ready
      status: "True"
      type: DataReady
    resourceMeta:
      generation: 1
      kind: VolumeReplicationGroup
      name: busybox-drpc
      namespace: busybox-workloads-1
      protectedpvcs:
      - busybox-pvc-1
      - busybox-pvc-10
      - busybox-pvc-11
      - busybox-pvc-12
      - busybox-pvc-13
      - busybox-pvc-14
      - busybox-pvc-15
      - busybox-pvc-16
      - busybox-pvc-17
      - busybox-pvc-18
      - busybox-pvc-19
      - busybox-pvc-20
      - busybox-pvc-2
      - busybox-pvc-3
      - busybox-pvc-4
      - busybox-pvc-6
      - busybox-pvc-5
      - busybox-pvc-8
      - busybox-pvc-7
      - busybox-pvc-9
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6156