When a DRPlacementControl (DRPC) is created for applications that are co-located with other applications in the same namespace, the DRPC is created without a label selector for the application, so its PVCs are not selected for protection. If any subsequent changes are made to the label selector, the validating admission webhook in the OpenShift Data Foundation Hub controller rejects the changes. As a workaround, until the admission webhook is changed to allow such edits, the DRPC `validatingwebhookconfigurations` resource can be patched to remove the webhook.
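A minimal sketch of that patch (the webhook configuration name and the webhook index are assumptions; list the configurations first and verify the entry before patching):
$ oc get validatingwebhookconfigurations | grep -i drpc    # locate the DRPC webhook configuration; the name varies by install
$ oc patch validatingwebhookconfigurations <drpc-webhook-config-name> --type=json -p '[{"op": "remove", "path": "/webhooks/0"}]'    # removes the first webhook entry; <drpc-webhook-config-name> is a placeholder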
Description of problem (please be as detailed as possible and provide log snippets):
When multiple applications are deployed in a single namespace using the same PlacementRule via an ACM subscription, and all of them are selected when applying the DRPolicy, DRPC.spec.pvcSelector is empty and no VRG is created for them.
(i) Subscription type:
3 Apps created: cronjob, mysql, helloworld
Namespace: test-bs
Single Placement: test-bs1-placement-1
$ oc get drpc -n test-bs -owide
NAME AGE PREFERREDCLUSTER FAILOVERCLUSTER DESIREDSTATE CURRENTSTATE PROGRESSION START TIME DURATION PEER READY
test-bs1-placement-1-drpc 3h40m akrai-m22-c2
$ oc get drpc test-bs1-placement-1-drpc -n test-bs -oyaml
.
.
spec:
  drPolicyRef:
    apiVersion: ramendr.openshift.io/v1alpha1
    kind: DRPolicy
    name: odr-policy
  placementRef:
    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    name: test-bs1-placement-1
    namespace: test-bs
  preferredCluster: akrai-m22-c2
  pvcSelector:
    matchLabels: {}
status:
  lastUpdateTime: "2023-05-25T11:00:13Z"
  preferredDecision: {}
.
.
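For contrast, a correctly populated DRPC would carry a non-empty PVC label selector along these lines (the label key and value are illustrative, not taken from this cluster):
  pvcSelector:
    matchLabels:
      appname: mysql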
$ oc get pvc,pod,vrg -n test-bs
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/hello-world-cephfs Bound pvc-cd0bfbe0-a5a5-427c-8a87-f3e97d412efb 10Gi RWO ocs-external-storagecluster-cephfs 4d2h
persistentvolumeclaim/hello-world-rbd Bound pvc-3301f2e6-6dc1-42af-ac8c-68c8b4a394b3 10Gi RWO ocs-external-storagecluster-ceph-rbd 4d2h
persistentvolumeclaim/helloworld-pv-claim Bound pvc-ae889683-2cf7-41a9-aa65-687bff2871da 10Gi RWO ocs-external-storagecluster-cephfs 4d2h
persistentvolumeclaim/mysql-pv-claim Bound pvc-a46dc582-22b3-409e-8189-7600f94ffdb8 24Gi RWO ocs-external-storagecluster-ceph-rbd 4d2h
NAME READY STATUS RESTARTS AGE
pod/data-viewer-1-build 0/1 Completed 0 4d2h
pod/data-viewer-5bbc649d6-bznmp 1/1 Running 0 4d2h
pod/hello-world-job-cephfs-28089325-vv7pm 0/1 Completed 0 2m46s
pod/hello-world-job-cephfs-28089326-hhgmv 0/1 Completed 0 106s
pod/hello-world-job-cephfs-28089327-4rhfc 0/1 Completed 0 46s
pod/hello-world-job-rbd-28089325-78jhx 0/1 Completed 0 2m46s
pod/hello-world-job-rbd-28089326-zcn6m 0/1 Completed 0 106s
pod/hello-world-job-rbd-28089327-4v6hd 0/1 Completed 0 46s
pod/helloworld-app-deploy-594944ffd7-glmrr 1/1 Running 0 4d2h
pod/io-writer-mysql-5f9ddd7d9c-4frdg 1/1 Running 0 4d2h
pod/io-writer-mysql-5f9ddd7d9c-52wns 1/1 Running 0 4d2h
pod/io-writer-mysql-5f9ddd7d9c-hkqtz 1/1 Running 0 4d2h
pod/io-writer-mysql-5f9ddd7d9c-s692l 1/1 Running 0 4d2h
pod/io-writer-mysql-5f9ddd7d9c-t84k2 1/1 Running 0 4d2h
pod/mysql-57dd5765d7-jqxnj 1/1 Running 1 (4d2h ago) 4d2h
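Note that the vrg part of the query above returns nothing. Assuming no VRG exists, querying it directly is expected to report no resources:
$ oc get vrg -n test-bs
No resources found in test-bs namespace.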
A similar observation is seen when using a different PlacementRule per application, that is:
(ii) Subscription type:
3 Apps created: logwriter, busybox, mysql
Namespace: test-mysql
3 Placements: one placement for each app
Version of all relevant components (if applicable):
OCP: 4.13.0-0.nightly-2023-05-21-164436
ODF: 4.13.0-206.stable
ACM: 2.7.3
CEPH: 17.2.6-54.el9cp (d263f78add497a8b185de2edbf8e4ee49b430f4e) quincy (stable)
Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Is there any workaround available to the best of your knowledge?
Not aware of any.
Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2
Is this issue reproducible?
1/1
Can this issue be reproduced from the UI?
If this is a regression, please provide more details to justify this:
Steps to Reproduce:
1. Deploy an MDR-based cluster
2. Deploy multiple applications in the same namespace using the same PlacementRule
3. Select all applications and apply the DRPolicy, then check DRPC.spec.pvcSelector as shown below
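A quick check after step 3 (the namespace below matches scenario (i); adjust as needed). An empty selector in the output reproduces the bug:
$ oc get drpc -n test-bs -o jsonpath='{.items[*].spec.pvcSelector}'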
Actual results:
When the DRPolicy is applied to multiple applications under the same namespace, DRPC.spec.pvcSelector is empty and no VRG is created
Expected results:
VRG should be created for the applications.
Additional info: