Bug 2210762 - [MDR]: When DRPolicy is applied to multiple applications under same namespace, VRG is not created [NEEDINFO]
Keywords:
Status: POST
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.16.0
Assignee: Raghavendra Talur
QA Contact: akarsha
URL:
Whiteboard:
Duplicates: 2222013
Depends On:
Blocks: 2154341
 
Reported: 2023-05-29 11:36 UTC by akarsha
Modified: 2024-01-30 07:28 UTC (History)
CC List: 10 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
When a DRPlacementControl (DRPC) is created for applications that are co-located with other applications in the namespace, the DRPC has no label selector set for the applications. If any subsequent changes are made to the label selector, the validating admission webhook in the OpenShift Data Foundation Hub controller rejects the changes. Until the admission webhook is changed to allow such changes, the DRPC `validatingwebhookconfigurations` can be patched to remove the webhook.
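A sketch of that workaround follows (illustrative only; the webhook configuration name varies by release, so list the configurations first and substitute the actual name for the placeholder):

```
# List validating webhook configurations on the hub cluster to find the
# DRPC one (its exact name depends on the ODF release)
oc get validatingwebhookconfigurations

# Remove the DRPC webhook configuration so that label selector changes
# are no longer rejected; <drpc-webhook-name> is a placeholder for the
# name found above
oc delete validatingwebhookconfiguration <drpc-webhook-name>
```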
Clone Of:
Environment:
Last Closed:
Embargoed:
rtalur: needinfo? (akrai)
rtalur: needinfo? (hnallurv)




Links
System ID Private Priority Status Summary Last Updated
Github RamenDR ramen pull 904 0 None open api: make the pvcSelector optional in the DRPlacementControl CRD 2023-05-31 04:04:21 UTC

Description akarsha 2023-05-29 11:36:52 UTC
Description of problem (please be detailed as possible and provide log
snippets):
When multiple applications are deployed in a single namespace using the same PlacementRule via an ACM Subscription, and all of them are selected when applying the DRPolicy, the resulting DRPC.spec.pvcSelector is empty and no VRG is created for them.

(i) Subscription type:

3 Apps created: cronjob, mysql, helloworld
Namespace: test-bs
Single Placement: test-bs1-placement-1

$ oc get drpc -n test-bs -owide
NAME                        AGE     PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE   PROGRESSION   START TIME   DURATION   PEER READY
test-bs1-placement-1-drpc   3h40m   akrai-m22-c2  

$ oc get drpc test-bs1-placement-1-drpc  -n test-bs -oyaml
.
.
spec:
  drPolicyRef:
    apiVersion: ramendr.openshift.io/v1alpha1
    kind: DRPolicy
    name: odr-policy
  placementRef:
    apiVersion: apps.open-cluster-management.io/v1
    kind: PlacementRule
    name: test-bs1-placement-1
    namespace: test-bs
  preferredCluster: akrai-m22-c2
  pvcSelector:
    matchLabels: {}
status:
  lastUpdateTime: "2023-05-25T11:00:13Z"
  preferredDecision: {}
.
.
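For contrast, a minimal sketch of the same spec stanza with a populated selector (the label key/value here is hypothetical; the real labels depend on how the application's PVCs are labeled):

```yaml
spec:
  pvcSelector:
    matchLabels:
      appname: mysql   # hypothetical label; use the labels actually set on the app's PVCs
```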

$ oc get pvc,pod,vrg -n test-bs
NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                           AGE
persistentvolumeclaim/hello-world-cephfs    Bound    pvc-cd0bfbe0-a5a5-427c-8a87-f3e97d412efb   10Gi       RWO            ocs-external-storagecluster-cephfs     4d2h
persistentvolumeclaim/hello-world-rbd       Bound    pvc-3301f2e6-6dc1-42af-ac8c-68c8b4a394b3   10Gi       RWO            ocs-external-storagecluster-ceph-rbd   4d2h
persistentvolumeclaim/helloworld-pv-claim   Bound    pvc-ae889683-2cf7-41a9-aa65-687bff2871da   10Gi       RWO            ocs-external-storagecluster-cephfs     4d2h
persistentvolumeclaim/mysql-pv-claim        Bound    pvc-a46dc582-22b3-409e-8189-7600f94ffdb8   24Gi       RWO            ocs-external-storagecluster-ceph-rbd   4d2h

NAME                                         READY   STATUS      RESTARTS       AGE
pod/data-viewer-1-build                      0/1     Completed   0              4d2h
pod/data-viewer-5bbc649d6-bznmp              1/1     Running     0              4d2h
pod/hello-world-job-cephfs-28089325-vv7pm    0/1     Completed   0              2m46s
pod/hello-world-job-cephfs-28089326-hhgmv    0/1     Completed   0              106s
pod/hello-world-job-cephfs-28089327-4rhfc    0/1     Completed   0              46s
pod/hello-world-job-rbd-28089325-78jhx       0/1     Completed   0              2m46s
pod/hello-world-job-rbd-28089326-zcn6m       0/1     Completed   0              106s
pod/hello-world-job-rbd-28089327-4v6hd       0/1     Completed   0              46s
pod/helloworld-app-deploy-594944ffd7-glmrr   1/1     Running     0              4d2h
pod/io-writer-mysql-5f9ddd7d9c-4frdg         1/1     Running     0              4d2h
pod/io-writer-mysql-5f9ddd7d9c-52wns         1/1     Running     0              4d2h
pod/io-writer-mysql-5f9ddd7d9c-hkqtz         1/1     Running     0              4d2h
pod/io-writer-mysql-5f9ddd7d9c-s692l         1/1     Running     0              4d2h
pod/io-writer-mysql-5f9ddd7d9c-t84k2         1/1     Running     0              4d2h
pod/mysql-57dd5765d7-jqxnj                   1/1     Running     1 (4d2h ago)   4d2h


A similar observation is seen when each application uses its own PlacementRule, i.e.
(ii) Subscription type:

3 Apps created: logwriter, busybox, mysql
Namespace: test-mysql
3 Placements: one placement per application


Version of all relevant components (if applicable):
OCP: 4.13.0-0.nightly-2023-05-21-164436
ODF: 4.13.0-206.stable
ACM: 2.7.3
CEPH: 17.2.6-54.el9cp (d263f78add497a8b185de2edbf8e4ee49b430f4e) quincy (stable)

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?
Not aware of any.

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
1/1

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy an MDR-based cluster
2. Deploy multiple applications using the same namespace and the same PlacementRule
3. Select all applications and apply the DRPolicy


Actual results:
When the DRPolicy is applied to multiple applications under the same namespace, DRPC.spec.pvcSelector is empty and the VRG is not created.

Expected results:
VRG should be created for the applications.

Additional info:

Comment 5 Mudit Agarwal 2023-05-31 01:48:49 UTC
Talur, any update on this one?

Comment 15 Shyamsundar 2023-07-12 16:04:59 UTC
*** Bug 2222013 has been marked as a duplicate of this bug. ***

Comment 19 Shrivaibavi Raghaventhiran 2023-09-11 13:23:47 UTC
Tested versions:
-----------------
OCP - 4.14.0-0.nightly-2023-09-02-132842
ODF - 4.14.0-126.stable
ACM - 2.9.0

Test steps:
-----------
1. Created multiple apps under the same namespace and with the same placement rule

$ oc get pods,pvc -n placementrule
NAME                                  READY   STATUS    RESTARTS   AGE
pod/busybox-cephfs-6c8c84dc5c-k9gsn   1/1     Running   0          6m54s
pod/busybox-rbd-5d6cc5f8b9-c84hb      1/1     Running   0          15m
pod/fluentd-daemon-g2tkx              1/1     Running   0          5m36s
pod/fluentd-daemon-qkb78              1/1     Running   0          5m36s
pod/fluentd-daemon-v9rw7              1/1     Running   0          5m36s

NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                           AGE
persistentvolumeclaim/busybox-cephfs-pvc   Bound    pvc-932064eb-9648-4d35-b2f2-14a48a726a8d   5Gi        RWO            ocs-external-storagecluster-cephfs     6m55s
persistentvolumeclaim/busybox-rbd-pvc      Bound    pvc-826fa210-b470-4aa7-9843-695493fd12be   5Gi        RWO            ocs-external-storagecluster-ceph-rbd   15m
persistentvolumeclaim/fluentd-pvc          Bound    pvc-3cb29e8a-f504-469f-83eb-14bd21c5b738   50Gi       RWX            ocs-external-storagecluster-ceph-rbd   5m37s

2. Tried to apply the DRPolicy, but was only able to assign it to one app (cephfs) under the namespace (placementrule). The UI did not allow assigning the DRPolicy to the other two apps (rbd, daemonset) present in the namespace.

$ oc get pods,pvc,vrg -n placementrule
NAME                                  READY   STATUS    RESTARTS   AGE
pod/busybox-cephfs-6c8c84dc5c-k9gsn   1/1     Running   0          6m54s
pod/busybox-rbd-5d6cc5f8b9-c84hb      1/1     Running   0          15m
pod/fluentd-daemon-g2tkx              1/1     Running   0          5m36s
pod/fluentd-daemon-qkb78              1/1     Running   0          5m36s
pod/fluentd-daemon-v9rw7              1/1     Running   0          5m36s

NAME                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                           AGE
persistentvolumeclaim/busybox-cephfs-pvc   Bound    pvc-932064eb-9648-4d35-b2f2-14a48a726a8d   5Gi        RWO            ocs-external-storagecluster-cephfs     6m55s
persistentvolumeclaim/busybox-rbd-pvc      Bound    pvc-826fa210-b470-4aa7-9843-695493fd12be   5Gi        RWO            ocs-external-storagecluster-ceph-rbd   15m
persistentvolumeclaim/fluentd-pvc          Bound    pvc-3cb29e8a-f504-469f-83eb-14bd21c5b738   50Gi       RWX            ocs-external-storagecluster-ceph-rbd   5m37s

NAME                                                                         DESIREDSTATE   CURRENTSTATE
volumereplicationgroup.ramendr.openshift.io/placementrule-placement-1-drpc   primary        Primary

Comment 20 Mudit Agarwal 2023-09-26 08:31:50 UTC
Any update on this, is this still feasible for 4.14?

