Bug 2294704 - [MDR] [UI]: Optimize DRPC creation when multiple workloads are deployed in a single namespace [NEEDINFO]
Summary: [MDR] [UI]: Optimize DRPC creation when multiple workloads are deployed in a single namespace
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: documentation
Version: 4.16
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Erin Donnelly
QA Contact: Neha Berry
URL:
Whiteboard:
Depends On:
Blocks: 2260844
 
Reported: 2024-06-28 09:03 UTC by akarsha
Modified: 2024-08-09 13:17 UTC (History)
CC List: 9 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
.Optimize DRPC creation when multiple workloads are deployed in a single namespace
When multiple applications refer to the same placement, enabling DR for any of them enables it for all applications that refer to that placement. If applications are created after the DRPC is created, the PVC label selector in the DRPC might not match the labels of the newer applications. Workaround: In such cases, it is recommended to disable DR and enable it again with the correct label selector.
Clone Of:
Environment:
Last Closed:
Embargoed:
skatiyar: needinfo? (rtalur)
gshanmug: needinfo? (akrai)



Description akarsha 2024-06-28 09:03:59 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

When multiple workloads are deployed in a single namespace, the following observations are seen:

(i) test-1: 
	type: subscription 
	ns: test-bz
	app1 
	app2 
	single placement 

Deployed both apps in the same namespace "test-bz" and applied the DRPolicy to app1. The DRPolicy was automatically added to app2 as well.
	
Sample output:
	
$ date; date --utc; oc get pod,pvc,vrg -n test-bz
Tuesday 25 June 2024 04:19:15 PM IST
Tuesday 25 June 2024 10:49:15 AM UTC
NAME                                        READY   STATUS    RESTARTS   AGE
pod/busybox-cephfs-pod-1-658f6b7fd5-d2jdf   1/1     Running   0          8m56s
pod/busybox-cephfs-pod-2-5888795b94-nksp2   1/1     Running   0          5m59s
pod/busybox-rbd-pod-1-7f4d7cc6c7-t6znl      1/1     Running   0          8m56s
pod/busybox-rbd-pod-2-6955f9c594-fvhzp      1/1     Running   0          5m59s

NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                           VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/busybox-cephfs-pvc-1   Bound    pvc-eb0d64c8-3f43-4ce6-83b1-81a0913c2f48   100Gi      RWO            ocs-external-storagecluster-cephfs     <unset>                 8m56s
persistentvolumeclaim/busybox-cephfs-pvc-2   Bound    pvc-768a0683-e79b-4d8d-bf14-fe1fb7aa2473   100Gi      RWO            ocs-external-storagecluster-cephfs     <unset>                 6m
persistentvolumeclaim/busybox-rbd-pvc-1      Bound    pvc-5c9c04ef-a73b-48d6-9c09-f45f465bccd9   100Gi      RWO            ocs-external-storagecluster-ceph-rbd   <unset>                 8m56s
persistentvolumeclaim/busybox-rbd-pvc-2      Bound    pvc-994a7a2a-8556-4414-a3e9-6d103dc496e9   100Gi      RWO            ocs-external-storagecluster-ceph-rbd   <unset>                 5m59s

NAME                                                                  DESIREDSTATE   CURRENTSTATE
volumereplicationgroup.ramendr.openshift.io/test-1-placement-1-drpc   primary        Primary
	
$ date; date --utc; oc get vrg -n test-bz -oyaml
Tuesday 25 June 2024 04:19:38 PM IST
Tuesday 25 June 2024 10:49:38 AM UTC
apiVersion: v1
items:
- apiVersion: ramendr.openshift.io/v1alpha1
  kind: VolumeReplicationGroup
  .
  .
  spec:
    pvcSelector:
      matchLabels:
        appname: busybox_app1
  .
  .
		
$ date; date --utc; oc get drpc -n test-bz -oyaml
Tuesday 25 June 2024 04:17:27 PM IST
Tuesday 25 June 2024 10:47:27 AM UTC
apiVersion: v1
items:
- apiVersion: ramendr.openshift.io/v1alpha1
  kind: DRPlacementControl
  .
  .
  spec:
    drPolicyRef:
      apiVersion: ramendr.openshift.io/v1alpha1
      kind: DRPolicy
      name: odr-policy-mdr
    placementRef:
      apiVersion: cluster.open-cluster-management.io/v1beta1
      kind: Placement
      name: test-1-placement-1
      namespace: test-bz
    preferredCluster: akrai-m-c11
    pvcSelector:
      matchLabels:
        appname: busybox_app1
  .
  .
	
Observations:
a. Only one DRPC was created, with PVC labels matching app1; no DRPC was created for app2, and app2's labels were not present in the DRPC or VRG.
b. Tried failover of app2 from the UI -> failover completed, and new PVCs were created for app2 in the secondary cluster.

Expectations: If more than one app uses a single placement, the label selector should be built differently; that is, the UI should create the DRPC with a label selector matching all the apps.
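
For illustration only, a DRPC pvcSelector covering both apps could use a set-based selector like the following (the label values are taken from the sample output above; this is a sketch of the expected behaviour, not something the UI currently generates):

  spec:
    pvcSelector:
      matchExpressions:
      - key: appname
        operator: In
        values:
        - busybox_app1
        - busybox_app2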

(ii) test-2:
	type: subscription
	ns: test-dff-namespace
	test-5 -> app
	test-6 -> app

Deployed both apps in the same namespace with a different placement for each app, so two placements were created.

Observations:
a. While applying the DRPolicy to test-5, two sets of labels were shown (from test-5 and test-6). Both labels were selected, but the DRPC and VRG were created with only the test-5 label selector.
b. The same behavior was seen for the test-6 app, except that the test-5 labels were not selected.
c. Tested the failover of test-5; only test-5 failed over, which is expected.
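
The labels on the PVCs and the selector that actually landed in each DRPC can be compared with standard commands, for example (namespace taken from the test above; run each command against the cluster where the resource lives):

$ oc get pvc -n test-dff-namespace --show-labels
$ oc get drpc -n test-dff-namespace -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.pvcSelector}{"\n"}{end}'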
	 

Version of all relevant components (if applicable):
OCP: 4.16.0-0.nightly-2024-06-20-005834
ODF: 4.16.0-130.stable
CEPH: 18.2.1-194.el9cp (04a992766839cd3207877e518a1238cdbac3787e) reef (stable)
ACM: 2.11.0-123
GitOps: 1.12.3
OADP (on managed clusters): 1.4.0

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. See the description
2.
3.


Actual results:
Only one DRPC was created, with PVC labels matching app1; no DRPC was created for app2, and app2's labels were not present in the DRPC or VRG.

Expected results:
If more than one app uses a single placement, the label selector should be built differently; that is, the UI should create the DRPC with a label selector matching all the apps.

Additional info:

Comment 3 Sanjal Katiyar 2024-07-05 08:16:51 UTC
We can document this behaviour for 4.16, but I want to confirm with the Ramen team (@Talur) whether we really support multiple applications sharing the same Placement in our product?

Comment 4 Sanjal Katiyar 2024-07-05 08:19:28 UTC
(In reply to Sanjal Katiyar from comment #3)
> We can document this behaviour for 4.16, but I want to confirm with the
> Ramen team (@Talur) whether we really support multiple applications sharing
> the same Placement in our product?

Also, what will be the expected behaviour in that case?

Comment 6 Sanjal Katiyar 2024-07-11 17:57:24 UTC
I doubt the scenarios replicated as part of this BZ are valid, but I will still wait for a reply on https://bugzilla.redhat.com/show_bug.cgi?id=2294704#c3 before deciding whether to close, fix, or document the BZ.

If single "Placement" is applied to many applications, applying policy to one will DR protect all.

As per the BZ description:
> Expected results:
> If more than one app uses a single placement, the label selector should be built differently; that is, the UI should create the DRPC with a label selector matching all the apps.

The UI already displays all the PVC labels present in the application namespace, so whatever labels are selected while applying the policy to the first app are passed down to the DRPC and used for all other applications as well (which share the common "Placement").
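
For reference, in the test-1 setup from the description both apps' subscriptions point at the same Placement; each Subscription carries a placement reference roughly like the following (the channel value is a placeholder and the exact field layout can vary by ACM version):

  spec:
    channel: <channel-namespace>/<channel-name>
    placement:
      placementRef:
        kind: Placement
        name: test-1-placement-1

Because both app1 and app2 point at the same Placement, only one DRPC exists (named after the Placement, test-1-placement-1-drpc in the output above), so DR protection is necessarily shared between them.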

