Description of problem (please be as detailed as possible and provide log snippets):

[DR] Applications are not getting deployed; the DRPC reports: Operation cannot be fulfilled on manifestworks.work.open-cluster-management.io "busybox-drpc-2-busybox-workloads-2-vrg-mw": the object has been modified; please apply your changes to the latest version and try again

Version of all relevant components (if applicable):
OCP version: 4.10.0-0.nightly-2022-03-29-163038
ODF version: 4.10.0-210
ACM version: 2.5.0-DOWNSTREAM-2022-03-29-05-04-50

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
1. Deploy a DR cluster over ACM 2.5
2. Deploy a workload
3. Check the DRPC status

Actual results:

status:
  conditions:
  - lastTransitionTime: "2022-04-02T13:06:39Z"
    message: 'failed to create or update VolumeReplicationGroup manifest in namespace
      vmware-dccp-one (Operation cannot be fulfilled on manifestworks.work.open-cluster-management.io
      "busybox-drpc-2-busybox-workloads-2-vrg-mw": the object has been modified;
      please apply your changes to the latest version and try again)'
    observedGeneration: 2
    reason: Deploying
    status: "False"
    type: Available
  - lastTransitionTime: "2022-04-02T13:06:39Z"
    message: Ready
    observedGeneration: 2
    reason: Success
    status: "True"
    type: PeerReady

Expected results:

The workload should be deployed.

Additional info:

I have tested workload creation via the ACM UI as well as the CLI.
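For context: the message in the Available condition is the Kubernetes apiserver's standard optimistic-concurrency conflict (HTTP 409), returned when an Update is submitted with a stale resourceVersion. Below is a minimal client-go sketch of the usual remedy, retrying the read-modify-write loop on conflict. It is only an illustration of the general pattern, not the actual Ramen fix (that is in the PR linked in a later comment), and it mutates a plain Deployment with hypothetical names rather than a ManifestWork to stay self-contained:

-----------------------------
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	// Build a client from ~/.kube/config (assumes a reachable cluster).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// RetryOnConflict re-runs the read-modify-write whenever the apiserver
	// answers 409 Conflict ("the object has been modified; please apply
	// your changes to the latest version and try again").
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-GET on every attempt so the Update carries the latest resourceVersion.
		dep, getErr := client.AppsV1().Deployments("default").Get(
			context.TODO(), "example-deployment", metav1.GetOptions{}) // hypothetical name
		if getErr != nil {
			return getErr
		}
		if dep.Annotations == nil {
			dep.Annotations = map[string]string{}
		}
		dep.Annotations["example/touched"] = "true" // the mutation we want to apply
		_, updateErr := client.AppsV1().Deployments("default").Update(
			context.TODO(), dep, metav1.UpdateOptions{})
		return updateErr
	})
	fmt.Println("update finished, err:", err)
}
-----------------------------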
Trying to figure out from the ACM team what changes Ramen is supposed to make. Updated the GitHub issue here: https://github.com/stolostron/backlog/issues/21355
A PR has been created to fix this issue: https://github.com/RamenDR/ramen/pull/423
In the meantime, you can work around this problem by editing the clusterrole for odr-hub-operator.v4.10.0*** and adding the following entries:
-----------------------------
- apiGroups:
  - apps.open-cluster-management.io
  resources:
  - placementrule/finalizers
  verbs:
  - '*'
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - placementdecisions
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - cluster.open-cluster-management.io
  resources:
  - placementdecisions/status
  verbs:
  - get
  - patch
  - update
-----------------------------
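If you want to script this workaround rather than edit interactively (oc edit clusterrole <name> works just as well), here is a hedged client-go sketch that appends the same three rules, retrying on conflict. The clusterrole name below is a placeholder, since the installed name is versioned; look up the exact name first with oc get clusterrole | grep odr-hub-operator:

-----------------------------
package main

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// The three rules from the workaround above, verbatim.
	extra := []rbacv1.PolicyRule{
		{
			APIGroups: []string{"apps.open-cluster-management.io"},
			Resources: []string{"placementrule/finalizers"},
			Verbs:     []string{"*"},
		},
		{
			APIGroups: []string{"cluster.open-cluster-management.io"},
			Resources: []string{"placementdecisions"},
			Verbs:     []string{"create", "delete", "get", "list", "patch", "update", "watch"},
		},
		{
			APIGroups: []string{"cluster.open-cluster-management.io"},
			Resources: []string{"placementdecisions/status"},
			Verbs:     []string{"get", "patch", "update"},
		},
	}

	// Placeholder: substitute the versioned clusterrole name from your hub.
	const roleName = "odr-hub-operator.vX.Y.Z"

	// Read-modify-write with a retry on 409 Conflict (the same failure mode
	// this bug is about). Note: this does not de-duplicate rules if run twice.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		role, getErr := client.RbacV1().ClusterRoles().Get(
			context.TODO(), roleName, metav1.GetOptions{})
		if getErr != nil {
			return getErr
		}
		role.Rules = append(role.Rules, extra...)
		_, updateErr := client.RbacV1().ClusterRoles().Update(
			context.TODO(), role, metav1.UpdateOptions{})
		return updateErr
	})
	if err != nil {
		panic(err)
	}
}
-----------------------------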
Moving this out of 4.10; please do not backport to 4.10.
Verification comments:

Verified on 4.11-113.

DRPC status:

apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  creationTimestamp: "2022-07-19T08:51:58Z"
  finalizers:
  - drpc.ramendr.openshift.io/finalizer
  generation: 2
  labels:
    app: busybox-sample
    cluster.open-cluster-management.io/backup: resource
  name: busybox-drpc
  namespace: busybox-workloads-1
  resourceVersion: "5400391"
  uid: e9f4a82a-a13f-49a5-8dbc-c15778009893
spec:
  drPolicyRef:
    name: drpolicy-5m
  placementRef:
    kind: PlacementRule
    name: busybox-placement
    namespace: busybox-workloads-1
  preferredCluster: kmanohar-clu1
  pvcSelector:
    matchLabels:
      appname: busybox
status:
  actionDuration: 30.365655808s
  actionStartTime: "2022-07-19T09:16:50Z"
  conditions:
  - lastTransitionTime: "2022-07-19T09:16:50Z"
    message: Initial deployment completed
    observedGeneration: 2
    reason: Deployed
    status: "True"
    type: Available
  - lastTransitionTime: "2022-07-19T09:16:50Z"
    message: Ready
    observedGeneration: 2
    reason: Success
    status: "True"
    type: PeerReady
  lastUpdateTime: "2022-07-19T10:39:24Z"
  phase: Deployed
  preferredDecision:
    clusterName: kmanohar-clu1
    clusterNamespace: kmanohar-clu1
  progression: Completed
  resourceConditions:
    conditions:
    - lastTransitionTime: "2022-07-19T09:16:52Z"
      message: Restored PV cluster data
      observedGeneration: 1
      reason: Restored
      status: "True"
      type: ClusterDataReady
    - lastTransitionTime: "2022-07-19T09:16:52Z"
      message: VolumeReplicationGroup is replicating
      observedGeneration: 1
      reason: Replicating
      status: "False"
      type: DataProtected
    - lastTransitionTime: "2022-07-19T10:39:24Z"
      message: Cluster data of all PVs are protected
      observedGeneration: 1
      reason: Uploaded
      status: "True"
      type: ClusterDataProtected
    - lastTransitionTime: "2022-07-19T09:17:54Z"
      message: PVCs in the VolumeReplicationGroup are ready for use
      observedGeneration: 1
      reason: Ready
      status: "True"
      type: DataReady
    resourceMeta:
      generation: 1
      kind: VolumeReplicationGroup
      name: busybox-drpc
      namespace: busybox-workloads-1
      protectedpvcs:
      - busybox-pvc-1
      - busybox-pvc-10
      - busybox-pvc-11
      - busybox-pvc-12
      - busybox-pvc-13
      - busybox-pvc-14
      - busybox-pvc-15
      - busybox-pvc-16
      - busybox-pvc-17
      - busybox-pvc-18
      - busybox-pvc-19
      - busybox-pvc-20
      - busybox-pvc-2
      - busybox-pvc-3
      - busybox-pvc-4
      - busybox-pvc-6
      - busybox-pvc-5
      - busybox-pvc-8
      - busybox-pvc-7
      - busybox-pvc-9
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6156