Description of problem (please be as detailed as possible and provide log snippets):

With the latest acm-operator-bundle-container-v2.9.0-165, ramen-hub-operator restarted on the hub and ramen-dr-cluster-operator restarted on the primary managed cluster after deleting and re-applying the DRPolicy. No ramen-dr-cluster-operator restarts were seen on the secondary cluster.

Version of all relevant components (if applicable):
OCP  - 4.14.0
ODF  - 4.14.0-146
ACM  - 2.9.0-165
Ceph - 6.1

Does this issue impact your ability to continue to work with the product (please explain in detail what the user impact is)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?

Is this issue reproducible? yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:

Steps to Reproduce:
---------------------
1. In an existing Metro-DR environment, deleted all apps (Subscription + AppSet) and the DRPolicy.
2. Created a new DRPolicy with the same name.
3. Created a Subscription-based app.
4. Applied the DRPolicy to the app.

Actual results:
The ramen-hub-operator pod restarted.

Expected results:
The ramen-hub-operator pod should not restart.

Additional info:

hub% oc get pods -n openshift-operators
NAME                                        READY   STATUS    RESTARTS      AGE
odf-multicluster-console-854b88488b-q6tpn   1/1     Running   0             9d
odfmo-controller-manager-8585fbddb8-jctpj   1/1     Running   6 (18h ago)   9d
ramen-hub-operator-7dc77db778-szcw4         2/2     Running   5 (17h ago)   9d

clust1% oc get csv,pod -n openshift-dr-system
NAME                                                                                 DISPLAY                         VERSION             REPLACES                                  PHASE
clusterserviceversion.operators.coreos.com/odr-cluster-operator.v4.14.0-146.stable   Openshift DR Cluster Operator   4.14.0-146.stable   odr-cluster-operator.v4.14.0-139.stable   Succeeded
clusterserviceversion.operators.coreos.com/volsync-product.v0.7.4                    VolSync                         0.7.4               volsync-product.v0.7.3                    Succeeded

NAME                                            READY   STATUS    RESTARTS        AGE
pod/ramen-dr-cluster-operator-9c78ffc78-2z5nx   2/2     Running   2 (3h46m ago)   3h54m
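
The exact restart reason and previous-container logs were not captured here; a sketch of how they could be pulled with standard oc commands is below. The pod names are the ones listed above, but the container name "manager" is an assumption (adjust to the actual ramen container name if it differs):

# Report each container's last termination reason (e.g. OOMKilled, Error)
hub% oc get pod ramen-hub-operator-7dc77db778-szcw4 -n openshift-operators \
      -o jsonpath='{range .status.containerStatuses[*]}{.name}{": "}{.lastState.terminated.reason}{"\n"}{end}'

# Logs from the previous (crashed) container instance on the hub
hub% oc logs ramen-hub-operator-7dc77db778-szcw4 -n openshift-operators -c manager --previous

# Same check on the primary managed cluster
clust1% oc logs ramen-dr-cluster-operator-9c78ffc78-2z5nx -n openshift-dr-system -c manager --previous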