Bug 2139835

Summary:          ramen-dr-cluster ManifestWork does not reconcile properly
Product:          [Red Hat Storage] Red Hat OpenShift Data Foundation
Component:        odf-dr
Sub component:    ramen
Version:          4.11
Hardware:         All
OS:               All
Severity:         low
Priority:         unspecified
Status:           CLOSED ERRATA
Target Release:   ODF 4.16.0
Fixed In Version: 4.16.0-71
Doc Type:         No Doc Update
Reporter:         Jason Kincl <jkincl>
Assignee:         Elena Gershkovich <egershko>
QA Contact:       Pratik Surve <prsurve>
CC:               aclewett, egershko, jcall, kseeger, muagarwa, nberry, nsoffer, odf-bz-bot, rtalur, srangana
Flags:            egershko: needinfo-
                  egershko: needinfo-
                  kseeger: needinfo-
Last Closed:      2024-07-17 13:10:39 UTC
Type:             Bug

Description Jason Kincl 2022-11-03 15:11:05 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

If the ramen-dr-cluster ManifestWork on the ACM hub cluster is deleted, it is not recreated by the operator.

We found that we could "kickstart" a reconcile by the operator by editing the ramen-hub-operator-config ConfigMap on the hub cluster; a quick way to check the current state is sketched below.
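
For reference, a minimal sketch of how to confirm the ManifestWork is missing and whether the hub operator is reconciling at all. The deployment name ramen-hub-operator, the container name manager, and the openshift-operators namespace are taken from our environment and may differ in yours.

$ # List Ramen ManifestWorks across all managed-cluster namespaces on the hub
$ oc get manifestworks.work.open-cluster-management.io -A | grep ramen
$ # Tail the hub operator logs for reconcile activity (deployment and
$ # container names are assumptions; adjust to your install)
$ oc logs deployment/ramen-hub-operator -c manager -n openshift-operators --tail=50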

Version of all relevant components (if applicable):
OpenShift DR Hub Operator: odr-hub-operator.v4.11.2


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

No

Is there any workaround available to the best of your knowledge?

Touch a value in the ramen-hub-operator-config ConfigMap on the hub cluster and the ManifestWork will be recreated.

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

1


Steps to Reproduce:
1. Delete the ramen-hub-operator-config ManifestWork on the hub cluster for a managed cluster
2. Wait and confirm that the ManifestWork is not recreated (see the sketch below)
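
A minimal reproduction sketch; the ManifestWork name and the managed-cluster namespace below are assumptions (ramen-dr-cluster in cluster1, per the summary and Comment 2) and should be adjusted to your environment:

$ MW=ramen-dr-cluster   # ManifestWork to delete (assumed name)
$ NS=cluster1           # managed cluster namespace on the hub (assumed)
$ oc delete manifestwork "$MW" -n "$NS"
$ # Wait, then check whether the operator has recreated it
$ oc get manifestwork "$MW" -n "$NS"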


Actual results:

The ManifestWork is not recreated.

Expected results:

ManifestWork is recreated


Additional info:

Comment 2 Annette Clewett 2022-11-14 15:41:26 UTC
Current Workaround:
If the ramen-dr-cluster ManifestWork on the ACM hub cluster is deleted, it is not recreated by the operator.

You can "kickstart" a reconcile by the operator by editing the ramen-hub-operator-config ConfigMap on the hub cluster:

$ oc edit configmap ramen-hub-operator-config -n openshift-operators
[...]
apiVersion: v1
data:
  ramen_manager_config.yaml: |
    apiVersion: ramendr.openshift.io/v1alpha1
    drClusterOperator:
      catalogSourceName: redhat-operators
      catalogSourceNamespaceName: openshift-marketplace
      channelName: stable-4.11
      clusterServiceVersionName: odr-cluster-operator.v4.11.3
      deploymentAutomationEnabled: true
      namespaceName: openshift-dr-system
[...]

Change 'deploymentAutomationEnabled' to 'false' and save the change. Then edit the ConfigMap again and set 'deploymentAutomationEnabled' back to 'true'. A non-interactive variant is sketched below.
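
For convenience, a non-interactive sketch of the same toggle. It assumes the ConfigMap layout shown above and does a purely textual substitution, so review the rendered YAML before applying:

$ # Flip deploymentAutomationEnabled to false and apply
$ oc get configmap ramen-hub-operator-config -n openshift-operators -o yaml | \
    sed 's/deploymentAutomationEnabled: true/deploymentAutomationEnabled: false/' | oc apply -f -
$ # Flip it back to true to trigger the reconcile
$ oc get configmap ramen-hub-operator-config -n openshift-operators -o yaml | \
    sed 's/deploymentAutomationEnabled: false/deploymentAutomationEnabled: true/' | oc apply -f -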

Check that the 'ramen-dr-cluster' ManifestWork is created again in the correct managedcluster namespace (wherever it was deleted from):

$ oc get manifestworks.work.open-cluster-management.io -A | grep ramen
cluster1        ramen-dr-cluster                             6d21h
cluster2        ramen-dr-cluster                             33s

Comment 16 krishnaram Karthick 2023-08-24 15:24:15 UTC
Moving the verification to 4.15 as this is a low severity bug.

Comment 25 Mudit Agarwal 2024-03-11 14:44:11 UTC
Not a 4.15 blocker

Comment 34 errata-xmlrpc 2024-07-17 13:10:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:4591