Bug 2060301 - [DR] ramen-dr-cluster-operator is looking for ramen-hub-operator-config instead of ramen-dr-cluster-operator-config
Summary: [DR] ramen-dr-cluster-operator is looking for ramen-hub-operator-config instead of ramen-dr-cluster-operator-config
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.10.0
Assignee: Benamar Mekhissi
QA Contact: Shrivaibavi Raghaventhiran
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-03-03 09:19 UTC by Pratik Surve
Modified: 2023-08-09 17:00 UTC
CC: 8 users

Fixed In Version: 4.10.0-184
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-21 09:12:51 UTC
Embargoed:


Attachments


Links
System ID | Status | Summary | Last Updated
Github RamenDR ramen pull 401 | open | Fix hard coded ramen operator config name | 2022-03-03 16:25:59 UTC
Github red-hat-storage ramen pull 18 | open | Bug 2060301: Fix hard coded ramen operator config name | 2022-03-04 00:12:14 UTC

Description Pratik Surve 2022-03-03 09:19:31 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

[DR] ramen-dr-cluster-operator is looking for ramen-hub-operator-config instead of ramen-dr-cluster-operator-config


Version of all relevant components (if applicable):
ODF version:- 4.10.0-175
OCP version:- 4.10.0-0.nightly-2022-03-01-224543

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy DR cluster
2. Deploy DR workload
3. Check the ramen-dr-cluster-operator pod logs (see the example command below)
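
For step 3, a command along these lines surfaces the error (namespace and deployment name taken from the verification logs in comment 8; adjust for your install):

$ oc logs -n openshift-dr-system deployment/ramen-dr-cluster-operator | grep ramen-hub-operator-config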


Actual results:
2022-03-03T09:03:02.158Z	ERROR	controllers.VolumeReplicationGroup.vrginstance	controllers/volumereplicationgroup_controller.go:933	error fetching PV cluster data from S3 profile s3profile-vmware-dccp-one-ocs-storagecluster	{"VolumeReplicationGroup": "busybox-workloads-2/busybox-drpc-2", "State": "primary", "error": "error when downloading PVs, err failed to get profile s3profile-vmware-dccp-one-ocs-storagecluster for caller busybox-workloads-2/busybox-drpc-2, configmaps \"ramen-hub-operator-config\" not found"}
github.com/ramendr/ramen/controllers.(*VRGInstance).processAsPrimary
	/remote-source/app/controllers/volumereplicationgroup_controller.go:933
github.com/ramendr/ramen/controllers.(*VRGInstance).processVRGActions
	/remote-source/app/controllers/volumereplicationgroup_controller.go:410
github.com/ramendr/ramen/controllers.(*VRGInstance).processVRG
	/remote-source/app/controllers/volumereplicationgroup_controller.go:398
github.com/ramendr/ramen/controllers.(*VolumeReplicationGroupReconciler).Reconcile
	/remote-source/app/controllers/volumereplicationgroup_controller.go:312
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.7/pkg/internal/controller/controller.go:298
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.7/pkg/internal/controller/controller.go:253
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.7/pkg/internal/controller/controller.go:214
2022-03-03T09:03:02.160Z	ERROR	controllers.VolumeReplicationGroup.vrginstance	controllers/volumereplicationgroup_controller.go:933	error fetching PV cluster data from S3 profile s3profile-prsurve-vm-dev-ocs-storagecluster	{"VolumeReplicationGroup": "busybox-workloads-2/busybox-drpc-2", "State": "primary", "error": "error when downloading PVs, err failed to get profile s3profile-prsurve-vm-dev-ocs-storagecluster for caller busybox-workloads-2/busybox-drpc-2, configmaps \"ramen-hub-operator-config\" not found"}
github.com/ramendr/ramen/controllers.(*VRGInstance).processAsPrimary
	/remote-source/app/controllers/volumereplicationgroup_controller.go:933
github.com/ramendr/ramen/controllers.(*VRGInstance).processVRGActions
	/remote-source/app/controllers/volumereplicationgroup_controller.go:410
github.com/ramendr/ramen/controllers.(*VRGInstance).processVRG
	/remote-source/app/controllers/volumereplicationgroup_controller.go:398
github.com/ramendr/ramen/controllers.(*VolumeReplicationGroupReconciler).Reconcile
	/remote-source/app/controllers/volumereplicationgroup_controller.go:312
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.7/pkg/internal/controller/controller.go:298
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.7/pkg/internal/controller/controller.go:253
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.7/pkg/internal/controller/controller.go:214
2022-03-03T09:03:02.160Z	INFO	controllers.VolumeReplicationGroup.vrginstance	controllers/volumereplicationgroup_controller.go:410	failed to restorePVs using profile list ([s3profile-vmware-dccp-one-ocs-storagecluster s3profile-prsurve-vm-dev-ocs-storagecluster])	{"VolumeReplicationGroup": "busybox-workloads-2/busybox-drpc-2", "State": "primary"}
2022-03-03T09:03:02.160Z	INFO	controllers.VolumeReplicationGroup.vrginstance	controllers/volumereplicationgroup_controller.go:398	Restoring PVs failed	{"VolumeReplicationGroup": "busybox-workloads-2/busybox-drpc-2", "State": "primary", "errorValue": "failed to restorePVs using profile list ([s3profile-vmware-dccp-one-ocs-storagecluster s3profile-prsurve-vm-dev-ocs-storagecluster]): %!w(<nil>)"}
2022-03-03T09:03:02.160Z	INFO	controllers.VolumeReplicationGroup.vrginstance	controllers/volumereplicationgroup_controller.go:410	Updating VRG status	{"VolumeReplicationGroup": "busybox-workloads-2/busybox-drpc-2", "State": "primary"}
2022-03-03T09:03:02.167Z	INFO	controllers.VolumeReplicationGroup.vrginstance	controllers/volumereplicationgroup_controller.go:410	Updated VRG Status {State: ProtectedPVCs:[] Conditions:[{Type:DataReady Status:Unknown ObservedGeneration:1 LastTransitionTime:2022-03-03 08:29:12 +0000 UTC Reason:Initializing Message:Initializing VolumeReplicationGroup} {Type:DataProtected Status:Unknown ObservedGeneration:1 LastTransitionTime:2022-03-03 08:29:12 +0000 UTC Reason:Initializing Message:Initializing VolumeReplicationGroup} {Type:ClusterDataReady Status:False ObservedGeneration:1 LastTransitionTime:2022-03-03 09:03:02 +0000 UTC Reason:Error Message:Failed to restore PVs (failed to restorePVs using profile list ([s3profile-vmware-dccp-one-ocs-storagecluster s3profile-prsurve-vm-dev-ocs-storagecluster]): %!w(<nil>))} {Type:ClusterDataProtected Status:Unknown ObservedGeneration:1 LastTransitionTime:2022-03-03 08:29:12 +0000 UTC Reason:Initializing Message:Initializing VolumeReplicationGroup}] ObservedGeneration:1 LastUpdateTime:2022-03-03 09:03:02 +0000 UTC}	{"VolumeReplicationGroup": "busybox-workloads-2/busybox-drpc-2", "State": "primary"}
2022-03-03T09:03:02.167Z	INFO	controllers.VolumeReplicationGroup.vrginstance	controllers/volumereplicationgroup_controller.go:398	Exiting processing VolumeReplicationGroup	{"VolumeReplicationGroup": "busybox-workloads-2/busybox-drpc-2", "State": "primary"}
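
The crux is visible in the error above: on a managed cluster the DR cluster operator tries to read the hub operator's ConfigMap, ramen-hub-operator-config, which only exists on the hub, so every S3 profile lookup fails. A minimal client-go sketch of the failure mode (the names sketch, cfgMapName, and getRamenConfig are illustrative, not Ramen's actual symbols):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// Hard coded to the hub operator's ConfigMap. On a managed cluster only
// ramen-dr-cluster-operator-config exists, so the Get below fails with
// `configmaps "ramen-hub-operator-config" not found`, exactly as logged.
const cfgMapName = "ramen-hub-operator-config"

func getRamenConfig(ctx context.Context, cs kubernetes.Interface, ns string) error {
	if _, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, cfgMapName, metav1.GetOptions{}); err != nil {
		return fmt.Errorf("failed to get operator config: %w", err)
	}
	return nil
}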

Expected results:
The ramen-dr-cluster-operator reads its own ConfigMap, ramen-dr-cluster-operator-config, and PV restore from the S3 profiles succeeds.

Additional info:

Comment 3 Benamar Mekhissi 2022-03-03 16:18:29 UTC
We have a PR out for review: https://github.com/RamenDR/ramen/pull/401
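
For reference, a sketch of the likely shape of the change, assuming the fix simply stops sharing one hard-coded constant and lets each operator pass its own ConfigMap name (see the PR for the actual diff; loadOperatorConfig is an illustrative name):

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// The ConfigMap name is a parameter: the hub operator passes
// "ramen-hub-operator-config" and the dr-cluster operator passes
// "ramen-dr-cluster-operator-config", so neither name is hard coded
// in shared code.
func loadOperatorConfig(ctx context.Context, cs kubernetes.Interface,
	ns, name string) (*corev1.ConfigMap, error) {
	return cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
}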

Comment 8 Shrivaibavi Raghaventhiran 2022-03-21 15:30:15 UTC
Tested versions:
-----------------
ODF - quay.io/rhceph-dev/ocs-registry:4.10.0-198
OCP - 4.10.0-0.nightly-2022-03-16-165813
ACM - 2.4

Test Steps:
----------
1. Install DR cluster
2. Install DR workload
3. Check the ramen-dr-cluster-operator pod logs for error messages about the wrong ConfigMap (see the commands below)
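
For step 3, one way to verify is to confirm the expected ConfigMap exists and that the logs no longer mention the hub name (namespace assumed from the log snippet below; no output from the grep means no stale references):

$ oc get configmap ramen-dr-cluster-operator-config -n openshift-dr-system
$ oc logs -n openshift-dr-system deployment/ramen-dr-cluster-operator | grep ramen-hub-operator-config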

No error messages regarding the wrong ConfigMap were seen in the logs; the logs are attached for reference.

2022-03-17T20:01:34.636Z        DEBUG   events  runtime/asm_amd64.s:1371        Normal  {"object": {"kind":"ConfigMap","namespace":"openshift-dr-system","name":"dr-cluster.ramendr.openshift.io","uid":"0fdfc395-7983-4d23-8abd-d765ef849811","apiVersion":"v1","resourceVersion":"619711"}, "reason": "LeaderElection", "message": "ramen-dr-cluster-operator-54dc8dc7f4-4j4xw_4eeff553-d946-4f99-b518-73eed129a809 became leader"}

Based on the above references, moving the BZ to Verified state.

