Bug 2274765 - [RDR] [Discovered Apps] Ceph fs imperative workloads are not getting DR protected
Summary: [RDR] [Discovered Apps] Ceph fs imperative workloads are not getting DR protected
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.16
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.16.0
Assignee: Raghavendra Talur
QA Contact: Pratik Surve
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-04-12 18:06 UTC by Pratik Surve
Modified: 2024-11-15 04:25 UTC
CC List: 3 users

Fixed In Version: 4.16.0-107
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-07-17 13:19:23 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github RamenDR ramen pull 1391 0 None open Add volsync support for MultiNamespace 2024-05-13 08:28:04 UTC
Github red-hat-storage ramen pull 272 0 None open Bug 2274765: Add volsync support for MultiNamespace 2024-05-21 18:04:14 UTC
Red Hat Product Errata RHSA-2024:4591 0 None None None 2024-07-17 13:19:25 UTC

Description Pratik Surve 2024-04-12 18:06:26 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

[RDR] Ceph fs imperative workloads are not getting DR protected 

Version of all relevant components (if applicable):

OCP version:- 4.16.0-0.nightly-2024-04-03-065948
ODF version:- 4.16.0-73
CEPH version:- ceph version 18.2.1-76.el9cp (2517f8a5ef5f5a6a22013b2fb11a591afd474668) reef (stable)
ACM version:- 2.10.0
SUBMARINER version:- v0.17.0
VOLSYNC version:- volsync-product.v0.9.0
VOLSYNC method:- destinationCopyMethod: Direct
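
For reference, the destinationCopyMethod: Direct setting above corresponds to VolSync's "Direct" copy method, where replicated data is written straight into an existing destination PVC instead of provisioning a new PVC from a snapshot. A hypothetical Go sketch using the VolSync API (the rsync-TLS mover, names, and access mode here are assumptions, not the exact objects Ramen creates):

package example

import (
	volsyncv1alpha1 "github.com/backube/volsync/api/v1alpha1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newDirectReplicationDestination is a hypothetical helper (not the object
// Ramen actually creates). It shows what copyMethod "Direct" means in the
// VolSync Go API: data is synced directly into the named destination PVC.
func newDirectReplicationDestination(name, namespace, destinationPVC string) *volsyncv1alpha1.ReplicationDestination {
	return &volsyncv1alpha1.ReplicationDestination{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: volsyncv1alpha1.ReplicationDestinationSpec{
			// The rsync-TLS mover is assumed here; Ramen may configure a different mover.
			RsyncTLS: &volsyncv1alpha1.ReplicationDestinationRsyncTLSSpec{
				ReplicationDestinationVolumeOptions: volsyncv1alpha1.ReplicationDestinationVolumeOptions{
					CopyMethod:     volsyncv1alpha1.CopyMethodDirect,
					DestinationPVC: &destinationPVC,
					AccessModes:    []corev1.PersistentVolumeAccessMode{corev1.ReadWriteMany},
				},
			},
		},
	}
}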

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?

Yes
Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy RDR.
2. Deploy CephFS workloads.
3. Check the DRPC status (see the sketch below).
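
A minimal sketch for step 3, listing DRPlacementControls on the ACM hub with the Kubernetes dynamic client and printing their phase (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is a placeholder; point it at the ACM hub cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/hub-kubeconfig")
	if err != nil {
		panic(err)
	}

	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// DRPlacementControl resource from the Ramen API group.
	drpcGVR := schema.GroupVersionResource{
		Group:    "ramendr.openshift.io",
		Version:  "v1alpha1",
		Resource: "drplacementcontrols",
	}

	// List DRPCs across all namespaces and print the reported phase.
	drpcs, err := dyn.Resource(drpcGVR).Namespace(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	for _, drpc := range drpcs.Items {
		phase, _, _ := unstructured.NestedString(drpc.Object, "status", "phase")
		fmt.Printf("%s/%s phase=%s\n", drpc.GetNamespace(), drpc.GetName(), phase)
	}
}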


Actual results:
2024-04-12T17:54:26.918Z	ERROR	controllers.VolumeReplicationGroup	volsync/vshandler.go:516	unable to validate PVC or add ownership	{"VolumeReplicationGroup": {"name":"app-busybox-cephfs-1","namespace":"ramen-ops"}, "rid": "0547ae18-8f47-432a-ba02-4ddafbc13721", "pvcName": "busybox-pvc-1", "error": "PersistentVolumeClaim \"busybox-pvc-1\" not found"}
github.com/ramendr/ramen/controllers/volsync.(*VSHandler).TakePVCOwnership
	/remote-source/app/controllers/volsync/vshandler.go:516
github.com/ramendr/ramen/controllers/volsync.(*VSHandler).PreparePVC
	/remote-source/app/controllers/volsync/vshandler.go:498
github.com/ramendr/ramen/controllers.(*VRGInstance).reconcilePVCAsVolSyncPrimary
	/remote-source/app/controllers/vrg_volsync.go:141
github.com/ramendr/ramen/controllers.(*VRGInstance).reconcileVolSyncAsPrimary
	/remote-source/app/controllers/vrg_volsync.go:96
github.com/ramendr/ramen/controllers.(*VRGInstance).reconcileAsPrimary
	/remote-source/app/controllers/volumereplicationgroup_controller.go:932
github.com/ramendr/ramen/controllers.(*VRGInstance).processAsPrimary
	/remote-source/app/controllers/volumereplicationgroup_controller.go:883
github.com/ramendr/ramen/controllers.(*VRGInstance).processVRG
	/remote-source/app/controllers/volumereplicationgroup_controller.go:551
github.com/ramendr/ramen/controllers.(*VolumeReplicationGroupReconciler).Reconcile
	/remote-source/app/controllers/volumereplicationgroup_controller.go:438
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:119
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:316
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
	/remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:227
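
For illustration only (this is not Ramen's actual TakePVCOwnership code): the "PersistentVolumeClaim not found" error above is consistent with the PVC being looked up in the VRG's own namespace (ramen-ops) rather than in the workload namespace, which fails for discovered, multi-namespace applications. A hypothetical sketch of such a lookup:

package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	k8serrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// lookupProtectedPVC is a hypothetical helper, not Ramen's TakePVCOwnership.
// If it is called with the VRG's own namespace ("ramen-ops") while the
// discovered application's PVC lives in the workload namespace, the Get
// returns NotFound, matching the error in the log above.
func lookupProtectedPVC(ctx context.Context, c client.Client, pvcName, namespace string) (*corev1.PersistentVolumeClaim, error) {
	pvc := &corev1.PersistentVolumeClaim{}
	key := types.NamespacedName{Namespace: namespace, Name: pvcName}

	if err := c.Get(ctx, key, pvc); err != nil {
		if k8serrors.IsNotFound(err) {
			// namespace == "ramen-ops" takes this path for discovered apps;
			// the lookup needs the workload namespace instead.
			return nil, fmt.Errorf("unable to validate PVC %s/%s: %w", namespace, pvcName, err)
		}

		return nil, err
	}

	return pvc, nil
}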


Expected results:

CephFS workloads should get DR protected.

Additional info:

On the secondary site, the PVCs are getting created in the ramen-ops namespace. This is not the expected behaviour; they should be created in the namespace where the workloads are created.
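
A minimal sketch for verifying where the replicated PVCs land on the secondary managed cluster (the kubeconfig path and the "busybox-cephfs-1" workload namespace are assumptions):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is a placeholder; point it at the secondary managed cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/secondary-kubeconfig")
	if err != nil {
		panic(err)
	}

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// "busybox-cephfs-1" is an assumed workload namespace; the replicated PVCs
	// are expected there rather than in ramen-ops.
	for _, ns := range []string{"busybox-cephfs-1", "ramen-ops"} {
		pvcs, err := cs.CoreV1().PersistentVolumeClaims(ns).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}

		fmt.Printf("namespace %q: %d PVCs\n", ns, len(pvcs.Items))
		for _, pvc := range pvcs.Items {
			fmt.Printf("  %s (%s)\n", pvc.Name, pvc.Status.Phase)
		}
	}
}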

Comment 23 errata-xmlrpc 2024-07-17 13:19:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:4591

Comment 24 Red Hat Bugzilla 2024-11-15 04:25:32 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

