Bug 2235708

Summary: [RDR][CEPHFS] When failover is performed for a CephFS workload, the DRPC is stuck in `Cleaning Up` and sync for the workloads is stopped
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Pratik Surve <prsurve>
Component: odf-dr
odf-dr sub component: ramen
Assignee: Elena Gershkovich <egershko>
QA Contact: Pratik Surve <prsurve>
Status: CLOSED ERRATA
Severity: urgent
Priority: unspecified
CC: bmekhiss, egershko, kramdoss, kseeger, muagarwa, sheggodu
Version: 4.14
Flags: kramdoss: needinfo+
Target Release: ODF 4.14.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: 4.14.0-124
Doc Type: No Doc Update
Type: Bug
Last Closed: 2023-11-08 18:54:25 UTC

Description Pratik Surve 2023-08-29 13:46:13 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

[RDR][CEPHFS] When failover is performed for a CephFS workload, the DRPC is stuck in `Cleaning Up` and sync for the workloads is stopped.

Version of all relevant components (if applicable):

OCP version:- 4.14.0-0.nightly-2023-08-11-055332
ODF version:- 4.14.0-117
CEPH version:- ceph version 17.2.6-115.el9cp (968b780fae1bced13d322da769a9d7223d701a01) quincy (stable)
ACM version:- 2.9.0-109
SUBMARINER version:- 0.16.0
VOLSYNC version:- volsync-product.v0.7.4
VOLSYNC method:- destinationCopyMethod: LocalDirect
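For reference, version information like the above can usually be gathered with commands along the following lines; the namespaces and the presence of the rook-ceph toolbox are assumptions and may differ per deployment.

# OCP version (run against each cluster)
oc get clusterversion version

# ODF, VolSync and Submariner operator versions (CSV namespaces vary per install)
oc get csv -n openshift-storage
oc get csv -A | grep -Ei 'volsync|submariner'

# Ceph version (assumes the rook-ceph toolbox deployment is present)
oc -n openshift-storage rsh deploy/rook-ceph-tools ceph version

# ACM version (run against the hub cluster)
oc get multiclusterhub -A -o jsonpath='{.items[0].status.currentVersion}{"\n"}'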

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
Yes

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy an RDR cluster
2. Set the VolSync method to `LocalDirect`
3. Deploy a CephFS workload
4. Perform failover (a CLI sketch is shown after this list)
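A minimal CLI sketch of step 4, using the DRPC name, namespace, and cluster names from the output below; in practice failover is normally triggered from the ACM console, and the field names follow the Ramen DRPlacementControl API:

# On the hub cluster: set the failover action on the DRPC
oc patch drpc busybox-1-cephfs-placement-drpc -n openshift-gitops \
  --type merge -p '{"spec":{"action":"Failover","failoverCluster":"prsurve-c1"}}'

# Watch the DRPC state and progression
oc get drpc -A -o wide -w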
 


Actual results:
oc get drpc -A -o wide -w                                                                    
NAMESPACE          NAME                              AGE     PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE   PROGRESSION   START TIME             DURATION              PEER READY
openshift-gitops   busybox-1-cephfs-placement-drpc   3d21h   prsurve-vm-d       prsurve-c1        Failover       FailedOver     Cleaning Up   2023-08-29T04:23:54Z                         False
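To narrow down where cleanup is stuck, the DRPC conditions on the hub and the Ramen/VolSync resources on the managed clusters are the usual places to look; the workload namespace below (busybox-workloads-1) is a hypothetical placeholder, not taken from this report.

# On the hub: full DRPC status, including the Available and PeerReady conditions
oc describe drpc busybox-1-cephfs-placement-drpc -n openshift-gitops

# On each managed cluster: VolumeReplicationGroup and VolSync state for the workload
oc get volumereplicationgroup -n busybox-workloads-1 -o yaml
oc get replicationsource,replicationdestination -n busybox-workloads-1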



Expected results:
The DRPC should reach the Completed progression state.

Additional info:

Comment 11 errata-xmlrpc 2023-11-08 18:54:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6832