Bug 2207925

Summary: [RDR] When performing cross cluster failover of 2 apps cleanup of app-1 is stuck and failover of app-2 is stuck
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Pratik Surve <prsurve>
Component: odf-dr
odf-dr sub component: ramen
Assignee: Shyamsundar <srangana>
QA Contact: Sidhant Agrawal <sagrawal>
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
CC: amagrawa, idryomov, kseeger, mrajanna, muagarwa, odf-bz-bot, srangana
Version: 4.13
Target Release: ODF 4.15.0
Hardware: Unspecified
OS: Unspecified
Doc Type: Known Issue
Type: Bug
Last Closed: 2024-03-19 15:21:13 UTC
Bug Depends On: 2219437    
Bug Blocks: 2154341    

Description Pratik Surve 2023-05-17 10:25:53 UTC
Description of problem (please be as detailed as possible and provide log snippets):
[RDR] When performing cross cluster failover of 2 apps cleanup of app-1 is stuck and failover of app-2 is stuck


Version of all relevant components (if applicable):

OCP version: 4.13.0-0.nightly-2023-05-11-225357
ODF version: 4.13.0-199
CEPH version: ceph version 17.2.6-47.el9cp (6add4f24d1eff88e1db808ecdc16fd5b2db96dd4) quincy (stable)
ACM version: 2.8.0-169
SUBMARINER version: v0.15.0
VOLSYNC version: volsync-product.v0.7.1

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
yes

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy an RDR cluster.
2. Deploy app-1 on c1 and app-2 on c2.
3. Perform failover of both apps, i.e. app-1 -> c2 and app-2 -> c1 (see the sketch below).
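
For reference, a minimal sketch of how the failovers in step 3 could be triggered from the hub cluster by setting the action on each DRPlacementControl. The kubeconfig context name is an assumption; the namespace, DRPC names, and cluster names are taken from the drpc output further below. This is not necessarily the exact flow used when the bug was hit.

# Assumed flow: request failover by patching each DRPC's spec (context name "hub" is hypothetical)
oc --context hub -n openshift-gitops patch drpc busybox-1-placement-drpc \
  --type merge -p '{"spec":{"action":"Failover","failoverCluster":"prsurve-vm-d"}}'
oc --context hub -n openshift-gitops patch drpc busybox-8-placement-drpc \
  --type merge -p '{"spec":{"action":"Failover","failoverCluster":"prsurve-c1"}}'

# Watch progression until CURRENTSTATE reaches FailedOver and PEER READY returns to True
oc --context hub get drpc -A -o wide -w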


Actual results:
PVCs are stuck in the Terminating state.

oc get drpc -A -o wide           
NAMESPACE          NAME                              AGE    PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE   PROGRESSION                 START TIME             DURATION          PEER READY
openshift-gitops   busybox-1-placement-drpc          46h    prsurve-c1         prsurve-vm-d      Failover       FailedOver     Cleaning Up                 2023-05-16T12:44:10Z                     True
openshift-gitops   busybox-2-kernel-placement-drpc   2d1h   prsurve-c1                           Relocate       Relocated      Completed                   2023-05-16T10:32:57Z   10m19.27439987s   True
openshift-gitops   busybox-8-placement-drpc          2d2h   prsurve-c1         prsurve-c1        Failover       FailingOver    WaitingForResourceRestore   2023-05-16T12:44:04Z                     False
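
A hedged sketch of how the stuck cleanup could be inspected; the application namespace (busybox-1) and cluster context are assumptions based on the names above, and the commands are generic Kubernetes/Ramen diagnostics rather than steps recorded in this bug.

# List PVCs in the failed-over application's namespace on the cluster being cleaned up (names assumed)
oc --context prsurve-c1 -n busybox-1 get pvc

# A PVC stuck in Terminating usually still carries finalizers; print them per PVC
oc --context prsurve-c1 -n busybox-1 get pvc \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.finalizers}{"\n"}{end}'

# Check the VolumeReplication resources Ramen manages for the same PVCs
oc --context prsurve-c1 -n busybox-1 get volumereplication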




Expected results:


Additional info:

Comment 9 Mudit Agarwal 2023-06-05 11:36:46 UTC
Not a 4.13 blocker, adding it as a known issue.
@srangana please fill the doc text.

Comment 20 Mudit Agarwal 2023-09-11 07:18:27 UTC
Added exception flag

Comment 21 Mudit Agarwal 2023-09-26 09:07:53 UTC
What's the current status of this? Are we still targeting it for 4.14?

Comment 29 errata-xmlrpc 2024-03-19 15:21:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.15.0 security, enhancement, & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:1383