Bug 2254494

Summary: [4.14 clone][RDR] DRPC reports wrong Phase/Progression as Deployed/Completed the moment failover is triggered
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Karolin Seeger <kseeger>
Component: odf-dr
Sub component: ramen
Assignee: Elena Gershkovich <egershko>
QA Contact: Sidhant Agrawal <sagrawal>
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
CC: egershko, kseeger, muagarwa, sheggodu
Version: 4.14
Target Release: ODF 4.14.6
Hardware: Unspecified
OS: Unspecified
Fixed In Version: 4.14.6-1
Doc Type: No Doc Update
Last Closed: 2024-04-01 09:17:36 UTC
Bug Depends On: 2239590

Description Karolin Seeger 2023-12-14 08:58:45 UTC
This bug was initially created as a copy of Bug #2239590

I am copying this bug because: 4.14 clone



Description of problem (please be as detailed as possible and provide log snippets):
These two RBD-based workloads, which were earlier in the Deployed state, were failed over to C2.
amagrawa:acm$ drpc
NAMESPACE             NAME                                   AGE     PREFERREDCLUSTER   FAILOVERCLUSTER   DESIREDSTATE   CURRENTSTATE   PROGRESSION   START TIME             DURATION           PEER READY
busybox-workloads-2   busybox-workloads-2-placement-1-drpc   2d23h   amagrawa-c1        amagrawa-c2       Failover       FailedOver     Completed     2023-09-18T19:18:58Z   48m18.757117217s   True

openshift-gitops      busybox-workloads-1-placement-drpc     2d23h   amagrawa-c1        amagrawa-c2       Failover       FailedOver     Completed     2023-09-18T19:17:45Z   44m1.2498824s      True
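For reference, the `drpc` prompt above appears to be a shell alias; a minimal sketch of the equivalent hub-side commands, assuming the standard DRPC printer columns shown in the output and the documented DRPC status fields (the alias name, flags, and field paths are assumptions, not confirmed by the reporter):

# Likely equivalent of the reporter's `drpc` alias (assumption), run against the ACM hub:
oc get drpc -A -o wide

# Narrow the view to just phase (CURRENTSTATE) and progression for one DRPC:
oc get drpc busybox-workloads-2-placement-1-drpc -n busybox-workloads-2 \
    -o jsonpath='{.status.phase}{" "}{.status.progression}{"\n"}'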



Version of all relevant components (if applicable):
ODF 4.14.0-132.stable
OCP 4.14.0-0.nightly-2023-09-02-132842
ACM 2.9.0-DOWNSTREAM-2023-08-24-09-30-12
subctl version: v0.16.0
ceph version 17.2.6-138.el9cp (b488c8dad42b2ecffcd96f3d76eeeecce48b8590) quincy (stable)


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy AppSet- and Subscription-based, DR-protected, RBD-based workloads on an RDR setup and run IOs for some time (4-5 days in this case)
2. Perform a Submariner connectivity check; ensure mirroring is working, lastGroupSyncTime is within the desired range, RBD image status is healthy, etc. (see the sketch after this list)
3. Bring the master nodes of the primary cluster down
4. Perform failover of both the AppSet- and Subscription-based workloads once the cluster is marked unavailable in the ACM UI
5. Check the drpc -o wide status in a loop before failover is triggered (see the sketch after this list): CURRENTSTATE reports FailedOver and PROGRESSION reports Completed the moment failover is triggered, stays that way for a few minutes, and only then does PROGRESSION change further to WaitForStorageMaintenanceActivation, WaitingForResourceRestore, Cleaning Up, etc.
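As referenced in steps 2 and 5, a minimal sketch of the pre-failover checks and the DRPC watch loop, run from the ACM hub unless noted otherwise (the exact commands, field paths, and polling interval are assumptions, not taken from the reporter's setup):

# Step 2 (run against each managed cluster): verify Submariner connectivity.
subctl show connections

# Step 2 (hub): confirm lastGroupSyncTime is within the desired range for every DRPC.
oc get drpc -A -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.lastGroupSyncTime}{"\n"}{end}'

# Step 5 (hub): watch CURRENTSTATE and PROGRESSION before and after triggering failover.
while true; do
    date -u
    oc get drpc -A -o wide
    sleep 10
done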

Actual results: DRPC directly reports progression as Completed the moment failover is triggered.

Must-gather logs are kept here: http://rhsqe-repo.lab.eng.blr.redhat.com/OCS/ocs-qe-bugs/bz-aman/19sept23-1/


Expected results: Reporting the progression as Completed the moment failover is triggered is misleading; the DRPC should instead show the appropriate intermediate progression status.


Additional info:

Comment 10 Elena Gershkovich 2024-03-06 09:46:05 UTC
The PR is ready: https://github.com/red-hat-storage/ramen/pull/172

Comment 16 errata-xmlrpc 2024-04-01 09:17:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.14.6 Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:1579