Bug 2218487

Summary: [MDR][Fusion] PVC remain in pending state after successful failover
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Component: odf-dr
Sub component: ramen
Reporter: pallavi <pallavi.joshi>
Assignee: Shyamsundar <srangana>
QA Contact: Parikshith <pbyregow>
Status: CLOSED ERRATA
Severity: urgent
Priority: unspecified
CC: hnallurv, kseeger, muagarwa, ocs-bugs, odf-bz-bot, rtalur, sheggodu, srangana
Version: 4.13
Target Milestone: ---
Target Release: ODF 4.13.1
Hardware: Unspecified
OS: Unspecified
Fixed In Version: 4.13.1-9
Doc Type: If docs needed, set a value
Clones: 2223553 (view as bug list)
Bug Blocks: 2223553
Type: Bug
Last Closed: 2023-08-02 16:07:42 UTC

Attachments:
  Ramen Artifacts
  Ramen-Screenshots
  PersistentVolumes-SourceAndTargetClusters

Description pallavi 2023-06-29 10:55:32 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
This issue is reproducible while using the Metro DR offering of the ibm-spectrum-fusion product.
Steps to reproduce:
Initiate failover of an application to a cluster where the earlier PVCs are in Terminating state (they are marked for deletion while the VRG ReplicationState is changed to secondary).
After a successful failover, the PVCs remain in Pending state.
Observed that the UID field in the PV's claimRef was not refreshed (see the check sketched below).
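
For reference, here is a minimal Go sketch of how that mismatch can be checked with client-go; the namespace, PVC name, and PV name are placeholders, not values from this report:

// Sketch only: compare the UID stored in the PV's claimRef against the UID
// of the PVC that is stuck in Pending. Names below are placeholders.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.TODO()
	pvc, err := clientset.CoreV1().PersistentVolumeClaims("app-namespace").
		Get(ctx, "app-pvc", metav1.GetOptions{}) // placeholder PVC
	if err != nil {
		panic(err)
	}
	pv, err := clientset.CoreV1().PersistentVolumes().
		Get(ctx, "app-pv", metav1.GetOptions{}) // placeholder PV
	if err != nil {
		panic(err)
	}

	// If the PV still carries the UID of the deleted (Terminating) PVC,
	// the recreated PVC cannot bind to it and stays Pending.
	if pv.Spec.ClaimRef != nil && pv.Spec.ClaimRef.UID != pvc.UID {
		fmt.Printf("stale claimRef: PV records UID %s, new PVC has UID %s\n",
			pv.Spec.ClaimRef.UID, pvc.UID)
	}
}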

Info about the Fusion relocation workflow:
As part of the Fusion 'relocation' workflow, before starting relocation on the new cluster, the VRG ReplicationState on the older cluster is updated to 'secondary' and the PVCs are also marked for deletion (see the sketch after this paragraph).
When failback is later initiated on the older cluster (for the application relocated to the new cluster), the older VRG (with ReplicationState secondary) is deleted and a new VRG is created. Along with the older VRG, the PVC is also deleted automatically, since it was already marked for deletion.
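
A hedged sketch of the 'mark secondary' step above, assuming the Ramen VRG CRD is served as ramendr.openshift.io/v1alpha1 with a spec.replicationState field; the namespace and VRG name are placeholders:

// Sketch only: flip a VolumeReplicationGroup's spec.replicationState to
// "secondary" with the dynamic client, as the relocation workflow does
// before the PVCs are marked for deletion. GVR and names are assumptions.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Assumed GVR for Ramen's VolumeReplicationGroup.
	vrgGVR := schema.GroupVersionResource{
		Group:    "ramendr.openshift.io",
		Version:  "v1alpha1",
		Resource: "volumereplicationgroups",
	}

	patch := []byte(`{"spec":{"replicationState":"secondary"}}`)
	_, err = dyn.Resource(vrgGVR).Namespace("app-namespace").
		Patch(context.TODO(), "app-vrg", types.MergePatchType, patch, metav1.PatchOptions{}) // placeholder VRG
	if err != nil {
		panic(err)
	}
}

Marking the PVCs for deletion in the same workflow is what leaves them in Terminating state on the older cluster until failback.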
Before starting failback on the cluster where we hit the PVC Pending issue, the VRG and PVC status was as follows:

- VRG with ReplicationState secondary
- PVCs in Terminating state

After failback, the older VRG was deleted and a new VRG was created successfully, but the PVC stayed in Pending state. The claimRef field of the corresponding PV, which stores the PVC UID, did not match the UID of the new (Pending) PVC, indicating that the claimRef was not refreshed.
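
Why a stale UID blocks binding can be illustrated with a simplified version of the check the in-tree PV controller applies when matching a PV's claimRef against a claim (illustrative Go only, not the actual controller code):

// Illustration: a PV whose claimRef records a UID is only considered bound
// to a claim with that exact UID; a recreated PVC of the same name and
// namespace has a fresh UID and therefore never matches, so it stays Pending.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
)

// claimRefMatches mirrors, in simplified form, the binding precondition:
// namespace and name must match, and the recorded UID must be empty or
// equal to the claim's UID.
func claimRefMatches(pv *corev1.PersistentVolume, pvc *corev1.PersistentVolumeClaim) bool {
	ref := pv.Spec.ClaimRef
	if ref == nil {
		return true // unbound PV, eligible for any otherwise-matching claim
	}
	if ref.Namespace != pvc.Namespace || ref.Name != pvc.Name {
		return false
	}
	return ref.UID == "" || ref.UID == pvc.UID
}

func main() {
	pv := &corev1.PersistentVolume{}
	pv.Spec.ClaimRef = &corev1.ObjectReference{
		Namespace: "app-namespace",
		Name:      "app-pvc",
		UID:       types.UID("old-uid"), // UID of the deleted (Terminating) PVC
	}

	newPVC := &corev1.PersistentVolumeClaim{}
	newPVC.Namespace = "app-namespace"
	newPVC.Name = "app-pvc"
	newPVC.UID = types.UID("new-uid") // recreated PVC gets a fresh UID

	fmt.Println("can bind:", claimRefMatches(pv, newPVC)) // false: stale UID blocks binding
}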




Version of all relevant components (if applicable): 


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)? yes


Is there any workaround available to the best of your knowledge? 


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)? 1 


Is this issue reproducible? yes


Can this issue be reproduced from the UI? yes


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Initiate failover of an application to a cluster where the earlier PVCs are in Terminating state (they are marked for deletion while the VRG ReplicationState is changed to secondary).
2. After a successful failover, observe that the PVCs remain in Pending state.
3. Observe that the UID field in the PV's claimRef was not refreshed.
Actual results: The PVC remains in Pending state.


Expected results: The PVC should be bound to the correct PV.
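
Not a workaround stated in this report, but as an illustration of what "refreshing" the claimRef would involve, here is a sketch that clears the stale uid (and resourceVersion) from the PV's claimRef with a JSON merge patch so the PV can bind to the recreated PVC; the PV name is a placeholder and this should be applied with care:

// Sketch only (assumed remediation, not from the bug report): setting fields
// to null in a JSON merge patch removes them, leaving a claimRef that still
// points at the PVC's namespace/name but no longer pins the old UID.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	patch := []byte(`{"spec":{"claimRef":{"uid":null,"resourceVersion":null}}}`)
	_, err = clientset.CoreV1().PersistentVolumes().
		Patch(context.TODO(), "app-pv", types.MergePatchType, patch, metav1.PatchOptions{}) // placeholder PV name
	if err != nil {
		panic(err)
	}
}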


Additional info: This bug was reproduced in the Metro DR service of the ibm-spectrum-fusion product.

Comment 5 pallavi 2023-07-05 11:20:09 UTC
Created attachment 1974090 [details]
Ramen Artifacts

Comment 6 pallavi 2023-07-05 11:30:40 UTC
Created attachment 1974091 [details]
Ramen-Screenshots

Comment 7 pallavi 2023-07-05 12:49:44 UTC
Created attachment 1974134 [details]
PersistentVolumes-SourceAndTargetClusters

Comment 9 Harish NV Rao 2023-07-18 06:48:39 UTC
QE is acking this BZ for 4.13.1 based on the test plan - https://docs.google.com/document/d/1zg120opbyDgkcRM0rY5HA1T4gRCsF-zEk0TNEI1T8eY/edit

Comment 22 errata-xmlrpc 2023-08-02 16:07:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.13.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:4437