Bug 2223553 - [MDR][Fusion][4.14 clone] PVC remain in pending state after successful failover
Summary: [MDR][Fusion][4.14 clone] PVC remain in pending state after successful failover
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.13
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.14.0
Assignee: Raghavendra Talur
QA Contact: avdhoot
URL:
Whiteboard:
Depends On: 2218487
Blocks:
 
Reported: 2023-07-18 07:55 UTC by Mudit Agarwal
Modified: 2024-05-13 03:20 UTC
CC List: 11 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 2218487
Environment:
Last Closed: 2023-11-08 18:52:23 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github RamenDR ramen pull 972 0 None Merged Set ClusterDataReady false if PV exists, is bound, and claim's deletion timestamp is non-zero 2023-07-18 07:57:05 UTC
Red Hat Product Errata RHSA-2023:6832 0 None None None 2023-11-08 18:54:09 UTC

Description Mudit Agarwal 2023-07-18 07:55:22 UTC
+++ This bug was initially created as a clone of Bug #2218487 +++

Description of problem (please be as detailed as possible and provide log
snippets):
This issue is reproducible while using the MetroDR offering of the ibm-spectrum-fusion product.
Steps to reproduce:
Initiate failover of an application to a cluster where PVCs were earlier left in the Terminating state, which is expected while changing the ReplicationState of the VRG to secondary.
After a successful failover, the PVCs remain in the Pending state.
Observed that the UID field in the PV's 'ClaimRef' is not refreshed.

Info about the Fusion relocation workflow:
As part of the Fusion 'relocation' workflow, before relocation is started on the new cluster, the VRG ReplicationState on the older cluster is updated to 'secondary' and the PVCs are marked for deletion.
When failback is later initiated on the older cluster (for the application relocated to the new cluster), the older VRG (with ReplicationState secondary) is deleted and a new VRG is created. Along with the older VRG, the PVC is also deleted automatically, since it was already marked for deletion.
Before starting failback on the cluster where we hit the PVC Pending issue, the VRG and PVC status was as follows:

VRG with ReplicationState secondary
PVCs in Terminating state

After failback, the older VRG was deleted and the new VRG was created successfully, but the PVC stayed in the Pending state. The corresponding PV's ClaimRef field, which stores the PVC UID, did not match the UID of the new (Pending) PVC, indicating that it was not refreshed.
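
For reference, a minimal diagnostic sketch using client-go (illustrative only, not part of Ramen or Fusion; the namespace, PVC name, and PV name below are hypothetical placeholders) that checks whether the PV's ClaimRef still carries the UID of the old, deleted PVC instead of the new Pending one:

// Minimal diagnostic sketch (illustrative only, not part of Ramen or Fusion):
// compare the UID stored in the PV's spec.claimRef against the UID of the
// Pending PVC to confirm the stale-ClaimRef symptom described above.
// The namespace, PVC name, and PV name are hypothetical placeholders.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pvc, err := client.CoreV1().PersistentVolumeClaims("app-namespace").
		Get(context.TODO(), "busybox-pvc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	pv, err := client.CoreV1().PersistentVolumes().
		Get(context.TODO(), "pvc-placeholder-name", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// A healthy binding has pv.Spec.ClaimRef.UID == pvc.UID; a mismatch means
	// the ClaimRef was not refreshed after the old PVC was deleted.
	if pv.Spec.ClaimRef != nil && pv.Spec.ClaimRef.UID != pvc.UID {
		fmt.Printf("stale ClaimRef: PV holds UID %s, new PVC has UID %s\n",
			pv.Spec.ClaimRef.UID, pvc.UID)
	}
}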




Version of all relevant components (if applicable): 


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)? yes


Is there any workaround available to the best of your knowledge? 


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)? 1 


Can this issue be reproduced? yes


Can this issue be reproduced from the UI? yes


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Initiate failover of an application to a cluster where PVCs were earlier left in the Terminating state, which is expected while changing the ReplicationState of the VRG to secondary.
2. After a successful failover, the PVCs remain in the Pending state.
3. Observe that the UID field in the PV's 'ClaimRef' is not refreshed.
Actual results: The PVC remains in the Pending state


Expected results: The PVC should be bound to the correct PV


Additional info: This bug was reproduced with the MetroDR service of the ibm-spectrum-fusion product.

--- Additional comment from RHEL Program Management on 2023-06-29 10:55:38 UTC ---

This bug, having no release flag set previously, now has the release flag 'odf-4.14.0' set to '?', and so is being proposed to be fixed in the ODF 4.14.0 release. Note that the 3 Acks (pm_ack, devel_ack, qa_ack), if any were previously set while the release flag was missing, have now been reset since the Acks are to be set against a release flag.

--- Additional comment from RHEL Program Management on 2023-06-29 10:55:38 UTC ---

Since this bug has severity set to 'urgent', it is being proposed as a blocker for the currently set release flag. Please resolve ASAP.

--- Additional comment from Shyamsundar on 2023-06-29 13:05:20 UTC ---

Have requested a session with Fusion engineers to get better clarity on the issue and to troubleshoot the problem. Once the issue is better understood, more information will be added here.

--- Additional comment from Shyamsundar on 2023-07-03 12:45:38 UTC ---

Currently working with IBM-Fusion team to reproduce the issue and determine the root cause.

A walkthrough of the Fusion DR operator steps and a review of the Ramen code to determine whether there are any potential issues did not yield any results; hence, we are waiting for a reproduction and the corresponding logs to analyze the issue.

--- Additional comment from pallavi on 2023-07-05 11:20:09 UTC ---



--- Additional comment from pallavi on 2023-07-05 11:30:40 UTC ---



--- Additional comment from pallavi on 2023-07-05 12:49:44 UTC ---



--- Additional comment from Shyamsundar on 2023-07-13 16:42:27 UTC ---

Upstream PR https://github.com/RamenDR/ramen/pull/972 has been merged and tested by the Fusion team.
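
For illustration, a hedged sketch of the guard described by the PR title (not the actual Ramen implementation; the function name and signature are invented for this example): ClusterDataReady should stay false while a PV still exists, is Bound, and its claim carries a non-zero deletion timestamp.

package sketch

import corev1 "k8s.io/api/core/v1"

// clusterDataReady sketches the condition named in the PR title (illustrative
// only): report false while an existing PV is still Bound and its claim has a
// non-zero (non-nil) deletion timestamp, so restore does not proceed against a
// PV whose ClaimRef still points at the terminating PVC.
func clusterDataReady(pv *corev1.PersistentVolume, pvc *corev1.PersistentVolumeClaim) bool {
	pvExistsAndBound := pv != nil && pv.Status.Phase == corev1.VolumeBound
	claimBeingDeleted := pvc != nil && pvc.DeletionTimestamp != nil

	if pvExistsAndBound && claimBeingDeleted {
		return false
	}
	return true
}

Holding the condition false until the terminating claim is gone matches the symptom reported here, where the PV's ClaimRef UID was never refreshed for the new PVC.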

--- Additional comment from Harish NV Rao on 2023-07-18 06:48:39 UTC ---

QE is acking this BZ for 4.13.1 based on the test plan - https://docs.google.com/document/d/1zg120opbyDgkcRM0rY5HA1T4gRCsF-zEk0TNEI1T8eY/edit

--- Additional comment from RHEL Program Management on 2023-07-18 06:48:47 UTC ---

This BZ is being approved for the ODF 4.14.0 release, upon receipt of the 3 ACKs (PM, Devel, QA) for the release flag 'odf-4.14.0'.

--- Additional comment from RHEL Program Management on 2023-07-18 06:48:47 UTC ---

Since this bug has been approved for the ODF 4.14.0 release, through release flag 'odf-4.14.0+', the Target Release is being set to 'ODF 4.14.0'.

--- Additional comment from RHEL Program Management on 2023-07-18 07:54:27 UTC ---

This BZ is being approved for an ODF 4.13.z z-stream update, upon receipt of the 3 ACKs (PM, Devel, QA) for the release flag 'odf-4.13.z', and having been marked for an approved z-stream update.

--- Additional comment from RHEL Program Management on 2023-07-18 07:54:27 UTC ---

Since this bug has been approved for ODF 4.13.1 release, through release flag 'odf-4.13.z+', and appropriate update number entry at the 'Internal Whiteboard', the Target Release is being set to 'ODF 4.13.1'

Comment 10 avdhoot 2023-10-03 06:52:51 UTC
Observation:
After performing failover of all apps a second time, as mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=2223553#c5, all apps are stuck in the WaitingForResourceRestore state.

Note:
Used the workaround to delete all apps:
https://bugzilla.redhat.com/show_bug.cgi?id=2239140#c2

Comment 11 avdhoot 2023-10-03 06:53:55 UTC
Observation:
After performing failover of all apps a second time, as mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=2223553#c5, all apps are stuck in the WaitingForResourceRestore state.

Note:
Used the workaround to delete all apps:
https://bugzilla.redhat.com/show_bug.cgi?id=2239140#c2

Comment 14 avdhoot 2023-10-16 06:57:21 UTC
As I recall, I started the failover approximately 10 minutes after the app was created.

Comment 16 Mudit Agarwal 2023-10-17 11:24:21 UTC
Moving this out of 4.14

Comment 18 avdhoot 2023-10-26 06:43:52 UTC

I have executed the test case mentioned in the test plan on a fresh MDR setup with the latest ODF build.
The issue mentioned in Comment 11 is not reproduced with the latest ODF build on a fresh cluster.

Test Plan - https://docs.google.com/document/d/1zg120opbyDgkcRM0rY5HA1T4gRCsF-zEk0TNEI1T8eY/edit

Product versions:
OCP  - 4.14.0
ODF  - 4.14.0-156
ACM  - 2.9.0
Ceph - 6.1

Hence, marking this bug as verified.

Comment 25 errata-xmlrpc 2023-11-08 18:52:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6832

