Bug 2290677 - [MDR]: Post relocation new PVC created on primary cluster
Summary: [MDR]: Post relocation new PVC created on primary cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.16
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.16.0
Assignee: Shyamsundar
QA Contact: akarsha
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-06-06 07:25 UTC by akarsha
Modified: 2024-07-17 13:24 UTC
CC List: 5 users

Fixed In Version: 4.16.0-124
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-07-17 13:24:33 UTC
Embargoed:


Attachments


Links
Github RamenDR ramen, pull 1446 (open): Fix updating placement without checking for VRG readiness in relocate (last updated 2024-06-06 23:11:39 UTC)
Github red-hat-storage ramen, pull 289 (open): Bug 2290677: Update placement decision only once Primary VRG is ready (for relocate) (last updated 2024-06-08 00:18:07 UTC)
Red Hat Product Errata RHSA-2024:4591 (last updated 2024-07-17 13:24:34 UTC)

Description akarsha 2024-06-06 07:25:49 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

Post relocation, a new PVC is created on the primary cluster for 2 of the apps, "bb-cephsub" and "cronjob-sub-ns".

Note that before performing failover and relocation, we removed the osd-0 disk from a host, zapped it, and added it back as a new OSD on the same host.

Version of all relevant components (if applicable):
OCP: 4.16.0-0.nightly-2024-05-23-173505
ODF: 4.16.0-118.stable
ACM: 2.11.0-90
CEPH: 18.2.0-192.el9cp (d96dcaf9fd7ef5530bddebb07f804049c840d87e) reef (stable)

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
1/1

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy MDR cluster
2. Deploy all types of applications
3. Perform the RHCS procedure: remove the osd-0 disk from a host, zap it, and add it back as a new OSD on the same host
4. Once Ceph is fully healthy, fence c1 and fail over the applications from c1 to c2
5. Relocate all the applications


Actual results:
Post relocation, a new PVC is created on the primary cluster.

Expected results:
Relocation should succeed without a new PVC being created on the primary cluster.

Additional info:
As mentioned by Shyam, this might be a race condition, as a few other applications on the same cluster were able to fail over and relocate successfully.

Comment 5 Shyamsundar 2024-06-06 23:12:32 UTC
Details of the root cause can be found in the linked upstream PR.

The issue likely dates back to around December 2023, when changes were made in this area of the code.
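
For context, a minimal sketch in Go of the ordering the linked fix enforces during relocate. The type and helper names (vrgStatus, placement, relocateStep) are hypothetical illustrations, not Ramen's actual API: the placement decision is only flipped back to the preferred cluster once the VRG there reports Primary and ready; otherwise the reconcile requeues. Without that check, the workload could be placed while its volumes were still secondary, which matches the "new PVC created on the primary cluster" symptom above.

// Hypothetical sketch only; not Ramen's actual types or reconcile code.
package main

import (
	"errors"
	"fmt"
)

// vrgStatus stands in for the VolumeReplicationGroup status fields we care about.
type vrgStatus struct {
	State string // e.g. "Primary" or "Secondary"
	Ready bool   // volumes are primary and PVCs can be bound safely
}

// placement stands in for the ACM placement decision.
type placement struct {
	Cluster string
}

var errVRGNotReady = errors.New("primary VRG not ready yet, requeue")

// relocateStep updates the placement decision only once the VRG on the
// preferred (relocation target) cluster is Primary and ready. Returning an
// error models "requeue and retry" in a controller reconcile loop.
func relocateStep(vrgOnPreferred vrgStatus, pl *placement, preferredCluster string) error {
	if vrgOnPreferred.State != "Primary" || !vrgOnPreferred.Ready {
		// Before the fix, the placement was updated without this check,
		// letting the workload land on the preferred cluster while its
		// volumes were not yet primary, so a brand-new PVC was created
		// instead of the protected one being used.
		return errVRGNotReady
	}

	pl.Cluster = preferredCluster
	return nil
}

func main() {
	pl := placement{Cluster: "c2"} // still placed on the failover cluster

	// First reconcile: VRG on c1 is not Primary/ready yet, so we requeue.
	if err := relocateStep(vrgStatus{State: "Secondary"}, &pl, "c1"); err != nil {
		fmt.Println("reconcile 1:", err, "- placement stays on", pl.Cluster)
	}

	// Later reconcile: VRG on c1 is Primary and ready, placement flips.
	if err := relocateStep(vrgStatus{State: "Primary", Ready: true}, &pl, "c1"); err == nil {
		fmt.Println("reconcile 2: placement updated to", pl.Cluster)
	}
}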

Comment 6 Sunil Kumar Acharya 2024-06-18 06:45:26 UTC
Please update the RDT flag/text appropriately.

Comment 11 errata-xmlrpc 2024-07-17 13:24:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:4591

