Bug 2290677

Summary: [MDR]: Post relocation new PVC created on primary cluster
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation Reporter: akarsha <akrai>
Component: odf-drAssignee: Shyamsundar <srangana>
odf-dr sub component: ramen QA Contact: akarsha <akrai>
Status: CLOSED ERRATA Docs Contact:
Severity: urgent    
Priority: urgent CC: hnallurv, kseeger, muagarwa, sagrawal, srangana
Version: 4.16   
Target Milestone: ---   
Target Release: ODF 4.16.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: 4.16.0-124 Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2024-07-17 13:24:33 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
Embargoed:

Description akarsha 2024-06-06 07:25:49 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

Post relocation, a new PVC was created on the primary cluster for 2 of the apps, "bb-cephsub" and "cronjob-sub-ns".
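The unexpected PVC can be spotted by comparing creation timestamps on the primary cluster. A minimal sketch, assuming "c1" is the primary cluster's kubeconfig context and the app namespace matches the app name (both names are illustrative):

```shell
# List PVCs for one of the affected apps on the primary cluster; a PVC whose
# creationTimestamp is newer than the relocation start is the unexpected one.
# Context "c1" and namespace "bb-cephsub" are assumed names for illustration.
oc --context c1 -n bb-cephsub get pvc \
  -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,CREATED:.metadata.creationTimestamp
```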

Note that before performing failover and relocation, we removed the osd-0 disk from a host, zapped it, and added it back as a new OSD on the same host.

Version of all relevant components (if applicable):
OCP: 4.16.0-0.nightly-2024-05-23-173505
ODF: 4.16.0-118.stable
ACM: 2.11.0-90
CEPH: 18.2.0-192.el9cp (d96dcaf9fd7ef5530bddebb07f804049c840d87e) reef (stable)

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
1/1

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy an MDR cluster
2. Deploy all types of applications
3. Perform the RHCS procedure: remove the osd-0 disk from a host, zap it, and add it back as a new OSD on the same host
4. Once Ceph is fully healthy again, fence c1 and fail over the applications from c1 to c2
5. Relocate all the applications
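The OSD replacement in step 3 can be sketched with cephadm's orchestrator commands. This is a hedged outline assuming a cephadm-managed RHCS cluster; the OSD id, host name, and device path are illustrative placeholders:

```shell
# Drain and remove OSD 0, zapping its device so it can be reused.
ceph orch osd rm 0 --zap
# Poll until the removal finishes (shows draining/removal progress).
ceph orch osd rm status
# Add the zapped device back as a new OSD on the same host
# (host "host1" and device "/dev/sdb" are placeholders).
ceph orch daemon add osd host1:/dev/sdb
# Confirm the cluster returns to HEALTH_OK before fencing and failing over.
ceph -s
```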

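The fencing in step 4 is driven through the DRCluster resource on the hub. A minimal config sketch, assuming the fenced cluster's DRCluster object is named "c1" (other required spec fields omitted for brevity):

```yaml
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRCluster
metadata:
  name: c1               # DRCluster for the cluster being fenced (assumed name)
spec:
  clusterFence: Fenced   # set back to Unfenced after relocation completes
```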

Actual results:
Post relocation, a new PVC is created on the primary cluster

Expected results:
Relocation should succeed without a new PVC being created on the primary cluster

Additional info:
As mentioned by Shyam, this might be a race condition, since on the same cluster a few other applications were able to fail over and relocate successfully.

Comment 5 Shyamsundar 2024-06-06 23:12:32 UTC
Details of the root cause can be found in the linked upstream PR.

The issue should have started around Dec-2023 when some changes were made in this area of code.

Comment 6 Sunil Kumar Acharya 2024-06-18 06:45:26 UTC
Please update the RDT flag/text appropriately.

Comment 11 errata-xmlrpc 2024-07-17 13:24:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:4591