Bug 2239580 - [RDR][CEPHFS] Failover/Relocate operation ends up syncing all files back to the secondary.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.14
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.14.0
Assignee: Benamar Mekhissi
QA Contact: Pratik Surve
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-09-19 07:05 UTC by Pratik Surve
Modified: 2023-11-08 18:55 UTC
CC: 4 users

Fixed In Version: 4.14.0-148
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-08 18:54:31 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github RamenDR ramen pull 1087 0 None open Rollback to the last snapshot on failover (only) 2023-10-06 19:29:25 UTC
Github red-hat-storage ramen pull 146 0 None open Bug 2239580: Rollback to the last snapshot on failover (only) 2023-10-10 19:17:48 UTC
Red Hat Product Errata RHSA-2023:6832 0 None None None 2023-11-08 18:55:10 UTC

Description Pratik Surve 2023-09-19 07:05:25 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
[RDR][CEPHFS] Failover/Relocate operation ends up syncing all files back to the secondary.

Version of all relevant components (if applicable):
OCP version:- 4.14.0-0.nightly-2023-09-15-055234
ODF version:- 4.14.0-135
CEPH version:- ceph version 17.2.6-138.el9cp (b488c8dad42b2ecffcd96f3d76eeeecce48b8590) quincy (stable)
ACM version:- 2.9.0-109
SUBMARINER version:- devel
VOLSYNC version:- volsync-product.v0.7.4
VOLSYNC method:- destinationCopyMethod: LocalDirect

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?
yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy RDR cluster 
2. Deploy cephfs workloads
3. Perform Failover or relocate operation
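For reference, step 3 is typically driven by setting the action on the workload's DRPlacementControl resource. The sketch below is illustrative only, assuming Ramen's `ramendr.openshift.io/v1alpha1` API; the workload, cluster, and policy names are placeholders, not taken from the reported environment:

```yaml
# Hypothetical example: trigger a failover by setting spec.action on the DRPC.
# Names (busybox-drpc, dr-policy, ocp-east/ocp-west) are placeholders.
apiVersion: ramendr.openshift.io/v1alpha1
kind: DRPlacementControl
metadata:
  name: busybox-drpc
  namespace: busybox-sample
spec:
  drPolicyRef:
    name: dr-policy
  placementRef:
    kind: PlacementRule
    name: busybox-placement
  preferredCluster: ocp-east    # cluster the workload normally runs on
  failoverCluster: ocp-west     # cluster to fail over to
  action: Failover              # use Relocate for a relocate operation
  pvcSelector:
    matchLabels:
      appname: busybox
```

After the action is applied, Ramen reconciles the VolSync-based replication for the CephFS PVCs on the target cluster, which is where the full-resync behavior described in this bug was observed.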


Actual results:
The first failover or relocate operation ends up syncing all files back to the secondary cluster, rather than only the changed data.

Expected results:


Additional info:

Comment 5 Benamar Mekhissi 2023-10-06 19:29:26 UTC
Details are in the PR: https://github.com/RamenDR/ramen/pull/1087

Comment 9 errata-xmlrpc 2023-11-08 18:54:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.14.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6832

