Bug 2267907 - [RDR] CephFS subvolume left behind in managed cluster after deleting the application
Summary: [RDR] CephFS subvolume left behind in managed cluster after deleting the application
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-dr
Version: 4.15
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.16.0
Assignee: Benamar Mekhissi
QA Contact: Sidhant Agrawal
URL:
Whiteboard:
Depends On:
Blocks: 2273997
 
Reported: 2024-03-05 14:06 UTC by Sidhant Agrawal
Modified: 2024-11-15 04:25 UTC
CC List: 8 users

Fixed In Version: 4.16.0-94
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 2273997 (view as bug list)
Environment:
Last Closed: 2024-07-17 13:14:56 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github RamenDR ramen pull 1276 0 None open Fix Race Condition and Standardize Resource Metadata Handling 2024-03-21 13:38:52 UTC
Github red-hat-storage ramen pull 255 0 None open Bug 2267907: Fix Race Condition and Standardize Resource Metadata Handling 2024-05-06 15:42:17 UTC
Red Hat Product Errata RHSA-2024:4591 0 None None None 2024-07-17 13:15:04 UTC

Description Sidhant Agrawal 2024-03-05 14:06:54 UTC
Description of problem (please be as detailed as possible and provide log snippets):
On an RDR setup, after performing a failover operation and then deleting the DR workload (CephFS based), observed that a few subvolumes were not deleted from the secondary managed cluster.

Version of all relevant components (if applicable):
OCP: 4.15.0-0.nightly-2024-02-29-223316
ODF: 4.15.0-150
ceph version 17.2.6-196.el9cp (cbbf2cfb549196ca18c0c9caff9124d83ed681a4) quincy (stable)
ACM: 2.10.0-DOWNSTREAM-2024-02-28-06-06-55
Submariner: 0.17.0 (iib:680159)
VolSync: 0.8.0

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
No

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
2

Is this issue reproducible?
Not always.
In the same run, test_failover[primary_up_cephfs] failed while the other test, test_failover[primary_down_cephfs], passed.

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:

1. Deploy a CephFS-based workload consisting of 20 pods and PVCs on C1 (sagrawal-nc1)
2. Wait for around (2 * scheduling_interval) to let IOs run
3. Perform failover from C1 (sagrawal-nc1) to C2 (sagrawal-nc2)
4. Verify resources are created on the secondary cluster and cleaned up from the primary cluster
5. Delete the workload
6. Verify that the backend subvolumes are deleted (a verification sketch follows below)

Automated test: tests/functional/disaster-recovery/regional-dr/test_failover.py::TestFailover::test_failover[primary_up_cephfs]

Console logs from automated test run: https://url.corp.redhat.com/3333dd9
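
For reference, a minimal sketch of the step 6 verification, run against each managed cluster. It assumes the default rook-ceph toolbox label (app=rook-ceph-tools) and the filesystem/subvolumegroup names seen in the logs below; <subvolume-name> is a placeholder.

# Resolve the rook-ceph toolbox pod in openshift-storage
TOOLBOX=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name)

# List CSI subvolumes; after the workload is deleted this should return an empty list
oc -n openshift-storage rsh "$TOOLBOX" ceph fs subvolume ls ocs-storagecluster-cephfilesystem csi --format json

# Querying a specific subvolume should then fail with "Error ENOENT: subvolume '<name>' does not exist"
oc -n openshift-storage rsh "$TOOLBOX" ceph fs subvolume getpath ocs-storagecluster-cephfilesystem <subvolume-name> csi --format json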


Actual results:
Subvolumes left behind in the managed cluster (sagrawal-nc2)
Actual error message when running the command "ceph fs subvolume getpath ocs-storagecluster-cephfilesystem csi-vol-ae98923a-fec6-42dd-aca5-52ef54768dfe csi --format json":
Error ENOENT: subvolume 'csi-vol-ae98923a-fec6-42dd-aca5-52ef54768dfe' is removed and has only snapshots retained


Expected results:
Subvolumes removed from both managed clusters.
Expected error message when running the command "ceph fs subvolume getpath ocs-storagecluster-cephfilesystem csi-vol-ae98923a-fec6-42dd-aca5-52ef54768dfe csi --format json":
Error ENOENT: subvolume 'csi-vol-ae98923a-fec6-42dd-aca5-52ef54768dfe' does not exist

Additional info:

> Workload deletion command from logs:
2024-03-04 20:39:21  15:09:21 - MainThread - ocs_ci.utility.utils - INFO - C[sagrawal-acm] - Executing command: oc delete -k ocs-workloads/rdr/busybox/cephfs/app-busybox-1/subscriptions/busybox

> Test failed after multiple retries while waiting for the subvolume to be deleted
2024-03-04 20:52:45  AssertionError: Error occurred while verifying volume is present in backend: Error during execution of command: oc -n openshift-storage rsh rook-ceph-tools-dbddf8896-qhn4j ceph fs subvolume getpath ocs-storagecluster-cephfilesystem csi-vol-ae98923a-fec6-42dd-aca5-52ef54768dfe csi --format json.
2024-03-04 20:52:45  Error is Error ENOENT: subvolume 'csi-vol-ae98923a-fec6-42dd-aca5-52ef54768dfe' is removed and has only snapshots retained
2024-03-04 20:52:45  command terminated with exit code 2
2024-03-04 20:52:45   ImageUUID: ae98923a-fec6-42dd-aca5-52ef54768dfe. Interface type: CephFileSystem
2024-03-04 20:52:45  15:22:44 - MainThread - ocs_ci.helpers.helpers - ERROR - C[sagrawal-nc2] - Volume corresponding to uuid ae98923a-fec6-42dd-aca5-52ef54768dfe is not deleted in backend


Latest output after several hours from toolbox pod (cluster - sagrawal-nc2):
sh-5.1$ date
Tue Mar  5 13:29:02 UTC 2024
sh-5.1$ ceph fs subvolume ls ocs-storagecluster-cephfilesystem csi
[
    {
        "name": "csi-vol-ae98923a-fec6-42dd-aca5-52ef54768dfe"
    },
    {
        "name": "csi-vol-aeee95f3-b0d6-4e0f-8d11-da07ff482088"
    }
]
sh-5.1$ ceph fs subvolume getpath ocs-storagecluster-cephfilesystem csi-vol-ae98923a-fec6-42dd-aca5-52ef54768dfe csi --format json

Error ENOENT: subvolume 'csi-vol-ae98923a-fec6-42dd-aca5-52ef54768dfe' is removed and has only snapshots retained
sh-5.1$ ceph fs subvolume getpath ocs-storagecluster-cephfilesystem csi-vol-aeee95f3-b0d6-4e0f-8d11-da07ff482088 csi --format json

Error ENOENT: subvolume 'csi-vol-aeee95f3-b0d6-4e0f-8d11-da07ff482088' is removed and has only snapshots retained
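
The "is removed and has only snapshots retained" state means the CSI driver removed the subvolume while snapshots on it still existed, so Ceph keeps a stub until those snapshots are deleted. A hedged diagnostic sketch (not the confirmed root-cause analysis) for checking what still pins the two subvolumes; <workload-namespace> is a placeholder:

# From the toolbox pod: list the snapshots still retained on the stale subvolume
ceph fs subvolume snapshot ls ocs-storagecluster-cephfilesystem csi-vol-ae98923a-fec6-42dd-aca5-52ef54768dfe csi

# From the cluster: check for leftover snapshot and VolSync objects that would map to those retained snapshots
oc get volumesnapshot -n <workload-namespace>
oc get volumesnapshotcontent | grep <workload-namespace>
oc get replicationdestination -A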

Comment 9 Benamar Mekhissi 2024-03-21 13:38:26 UTC
PR: https://github.com/RamenDR/ramen/pull/1276

Comment 18 errata-xmlrpc 2024-07-17 13:14:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:4591

Comment 19 Red Hat Bugzilla 2024-11-15 04:25:20 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

