Bug 2266020

Summary: [CephFS - Consistency Group] - quiesce state remains in QUIESCING if IO was run on the subvolume UUID parent directory
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: sumr
Component: CephFS
Assignee: Patrick Donnelly <pdonnell>
Status: CLOSED ERRATA
QA Contact: sumr
Severity: medium
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 7.1
CC: akraj, amk, ceph-eng-bugs, cephqe-warriors, hyelloji, lusov, ngangadh, pdonnell, tserlin
Target Milestone: ---
Target Release: 7.1
Hardware: x86_64
OS: Linux
Fixed In Version: ceph-18.2.1-58.el9cp
Doc Type: No Doc Update
Last Closed: 2024-06-13 14:27:38 UTC
Type: Bug
Bug Blocks: 2267614, 2298578, 2298579    

Description sumr 2024-02-26 10:46:42 UTC
Description of problem:
This issue is filed with reference to findings in BZ 2265262.

The quiesce state remains in QUIESCING if IO was run on the subvolume UUID parent directory during the quiesce. Thereafter, even with no IO running, quiesce on the set does not succeed until it is cancelled and re-issued.
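For context, a minimal sketch of the cancel-and-reissue workaround mentioned above, assuming the 'ceph fs quiesce' volume command with --set-id/--query/--cancel/--timeout/--await options as described in the upstream quiesce documentation; the set id and member paths below are only examples and should be adjusted to the installed release:

#!/usr/bin/env python3
# Sketch of the cancel-and-reissue workaround (assumed CLI flags; verify
# against the installed ceph release). Set id and member paths are examples.
import subprocess

VOLUME = "cephfs"
SET_ID = "cg_set1"                                   # example quiesce-set id
MEMBERS = ["/volumes/_nogroup/sv1", "/volumes/_nogroup/sv2"]

def quiesce(*args):
    cmd = ["ceph", "fs", "quiesce", VOLUME, *args]
    print("+", " ".join(cmd))
    return subprocess.run(cmd, text=True)

# Inspect the stuck set: it stays QUIESCING even after the IO stops.
quiesce("--set-id", SET_ID, "--query")

# Cancel the stuck set.
quiesce("--set-id", SET_ID, "--cancel")

# Re-issue quiesce on the same members with a fresh set id and wait for it.
quiesce(*MEMBERS, "--set-id", SET_ID + "-retry", "--timeout", "300", "--await")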


Version-Release number of selected component (if applicable): "18.2.1-38.el9cp (c709c61e19be87249f04b05dec1e586c1f9dd7b0) reef"


How reproducible: Yes


Steps to Reproduce:
1. Create a few subvolumes and mount the CephFS volume root '/'.
2. Run IO to the subvolume base paths, for example /mnt/cephfs/volumes/_nogroup/sv2/cg_testfile, where /mnt/cephfs is the mount point for the CephFS volume path '/'.
3. In parallel with the IO, start quiesce on the corresponding set of subvolumes (see the sketch after these steps).
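
A minimal reproduction sketch of the above steps, assuming the CephFS root is already mounted at /mnt/cephfs and that the 'ceph fs subvolume' and 'ceph fs quiesce' commands behave as in the upstream documentation; the subvolume names, quiesce-set id, IO size, and timeout are illustrative only:

#!/usr/bin/env python3
# Reproduction sketch for the steps above (names, sizes and CLI flags are
# assumptions; adjust to the installed release and test environment).
import subprocess
import threading

VOLUME = "cephfs"
MOUNT = "/mnt/cephfs"          # CephFS volume path '/' mounted here
SUBVOLS = ["sv1", "sv2"]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=False)

# 1. Create a few subvolumes.
for sv in SUBVOLS:
    run(["ceph", "fs", "subvolume", "create", VOLUME, sv])

# 2. Run IO against the subvolume base directory (the parent of the UUID
#    directory), e.g. /mnt/cephfs/volumes/_nogroup/sv2/cg_testfile.
def write_io(sv):
    run(["dd", "if=/dev/urandom",
         "of=%s/volumes/_nogroup/%s/cg_testfile" % (MOUNT, sv),
         "bs=1M", "count=512"])

threads = [threading.Thread(target=write_io, args=(sv,)) for sv in SUBVOLS]
for t in threads:
    t.start()

# 3. While the IO is running, quiesce the same set of subvolumes.
members = ["/volumes/_nogroup/%s" % sv for sv in SUBVOLS]
run(["ceph", "fs", "quiesce", VOLUME, *members,
     "--set-id", "cg_set1", "--timeout", "300", "--await"])

for t in threads:
    t.join()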

Actual results: Quiesce does not succeed; the set remains in the QUIESCING state until it times out.


Expected results: Quiesce should succeed.


Additional info:

Comment 13 errata-xmlrpc 2024-06-13 14:27:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925