Description of problem:
This issue is filed with reference to findings in BZ 2265262. The quiesce state remains in QUIESCING if IO is run on the subvolume's uuid parent directory during the quiesce. Thereafter, even with no IO running, quiesce on the set does not succeed until it is cancelled and reset.

Version-Release number of selected component (if applicable):
18.2.1-38.el9cp (c709c61e19be87249f04b05dec1e586c1f9dd7b0) reef

How reproducible:
Yes

Steps to Reproduce:
1. Create a few subvolumes and mount the cephfs volume path '/'.
2. Run IO to the subvolume paths, e.g. /mnt/cephfs/volumes/_nogroup/sv2/cg_testfile, where /mnt/cephfs is the mount point for the cephfs volume path '/'.
3. In parallel with the IO, start quiesce on the respective set of subvolumes.

Actual results:
Quiesce does not succeed; it remains in QUIESCING until timeout.

Expected results:
Quiesce should succeed.

Additional info:
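The steps above can be sketched as a shell session. This is a minimal, hedged reproduction outline, not a verified script: the volume name `cephfs`, subvolume names `sv1`/`sv2`, set id `qset1`, and the file sizes are illustrative, and the exact `ceph fs quiesce` member syntax and flags should be checked against the Reef CLI on the affected build.

```shell
# Assumes a Reef (18.2.x) cluster with a cephfs volume named 'cephfs'
# whose root '/' is mounted at /mnt/cephfs. All names are illustrative.

# 1. Create a few subvolumes
ceph fs subvolume create cephfs sv1
ceph fs subvolume create cephfs sv2

# 2. Run IO to a path under the subvolume's uuid parent directory,
#    e.g. /mnt/cephfs/volumes/_nogroup/sv2/cg_testfile (background it)
dd if=/dev/zero of=/mnt/cephfs/volumes/_nogroup/sv2/cg_testfile \
   bs=4M count=256 oflag=direct &

# 3. In parallel, start quiesce on the set of subvolume roots and await it.
#    With this bug, the set stays in QUIESCING until the timeout.
ceph fs quiesce cephfs /volumes/_nogroup/sv1 /volumes/_nogroup/sv2 \
    --set-id qset1 --await

# Observed workaround per the report: the stuck set has to be
# cancelled and then re-created before quiesce can succeed.
ceph fs quiesce cephfs --set-id qset1 --cancel
```

Since these commands require a live Ceph cluster, they are offered as a reproduction outline rather than a runnable test.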
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:3925