Description of problem:
Quiesce state on subvolumes continues to remain in QUIESCING state for >20 mins.

[root@ceph-sumar-regression-1ygael-node7 ~]# ceph fs quiesce cephfs --set-id cg_test2 --query
{
    "epoch": 205,
    "set_version": 126,
    "sets": {
        "cg_test2": {
            "version": 126,
            "age_ref": 0.0,
            "state": {
                "name": "QUIESCING",
                "age": 1210.4
            },
            "timeout": 3600.0,
            "expiration": 3600.0,
            "members": {
                "file:/volumes/svg1/sv_non_def_2/b74fcfcb-45f6-4f99-98e6-362df0e2bd1b": {
                    "excluded": false,
                    "state": {
                        "name": "QUIESCING",
                        "age": 1210.4
                    }
                },
                "file:/volumes/_nogroup/sv4/38634197-a855-4634-b6c1-4cc5fadb4051": {
                    "excluded": false,
                    "state": {
                        "name": "QUIESCING",
                        "age": 1210.4
                    }
                },

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create subvolumes across the default and non-default subvolumegroups
2. Add data to all subvolumes
3. Run quiesce without --await on the subvolumes and, in parallel, read the data on the subvolumes through a kernel mount (a rough reproduction sketch is included under Additional info below)

Actual results:
Quiesce state on all subvolumes remains QUIESCING even after waiting for >20 mins.
The read operations started on the subvolumes also remain hung; neither the quiesce nor the reads complete.

Expected results:
Quiesce state should change to QUIESCED after a few minutes.

Additional info:
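A rough shell sketch of the reproduction flow, assuming the volume is named cephfs and the group/subvolume names seen in the query output above; the mount point /mnt/cephfs, file names, and data sizes are illustrative placeholders, and the exact quiesce member syntax may vary by release:

# 1. Create subvolumes in the default and a non-default subvolumegroup
ceph fs subvolumegroup create cephfs svg1
ceph fs subvolume create cephfs sv_non_def_2 --group_name svg1
ceph fs subvolume create cephfs sv4

# Resolve the subvolume paths (includes the generated UUID directory)
SV1=$(ceph fs subvolume getpath cephfs sv_non_def_2 --group_name svg1)
SV2=$(ceph fs subvolume getpath cephfs sv4)

# 2. Add data to each subvolume through a kernel mount at /mnt/cephfs
dd if=/dev/urandom of=/mnt/cephfs${SV1}/data.bin bs=1M count=1024
dd if=/dev/urandom of=/mnt/cephfs${SV2}/data.bin bs=1M count=1024

# 3. Start reads on the subvolumes and, in parallel, quiesce them without --await
cat /mnt/cephfs${SV1}/data.bin > /dev/null &
cat /mnt/cephfs${SV2}/data.bin > /dev/null &
ceph fs quiesce cephfs "$SV1" "$SV2" --set-id cg_test2

# Check the set state (this is the query shown in the description)
ceph fs quiesce cephfs --set-id cg_test2 --query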
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:3925