+++ This bug was initially created as a clone of Bug #1857134 +++

Description of problem:

During `fs subvolume clone`, libcephfs hit a "Disk quota exceeded" error, which caused the subvolume clone to get stuck in the 'in-progress' state instead of entering the 'failed' state.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create a subvolume of size 1 MiB and fully populate it.
   # ceph fs subvolume create a subvol00 1048576
   # ceph fs subvolume getpath a subvol00
   /volumes/_nogroup/subvol00/9a2992e6-9440-4510-bca9-0b4a8f50f80d
   # ceph-fuse /mnt/ceph-fuse/
   # cd /mnt/ceph-fuse/
   # cd volumes/_nogroup/subvol00/9a2992e6-9440-4510-bca9-0b4a8f50f80d/
   # dd if=/dev/zero of=output.file bs=1048576 count=1
   # cd ~
2. Create a snapshot of the subvolume.
   # ceph fs subvolume snapshot create a subvol00 snap00
3. Resize the subvolume to 512 KiB.
   # ceph fs subvolume resize a subvol00 524288
4. Create a clone of the snapshot.
   # ceph fs subvolume snapshot protect a subvol00 snap00
   # ceph fs subvolume snapshot clone a subvol00 snap00 clone00
5. Check the clone state.
   # ceph fs clone status a clone00
(A consolidated reproduction script is sketched at the end of this report.)

Actual results:
The subvolume clone gets stuck in the 'in-progress' state.

Expected results:
The subvolume clone is in the 'failed' state, and you should be able to remove it.
# ceph fs clone status a clone00
{
  "status": {
    "state": "failed",
    "source": {
      "volume": "a",
      "subvolume": "subvol00",
      "snapshot": "snap00"
    }
  }
}
# ceph fs subvolume rm a clone00 --force

--- Additional comment from Ram Raja on 2020-07-15 08:50:20 UTC ---

The fix for the Ceph tracker ticket is merged in master.
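For convenience, the reproduction steps above can be run end to end as a single script. This is a minimal sketch, assuming a CephFS volume named 'a', admin credentials on the test host, and an empty mount point at /mnt/ceph-fuse; the MNT and SUBVOL_PATH variables are only for illustration, so adjust names and paths for your environment.

#!/bin/bash
# Reproduction sketch for the stuck-clone bug (assumes volume 'a' and an
# empty mount point at /mnt/ceph-fuse -- adjust for your environment).
set -x

MNT=/mnt/ceph-fuse

# 1. Create a 1 MiB subvolume and fill it to its quota.
ceph fs subvolume create a subvol00 1048576
SUBVOL_PATH=$(ceph fs subvolume getpath a subvol00)
ceph-fuse "$MNT"
dd if=/dev/zero of="${MNT}${SUBVOL_PATH}/output.file" bs=1048576 count=1

# 2. Snapshot the fully populated subvolume.
ceph fs subvolume snapshot create a subvol00 snap00

# 3. Shrink the subvolume quota to 512 KiB, below the snapshot's data size.
ceph fs subvolume resize a subvol00 524288

# 4. Clone the snapshot; copying the 1 MiB of snapshot data hits the
#    "Disk quota exceeded" error described above.
ceph fs subvolume snapshot protect a subvol00 snap00
ceph fs subvolume snapshot clone a subvol00 snap00 clone00

# 5. Check the clone state after giving the asynchronous clone a moment:
#    with the bug it stays 'in-progress'; with the fix it should report
#    'failed' and be removable with 'ceph fs subvolume rm a clone00 --force'.
sleep 10
ceph fs clone status a clone00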
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144