Description of problem:
During `fs subvolume clone`, libcephfs hit a "Disk quota exceeded" error that caused the subvolume clone to get stuck in the in-progress state instead of entering the failed state.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Create a subvolume of size 1MiB and fully populate it.
   # ceph fs subvolume create a subvol00 1048576
   # ceph fs subvolume getpath a subvol00
   /volumes/_nogroup/subvol00/9a2992e6-9440-4510-bca9-0b4a8f50f80d
   # ceph-fuse /mnt/ceph-fuse/
   # cd /mnt/ceph-fuse/
   # cd volumes/_nogroup/subvol00/9a2992e6-9440-4510-bca9-0b4a8f50f80d/
   # dd if=/dev/zero of=output.file bs=1048576 count=1
   # cd ~
2. Create a snapshot.
   # ceph fs subvolume snapshot create a subvol00 snap00
3. Resize the subvolume to 512KiB.
   # ceph fs subvolume resize a subvol00 524288
4. Create a clone of the snapshot created above.
   # ceph fs subvolume snapshot protect a subvol00 snap00
   # ceph fs subvolume snapshot clone a subvol00 snap00 clone00
5. Check the clone state (see the polling sketch after the expected results).
   # ceph fs clone status a clone00

Actual results:
The subvolume clone gets stuck in the 'in-progress' state.

Expected results:
The subvolume clone enters the 'failed' state, and you should be able to remove it.

# ceph fs clone status a clone00
{
  "status": {
    "state": "failed",
    "source": {
      "volume": "a",
      "subvolume": "subvol00",
      "snapshot": "snap00"
    }
  }
}
# ceph fs subvolume rm a clone00 --force
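For step 5, a small polling loop makes the stuck state easier to observe. This is a minimal sketch, not part of the original report: it assumes `ceph fs clone status` prints the JSON shown under expected results, that the in-progress state string is "in-progress" and the queued state string is "pending", and that python3 is available for JSON parsing; the volume/clone names match the steps above, and the attempt count and sleep interval are arbitrary.

#!/usr/bin/env bash
# Poll the clone status until it leaves the pending/in-progress states,
# or give up after 30 attempts. VOL and CLONE match the steps above.
VOL=a
CLONE=clone00

for attempt in $(seq 1 30); do
    state=$(ceph fs clone status "$VOL" "$CLONE" |
        python3 -c 'import json,sys; print(json.load(sys.stdin)["status"]["state"])')
    echo "attempt ${attempt}: clone state is '${state}'"
    # Stop polling once the clone reaches a terminal state (e.g. complete or failed).
    if [ "$state" != "in-progress" ] && [ "$state" != "pending" ]; then
        break
    fi
    sleep 10
done

With the bug present, the loop keeps reporting 'in-progress' until it gives up; with the expected behaviour, it exits as soon as the clone reaches 'failed'.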
The fix for the Ceph tracker ticket is merged in master.