Bug 1857134

Summary: mgr/volumes: fs subvolume clones stuck in progress when libcephfs hits certain errors
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: CephFS
Version: 5.0
Target Release: 5.0
Hardware: All
OS: All
Status: CLOSED UPSTREAM
Severity: low
Priority: low
Reporter: Ram Raja <rraja>
Assignee: Ram Raja <rraja>
QA Contact: subhash <vpoliset>
CC: ceph-eng-bugs, pdonnell, sweil
Type: Bug
Bug Blocks: 1857143
Last Closed: 2020-07-31 21:54:53 UTC

Description Ram Raja 2020-07-15 08:34:24 UTC
Description of problem:
During `fs subvolume clone`, libcephfs hit a "Disk quota exceeded" (EDQUOT) error, which left the subvolume clone stuck in the 'in-progress' state instead of moving it to the 'failed' state. The error occurs because the clone copies the snapshot's 1 MiB of data into a subvolume whose quota has been shrunk to 512 KiB (see steps below).


Version-Release number of selected component (if applicable):


How reproducible: Always


Steps to Reproduce:
1. Create a subvolume of size 1 MiB and fully populate it.
# ceph fs subvolume create a subvol00 1048576
# ceph fs subvolume getpath a subvol00
/volumes/_nogroup/subvol00/9a2992e6-9440-4510-bca9-0b4a8f50f80d
# ceph-fuse /mnt/ceph-fuse/
# cd /mnt/ceph-fuse/
# cd volumes/_nogroup/subvol00/9a2992e6-9440-4510-bca9-0b4a8f50f80d/
# dd if=/dev/zero of=output.file bs=1048576 count=1
# cd ~
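
The quota can be checked from the client mount via the documented ceph.quota.max_bytes vxattr, which should report 1048576 here. This assumes the quota is set on the path returned by `getpath` above; depending on the subvolume layout it may instead sit on a parent directory.
# getfattr -n ceph.quota.max_bytes /mnt/ceph-fuse/volumes/_nogroup/subvol00/9a2992e6-9440-4510-bca9-0b4a8f50f80d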

2. Create a snapshot.
# ceph fs subvolume snapshot create a subvol00 snap00
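
The snapshot can be confirmed with the snapshot listing command, which should include snap00:
# ceph fs subvolume snapshot ls a subvol00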

3. Resize the subvolume to 512KiB.
# ceph fs subvolume resize a subvol00 524288
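
If the build provides `ceph fs subvolume info`, its output should now report a "bytes_quota" of 524288 for subvol00, i.e. less than the 1 MiB of data captured in the snapshot:
# ceph fs subvolume info a subvol00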

4. Protect the snapshot and create a clone from it.
# ceph fs subvolume snapshot protect a subvol00 snap00
# ceph fs subvolume snapshot clone a subvol00 snap00 clone00

5. Check the clone state.
# ceph fs clone status a clone00

Actual results:
The subvolume clone gets stuck in the 'in-progress' state.
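
The status output keeps reporting the clone as in progress, along these lines (same format as the expected output below, but the state never advances):
# ceph fs clone status a clone00
{
  "status": {
    "state": "in-progress",
    "source": {
      "volume": "a",
      "subvolume": "subvol00",
      "snapshot": "snap00"
    }
  }
}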


Expected results:
The subvolume clone should enter the 'failed' state, and it should then be possible to remove it.
# ceph fs clone status a clone00
{
  "status": {
    "state": "failed",
    "source": {
      "volume": "a",
      "subvolume": "subvol00",
      "snapshot": "snap00"
    }
  }
}
# ceph fs subvolume rm a clone00 --force
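
Once the failed clone is removed, the source snapshot can be cleaned up as usual; assuming the protect/unprotect workflow used in step 4, that would be:
# ceph fs subvolume snapshot unprotect a subvol00 snap00
# ceph fs subvolume snapshot rm a subvol00 snap00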

Comment 1 Ram Raja 2020-07-15 08:50:20 UTC
The fix for the Ceph tracker ticket has been merged into master.