Bug 1857143

Summary: mgr/volumes: fs subvolume clones stuck in progress when libcephfs hits certain errors
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Ram Raja <rraja>
Component: CephFS
Assignee: Ram Raja <rraja>
Status: CLOSED ERRATA
QA Contact: subhash <vpoliset>
Severity: low
Priority: low
Version: 4.1
CC: ceph-eng-bugs, pdonnell, sweil, tserlin, vpoliset
Target Milestone: z2
Target Release: 4.1
Hardware: All
OS: All
Fixed In Version: ceph-14.2.8-94.el8cp, ceph-14.2.8-94.el7cp
Clone Of: 1857134
Last Closed: 2020-09-30 17:26:28 UTC
Bug Depends On: 1857134

Description Ram Raja 2020-07-15 08:58:18 UTC
+++ This bug was initially created as a clone of Bug #1857134 +++

Description of problem:
During an `fs subvolume clone` operation, libcephfs hit a "Disk quota exceeded" error, which caused the subvolume clone to remain stuck in the 'in-progress' state instead of entering the 'failed' state.


Version-Release number of selected component (if applicable):


How reproducible: Always


Steps to Reproduce:
1. Create a subvolume of size 1MiB and fully populate it.
# ceph fs subvolume create a subvol00 1048576
# ceph fs subvolume getpath a subvol00
/volumes/_nogroup/subvol00/9a2992e6-9440-4510-bca9-0b4a8f50f80d
# ceph-fuse /mnt/ceph-fuse/
# cd /mnt/ceph-fuse/
# cd volumes/_nogroup/subvol00/9a2992e6-9440-4510-bca9-0b4a8f50f80d/
# dd if=/dev/zero of=output.file bs=1048576 count=1
# cd ~

2. Create a snapshot.
# ceph fs subvolume snapshot create a subvol00 snap00

3. Resize the subvolume to 512KiB. (The new quota can optionally be verified as shown after these steps.)
# ceph fs subvolume resize a subvol00 524288

4. Create a clone of the snapshot created in step 2.
# ceph fs subvolume snapshot protect a subvol00 snap00
# ceph fs subvolume snapshot clone a subvol00 snap00 clone00

5. Check the clone state.
# ceph fs clone status a clone00
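
At any point after step 3, the resized quota can optionally be confirmed from the ceph-fuse mount created in step 1 by reading the CephFS quota attribute on the subvolume path (this assumes the quota is applied on the path returned by `getpath`, and reuses the example UUID directory from step 1; a value of 524288 is expected after the resize).
# getfattr -n ceph.quota.max_bytes /mnt/ceph-fuse/volumes/_nogroup/subvol00/9a2992e6-9440-4510-bca9-0b4a8f50f80d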

Actual results:
The subvolume clone gets stuck in the 'in-progress' state.
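The status output in the stuck case has the same shape as the expected output below, except that the state never leaves 'in-progress'; an illustrative example (not captured from this run):
# ceph fs clone status a clone00
{
  "status": {
    "state": "in-progress",
    "source": {
      "volume": "a",
      "subvolume": "subvol00",
      "snapshot": "snap00"
    }
  }
}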


Expected results:
The subvolume clone enters the 'failed' state, and you should be able to remove it.
# ceph fs clone status a clone00
{
  "status": {
    "state": "failed",
    "source": {
      "volume": "a",
      "subvolume": "subvol00",
      "snapshot": "snap00"
    }
  }
}
# ceph fs subvolume rm a clone00 --force
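
As an optional cleanup once the failed clone has been removed (a suggested follow-up, not part of the original report), the source snapshot can be unprotected and deleted:
# ceph fs subvolume snapshot unprotect a subvol00 snap00
# ceph fs subvolume snapshot rm a subvol00 snap00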

--- Additional comment from Ram Raja on 2020-07-15 08:50:20 UTC ---

The fix for the Ceph tracker ticket is merged in master.

Comment 10 errata-xmlrpc 2020-09-30 17:26:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144