Bug 1857143 - mgr/volumes: fs subvolume clones stuck in progress when libcephfs hits certain errors
Summary: mgr/volumes: fs subvolume clones stuck in progress when libcephfs hits certain errors
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 4.1
Hardware: All
OS: All
Priority: low
Severity: low
Target Milestone: z2
Target Release: 4.1
Assignee: Ram Raja
QA Contact: subhash
URL:
Whiteboard:
Depends On: 1857134
Blocks:
 
Reported: 2020-07-15 08:58 UTC by Ram Raja
Modified: 2020-09-30 17:26 UTC
CC List: 5 users

Fixed In Version: ceph-14.2.8-94.el8cp, ceph-14.2.8-94.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1857134
Environment:
Last Closed: 2020-09-30 17:26:28 UTC
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 46464 0 None None None 2020-07-22 23:35:12 UTC
Red Hat Product Errata RHBA-2020:4144 0 None None None 2020-09-30 17:26:51 UTC

Description Ram Raja 2020-07-15 08:58:18 UTC
+++ This bug was initially created as a clone of Bug #1857134 +++

Description of problem:
During an `fs subvolume snapshot clone` operation, libcephfs hit a "Disk quota exceeded" (EDQUOT) error, which caused the subvolume clone to remain stuck in the 'in-progress' state instead of entering the 'failed' state.
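For illustration only: the clone copy is driven from the Python mgr/volumes module, and the expected behaviour is that any error raised by the libcephfs-backed copy moves the clone to 'failed' rather than leaving it 'in-progress'. A minimal sketch of that handling (hypothetical names, not the actual ceph-mgr code):

# Illustrative sketch only; run_clone, copy_fn and set_state are hypothetical
# stand-ins for the mgr/volumes cloner, not the real implementation.
import errno

def run_clone(copy_fn, set_state):
    """copy_fn performs the bulk data copy (e.g. via libcephfs) and raises
    OSError on failure; set_state persists the clone state."""
    set_state("in-progress")
    try:
        copy_fn()
    except OSError as e:
        # e.g. errno.EDQUOT ("Disk quota exceeded"): the clone must be
        # marked 'failed', not left 'in-progress' forever.
        set_state("failed")
        return -e.errno
    set_state("complete")
    return 0

if __name__ == "__main__":
    states = []
    def failing_copy():
        raise OSError(errno.EDQUOT, "Disk quota exceeded")
    rc = run_clone(failing_copy, states.append)
    print(rc, states)   # -122 ['in-progress', 'failed'] on Linux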


Version-Release number of selected component (if applicable):


How reproducible: Always


Steps to Reproduce:
1. Create a subvolume of size 1MiB and fully populate it.
# ceph fs subvolume create a subvol00 1048576
# ceph fs subvolume getpath a subvol00
/volumes/_nogroup/subvol00/9a2992e6-9440-4510-bca9-0b4a8f50f80d
# ceph-fuse /mnt/ceph-fuse/
# cd /mnt/ceph-fuse/
# cd volumes/_nogroup/subvol00/9a2992e6-9440-4510-bca9-0b4a8f50f80d/
# dd if=/dev/zero of=output.file bs=1048576 count=1
# cd ~

2. Create a snapshot.
# ceph fs subvolume snapshot create a subvol00 snap00

3. Resize the subvolume to 512KiB.
# ceph fs subvolume resize a subvol00 524288

4. Create a snapshot clone of snapshot created.
# ceph fs subvolume snapshot protect a subvol00 snap00
# ceph fs subvolume snapshot clone a subvol00 snap00 clone00

5. Check the clone state.
# ceph fs clone status a clone00
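To make step 5 easy to repeat, the clone state can be polled until it leaves the pending/in-progress states or a timeout expires. A hedged helper sketch (assumes the standard `--format json` option of the ceph CLI; volume and clone names match the steps above):

# Hedged helper sketch: poll `ceph fs clone status` and report the clone's
# final state; a clone that never leaves 'in-progress' reproduces this bug.
import json
import subprocess
import time

def clone_state(volume, clone):
    out = subprocess.check_output(
        ["ceph", "fs", "clone", "status", volume, clone, "--format", "json"])
    return json.loads(out)["status"]["state"]

def wait_for_terminal_state(volume, clone, timeout=300, interval=5):
    deadline = time.time() + timeout
    state = clone_state(volume, clone)
    while state in ("pending", "in-progress") and time.time() < deadline:
        time.sleep(interval)
        state = clone_state(volume, clone)
    return state   # 'failed' or 'complete'; still 'in-progress' if stuck

if __name__ == "__main__":
    print(wait_for_terminal_state("a", "clone00"))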

Actual results:
The subvolume clone gets stuck in the 'in-progress' state.


Expected results:
The subvolume clone enters the 'failed' state, and you should be able to remove it:
# ceph fs clone status a clone00
{
  "status": {
    "state": "failed",
    "source": {
      "volume": "a",
      "subvolume": "subvol00",
      "snapshot": "snap00"
    }
  }
}
# ceph fs subvolume rm a clone00 --force
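For context on why the copy fails: the snapshot holds 1 MiB of data, while after the resize in step 3 the subvolume quota is only 512 KiB (which the clone target appears to inherit), so the bulk copy runs into EDQUOT part-way through. A small hedged check of that quota, assuming the ceph-fuse mount from step 1 is still in place and using the `getpath` output:

# Hedged sketch: read the CephFS quota xattr on the subvolume's data
# directory (via the ceph-fuse mount) to confirm the 512 KiB limit.
import os
import subprocess

MOUNT = "/mnt/ceph-fuse"

# Path below the volume root, e.g. /volumes/_nogroup/subvol00/<uuid>
subvol_path = subprocess.check_output(
    ["ceph", "fs", "subvolume", "getpath", "a", "subvol00"]).decode().strip()

quota = int(os.getxattr(MOUNT + subvol_path, "ceph.quota.max_bytes").decode())
print(f"quota: {quota} bytes ({quota // 1024} KiB)")  # 524288 after the resize in step 3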

--- Additional comment from Ram Raja on 2020-07-15 08:50:20 UTC ---

The fix for the Ceph tracker ticket has been merged into master.

Comment 10 errata-xmlrpc 2020-09-30 17:26:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4144

