
Bug 2379716

Summary: Clone Operations consuming complete disk space
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Amarnath <amk>
Component: CephFS
Assignee: Rishabh Dave <ridave>
Status: CLOSED ERRATA
QA Contact: Amarnath <amk>
Severity: high
Docs Contact: Rivka Pollack <rpollack>
Priority: unspecified
Version: 8.1
CC: ceph-eng-bugs, cephqe-warriors, hyelloji, jcaratza, mamohan, ngangadh, ridave, rpollack, tserlin, vereddy
Target Milestone: ---
Keywords: Regression
Target Release: 8.1z2
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-19.2.1-235.el9cp
Doc Type: No Doc Update
Last Closed: 2025-08-18 14:01:14 UTC
Type: Bug

Description Amarnath 2025-07-12 15:53:48 UTC
Description of problem:

Clone operations consume the complete disk space.

Steps followed:
Created a subvolume of size 5 GB:
ceph fs subvolume create cephfs subvol_2 --size 5368706371 --group_name subvolgroup_1
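As a quick cross-check (standard command; the field names in the comment below come from the stock JSON output, values not shown here), the quota on the new subvolume can be confirmed with:

# print subvolume metadata; "bytes_quota" should report the 5368706371-byte quota
# and "bytes_used" the space consumed so far
ceph fs subvolume info cephfs subvol_2 --group_name subvolgroup_1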

Filled approximately 3 GB of data:
python3 /home/cephuser/smallfile/smallfile_cli.py --operation create --threads 10 --file-size 4000 --files 100 --files-per-dir 10 --dirs-per-dir 2 --top /mnt/cephfs_fuseb6dhvrprpi_1/
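As a sanity check on the volume of data written (assuming smallfile's --file-size is in KB and --files counts files per thread, per its documentation): 10 threads × 100 files × 4000 KB ≈ 3.7 GiB, which lines up with the 3.8 GiB of objects in the cluster status below. The mount point above suggests a ceph-fuse mount of the subvolume; a typical sequence (the placeholder path is illustrative, not taken from this cluster) would be:

# resolve the subvolume's path within the filesystem
ceph fs subvolume getpath cephfs subvol_2 --group_name subvolgroup_1
# mount that path with ceph-fuse at the directory used by smallfile
ceph-fuse --client_fs cephfs -r <path-from-getpath> /mnt/cephfs_fuseb6dhvrprpi_1/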

After filling the data, this is the status of the Ceph cluster:
[root@ceph-amk-sanity-03n5th-node8 ~]# ceph -s
  cluster:
    id:     3b768d82-5db7-11f0-8595-fa163e126104
    health: HEALTH_WARN
            1 slow ops, oldest one blocked for 128776 sec, mon.ceph-amk-sanity-03n5th-node3 has slow ops
 
  services:
    mon: 3 daemons, quorum ceph-amk-sanity-03n5th-node1-installer,ceph-amk-sanity-03n5th-node3,ceph-amk-sanity-03n5th-node2 (age 35h)
    mgr: ceph-amk-sanity-03n5th-node1-installer.mikzcu(active, since 35h), standbys: ceph-amk-sanity-03n5th-node2.vrsmee
    mds: 3/3 daemons up, 2 standby
    osd: 16 osds: 16 up (since 35h), 16 in (since 35h)
 
  data:
    volumes: 2/2 healthy
    pools:   5 pools, 609 pgs
    objects: 1.07k objects, 3.8 GiB
    usage:   13 GiB used, 227 GiB / 240 GiB avail
    pgs:     609 active+clean



Usage shows 13 GiB used and 227 GiB free.
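When triaging space consumption like this, a per-pool breakdown helps show where the usage accrues (standard command; output omitted here):

# raw usage plus per-pool stats; clone writes land in the CephFS data pool
ceph df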

Triggered two clone operations:
[root@ceph-amk-sanity-03n5th-node8 ~]# ceph fs subvolume snapshot create cephfs subvol_2 snap_1 --group_name subvolgroup_1
[root@ceph-amk-sanity-03n5th-node8 ~]# ceph fs subvolume snapshot ls cephfs subvol_2 --group_name subvolgroup_1
[
    {
        "name": "snap_1"
    }
]
[root@ceph-amk-sanity-03n5th-node8 ~]# ceph fs subvolume snapshot clone cephfs subvol_2 snap_1 clone_1 --group_name subvolgroup_1
[root@ceph-amk-sanity-03n5th-node8 ~]# ceph fs subvolume snapshot clone cephfs subvol_2 snap_1 clone_2 --group_name subvolgroup_1

The third clone operation failed with:
[root@ceph-amk-sanity-03n5th-node8 ~]# ceph fs subvolume snapshot clone cephfs subvol_2 snap_1 clone_3 --group_name subvolgroup_1
Error EINVAL: exception in subvolume metadata: No space left on device
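The state of the earlier clones can be inspected with the standard clone status command (clone_1 and clone_2 are the clones created above); a clone still copying data would show as in-progress:

# check whether the first two clones completed or are still copying
ceph fs clone status cephfs clone_1 --group_name subvolgroup_1
ceph fs clone status cephfs clone_2 --group_name subvolgroup_1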

The same test passes on 19.2.1-222.el9cp.

It fails on ceph version 19.2.1-234.el9cp.


Version-Release number of selected component (if applicable):
[root@ceph-amk-sanity-03n5th-node8 ~]# ceph versions
{
    "mon": {
        "ceph version 19.2.1-233.el9cp (5ec54c4ef554996d493e37c98546e510b51acd85) squid (stable)": 3
    },
    "mgr": {
        "ceph version 19.2.1-233.el9cp (5ec54c4ef554996d493e37c98546e510b51acd85) squid (stable)": 2
    },
    "osd": {
        "ceph version 19.2.1-233.el9cp (5ec54c4ef554996d493e37c98546e510b51acd85) squid (stable)": 16
    },
    "mds": {
        "ceph version 19.2.1-233.el9cp (5ec54c4ef554996d493e37c98546e510b51acd85) squid (stable)": 5
    },
    "overall": {
        "ceph version 19.2.1-233.el9cp (5ec54c4ef554996d493e37c98546e510b51acd85) squid (stable)": 26
    }
}


How reproducible:


Steps to Reproduce:
1. Create a 5 GB subvolume: ceph fs subvolume create cephfs subvol_2 --size 5368706371 --group_name subvolgroup_1
2. Fill it with approximately 3 GB of data (e.g., with smallfile as above).
3. Snapshot the subvolume and create clones from the snapshot; the first two clones succeed, the third fails.

Actual results:
The third clone operation fails with "Error EINVAL: exception in subvolume metadata: No space left on device", even though ceph -s reports 227 GiB available.

Expected results:
All clone operations succeed; cloning a 5 GB subvolume should not consume the complete disk space.

Additional info:

Comment 1 Storage PM bot 2025-07-12 15:53:59 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 14 errata-xmlrpc 2025-08-18 14:01:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 8.1 security and bug fix updates), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2025:14015