
Bug 2313740

Summary: [CephFS - Subvolume] Not all subvolumes are listed when Cluster Storage is in almost full state
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: CephFS
Version: 8.0
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: high
Priority: high
Reporter: sumr
Assignee: Kotresh HR <khiremat>
QA Contact: sumr
CC: ceph-eng-bugs, cephqe-warriors, gfarnum, jcollin, khiremat, mamohan, ngangadh, vshankar
Flags: khiremat: needinfo-, khiremat: needinfo-, khiremat: needinfo-, gfarnum: needinfo-
Target Milestone: ---
Target Release: 9.0
Fixed In Version: ceph-20.1.0-20
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2026-01-29 06:52:18 UTC

Description sumr 2024-09-20 10:38:22 UTC
Description of problem:

When the cluster's storage is almost full, listing the subvolumes in a filesystem is expected to return the complete list. This matters in practice: when the cluster is full, a user may want to pick subvolumes from that list in order to free up space.

However, 'ceph fs subvolume ls cephfs' prints only a partial list of subvolumes.
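Every subvolume has a backing directory under `<mount>/volumes/<group>/` (`_nogroup` for the default group), so a truncated listing can be cross-checked against the on-disk names, which is what was later done by hand in the logs below. A minimal sketch; `missing_subvolumes` is a hypothetical helper, not a Ceph API:

```python
import json

def missing_subvolumes(ls_json, on_disk_names):
    """Return names that exist on disk under <mount>/volumes/<group>/
    but are absent from the JSON printed by 'ceph fs subvolume ls'."""
    listed = {entry["name"] for entry in json.loads(ls_json)}
    return sorted(set(on_disk_names) - listed)
```

With the data from this report (the listing returned sv1 and sv2 while `_nogroup` also contained `sv3_clone`), the helper would flag `sv3_clone` as missing.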

Logs:

*****************************************************************

[root@ceph-sumar-tfa-fix-8nhx13-node9 ~]# ceph df
--- RAW STORAGE ---
CLASS     SIZE   AVAIL     USED  RAW USED  %RAW USED
hdd    560 GiB  76 GiB  484 GiB   484 GiB      86.45
TOTAL  560 GiB  76 GiB  484 GiB   484 GiB      86.45
 
--- POOLS ---
POOL                ID   PGS   STORED  OBJECTS     USED   %USED  MAX AVAIL
.mgr                 1     1  598 KiB        2  1.8 MiB  100.00        0 B
cephfs.cephfs.meta   2    16  613 MiB   24.75k  1.8 GiB  100.00        0 B
cephfs.cephfs.data   3  1024  147 GiB    1.15M  446 GiB  100.00        0 B

[root@ceph-sumar-tfa-fix-8nhx13-node9 ~]# ceph fs status
cephfs - 6 clients
======
RANK      STATE                            MDS                          ACTIVITY     DNS    INOS   DIRS   CAPS  
 0        active      cephfs.ceph-sumar-tfa-fix-8nhx13-node4.swgmdk  Reqs:   28 /s   485k   459k  11.7k  30.7k  
 1        active      cephfs.ceph-sumar-tfa-fix-8nhx13-node7.njetga  Reqs:    0 /s   101k  48.1k   471      5   
 2        active      cephfs.ceph-sumar-tfa-fix-8nhx13-node2.jcaybs  Reqs:    0 /s   170    175    130      2   
0-s   standby-replay  cephfs.ceph-sumar-tfa-fix-8nhx13-node5.hyskuk  Evts:   80 /s   203k   163k  8571      0   
1-s   standby-replay  cephfs.ceph-sumar-tfa-fix-8nhx13-node8.cilvdm  Evts:    0 /s   136k  44.5k   464      0   
2-s   standby-replay  cephfs.ceph-sumar-tfa-fix-8nhx13-node6.wjzpkf  Evts:    0 /s     0      3      1      0   
       POOL           TYPE     USED  AVAIL  
cephfs.cephfs.meta  metadata  1733M     0   
cephfs.cephfs.data    data     473G     0   
                 STANDBY MDS                   
cephfs.ceph-sumar-tfa-fix-8nhx13-node3.qxvevu  
MDS version: ceph version 19.1.1-62.el9cp (15005be6f81af48462a2de37e490f5d6a6d2e860) squid (rc)


[root@ceph-sumar-tfa-fix-8nhx13-node9 ~]# ceph fs subvolume ls cephfs
[
    {
        "name": "sv1"
    },
    {
        "name": "sv2"
    }
]
[root@ceph-sumar-tfa-fix-8nhx13-node9 ~]# ceph fs subvolume ls cephfs svg1
[
    {
        "name": "sv3"
    }
]
[root@ceph-sumar-tfa-fix-8nhx13-node9 ~]# 

[root@ceph-sumar-tfa-fix-8nhx13-node10 mnt]# cd cephfs/
[root@ceph-sumar-tfa-fix-8nhx13-node10 cephfs]# ls
volumes
[root@ceph-sumar-tfa-fix-8nhx13-node10 cephfs]# cd volumes/
[root@ceph-sumar-tfa-fix-8nhx13-node10 volumes]# ls
_:sv1.meta  _:sv2.meta  _deleting  _index  _nogroup  _svg1:sv4.meta  svg1
[root@ceph-sumar-tfa-fix-8nhx13-node10 volumes]# cd _deleting/
[root@ceph-sumar-tfa-fix-8nhx13-node10 _deleting]# ls
[root@ceph-sumar-tfa-fix-8nhx13-node10 _deleting]# cd ..
[root@ceph-sumar-tfa-fix-8nhx13-node10 volumes]# cd _nogroup/
[root@ceph-sumar-tfa-fix-8nhx13-node10 _nogroup]# ls
sv1  sv2  sv3_clone
[root@ceph-sumar-tfa-fix-8nhx13-node10 _nogroup]# cd sv3_clone
[root@ceph-sumar-tfa-fix-8nhx13-node10 sv3_clone]# ls
fd6926f6-68a8-469d-9b94-375238da6df9
[root@ceph-sumar-tfa-fix-8nhx13-node10 sv3_clone]# cd fd6926f6-68a8-469d-9b94-375238da6df9/
[root@ceph-sumar-tfa-fix-8nhx13-node10 fd6926f6-68a8-469d-9b94-375238da6df9]# ls
smallfile_dir1  smallfile_dir2
[root@ceph-sumar-tfa-fix-8nhx13-node10 fd6926f6-68a8-469d-9b94-375238da6df9]# cd ..

[root@ceph-sumar-tfa-fix-8nhx13-node10 sv3_clone]# cd ..
[root@ceph-sumar-tfa-fix-8nhx13-node10 _nogroup]# cd ../svg1
[root@ceph-sumar-tfa-fix-8nhx13-node10 svg1]# ls
sv1_clone  sv3  sv4

[root@ceph-sumar-tfa-fix-8nhx13-node9 ~]# ceph fs subvolume ls cephfs
[
    {
        "name": "sv1"
    },
    {
        "name": "sv2"
    },
    {
        "name": "sv3_clone"
    }
]
[root@ceph-sumar-tfa-fix-8nhx13-node9 ~]# ceph fs subvolume ls cephfs svg1
[
    {
        "name": "sv3"
    }
]

[root@ceph-sumar-tfa-fix-8nhx13-node9 ~]# ceph fs subvolume ls cephfs svg1
[
    {
        "name": "sv3"
    },
    {
        "name": "sv4"
    },
    {
        "name": "sv1_clone"
    }
]
*******************************************************

Version-Release number of selected component (if applicable): 19.1.1-62.el9cp 


How reproducible:


Steps to Reproduce:
1. Create a few subvolumes across the default and non-default groups.
2. Add data until the cluster storage is nearly full.
3. List the subvolumes while the cluster is in this near-full state.
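The steps above can be sketched as a shell script. It assumes a filesystem named `cephfs` and a group `svg1` (the names used in this report); by default it only prints the commands, and RUN=1 executes them against a real cluster:

```shell
# Dry-run by default: print each ceph command instead of executing it.
run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "$@"; fi; }

# 1. Create a few subvolumes in the default and a non-default group
run ceph fs subvolumegroup create cephfs svg1
for sv in sv1 sv2; do run ceph fs subvolume create cephfs "$sv"; done
for sv in sv3 sv4; do run ceph fs subvolume create cephfs "$sv" svg1; done

# 2. Fill the data pool close to capacity (e.g. write data on a mounted
#    client until 'ceph df' shows high %RAW USED), then:
# 3. List subvolumes while the cluster is nearly full
run ceph fs subvolume ls cephfs
run ceph fs subvolume ls cephfs svg1
```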

Actual results: Not all subvolumes were listed


Expected results: All subvolumes should be listed


Additional info: After multiple retries, and after traversing the filesystem mount path to look for the missing subvolumes, the missing entries eventually appeared in the 'ceph fs subvolume ls cephfs' output.
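Since the missing entries appeared only after retries, a test for this bug could poll the listing until every expected name shows up. A hedged sketch: `fetch` stands in for whatever runs 'ceph fs subvolume ls' and returns its JSON text, so the helper can be exercised without a live cluster:

```python
import json
import time

def wait_for_subvolumes(fetch, expected, retries=5, delay=1.0):
    """Poll a subvolume listing until every name in `expected` appears.

    `fetch` is any callable returning the JSON text that
    'ceph fs subvolume ls <fs> [group]' would print.
    """
    names = set()
    for _ in range(retries):
        names = {entry["name"] for entry in json.loads(fetch())}
        if expected <= names:
            return names
        time.sleep(delay)
    raise TimeoutError("still missing: %s" % sorted(expected - names))
```

For example, a fake `fetch` that first returns the partial listing seen in this report and then the full one makes the helper return once `sv3_clone` appears.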

Comment 34 errata-xmlrpc 2026-01-29 06:52:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 9.0 Security and Enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2026:1536