Bug 2240583 - pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume info output
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Mgr Plugins
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 7.1
Assignee: Kotresh HR
QA Contact: Amarnath
Docs Contact: ceph-docs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 2240585 2240586
 
Reported: 2023-09-25 11:13 UTC by Kotresh HR
Modified: 2024-06-13 14:21 UTC
CC List: 5 users

Fixed In Version: ceph-18.2.1-2.el9cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2240585 2240586
Environment:
Last Closed: 2024-06-13 14:21:43 UTC
Embargoed:


Attachments:


Links:
System                    ID              Last Updated
Ceph Project Bug Tracker  62278           2023-09-25 11:13:27 UTC
Red Hat Issue Tracker     RHCEPH-7537     2023-09-25 11:13:56 UTC
Red Hat Product Errata    RHSA-2024:3925  2024-06-13 14:21:53 UTC

Description Kotresh HR 2023-09-25 11:13:28 UTC
Description of problem:
pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume info output

https://tracker.ceph.com/issues/62278
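For context, removing a subvolume only parks it in the volume's trash directory; asynchronous purge threads delete it later, and pending_subvolume_deletions is meant to report how many trash entries are still waiting. A rough way to cross-check the counter from a client mount is sketched below (the mount point and trash path are assumptions, not the mgr/volumes internals):

    # count trash entries still awaiting purge (paths assumed for a default "cephfs" mount)
    ls /mnt/cephfs/volumes/_deleting 2>/dev/null | wc -l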


Comment 1 RHEL Program Management 2023-09-25 11:13:36 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 5 Amarnath 2024-05-06 20:09:59 UTC
Hi Kotresh,

I created 100 subvolumes and deleted them, but I don't see any change in the pending_subvolume_deletions count:

[root@ceph-amk-mirror-e90msi-node8 nfs_9GNKZ]# for i in {1..100}; do ceph fs subvolume create cephfs "subvol_${i}"; done
    
[root@ceph-amk-mirror-e90msi-node8 nfs_9GNKZ]# for i in {1..100}; do ceph fs subvolume rm cephfs "subvol_${i}"; done
[root@ceph-amk-mirror-e90msi-node8 nfs_9GNKZ]# ceph fs subvolume ls cephfs
[]
[root@ceph-amk-mirror-e90msi-node8 nfs_9GNKZ]# ceph fs volume info cephfs
{
    "mon_addrs": [
        "10.0.211.109:6789",
        "10.0.208.154:6789",
        "10.0.210.104:6789"
    ],
    "pending_subvolume_deletions": 0,
    "pools": {
        "data": [
            {
                "avail": 55874289664,
                "name": "cephfs.cephfs.data",
                "used": 2579496960
            }
        ],
        "metadata": [
            {
                "avail": 55874289664,
                "name": "cephfs.cephfs.meta",
                "used": 611917824
            }
        ]
    },
    "used_size": 0
}
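This may just be timing: empty subvolumes appear to be purged from the trash almost immediately after rm, so the counter can already be back at zero by the time ceph fs volume info runs. One way to catch a transient non-zero value while rerunning the loop (assuming jq is available on the node):

    # poll the counter every second while the subvolumes are being removed
    watch -n 1 'ceph fs volume info cephfs | jq .pending_subvolume_deletions'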

Regards,
Amarnath

Comment 6 Scott Ostapovicz 2024-05-13 20:00:15 UTC
Is anyone actively working on this BZ?  I see that the upstream tracker has been closed.  We have approximately 1 week until the final RC build for 7.1.

Comment 8 Amarnath 2024-05-16 13:32:26 UTC
Hi All,

After adding data to the created subvolumes, we can see pending_subvolume_deletions getting updated:


[root@ceph-amk-clients-71dnpt-node9 ~]#  ceph fs volume info cephfs
{
    "mon_addrs": [
        "10.0.210.19:6789",
        "10.0.208.103:6789",
        "10.0.210.115:6789"
    ],
    "pending_subvolume_deletions": 0,
    "pools": {
        "data": [
            {
                "avail": 53018558464,
                "name": "cephfs.cephfs.data",
                "used": 467435520
            }
        ],
        "metadata": [
            {
                "avail": 53018558464,
                "name": "cephfs.cephfs.meta",
                "used": 722640822
            }
        ]
    },
    "used_size": 155189794
}
[root@ceph-amk-clients-71dnpt-node9 ~]#  ceph fs subvolume rm cephfs "subvol_1"
[root@ceph-amk-clients-71dnpt-node9 ~]#  ceph fs volume info cephfs
{
    "mon_addrs": [
        "10.0.210.19:6789",
        "10.0.208.103:6789",
        "10.0.210.115:6789"
    ],
    "pending_subvolume_deletions": 1,
    "pools": {
        "data": [
            {
                "avail": 53479243776,
                "name": "cephfs.cephfs.data",
                "used": 491692032
            }
        ],
        "metadata": [
            {
                "avail": 53479243776,
                "name": "cephfs.cephfs.meta",
                "used": 739921864
            }
        ]
    },
    "used_size": 155767459
}
[root@ceph-amk-clients-71dnpt-node9 ~]#  ceph fs volume info cephfs
{
    "mon_addrs": [
        "10.0.210.19:6789",
        "10.0.208.103:6789",
        "10.0.210.115:6789"
    ],
    "pending_subvolume_deletions": 1,
    "pools": {
        "data": [
            {
                "avail": 53372796928,
                "name": "cephfs.cephfs.data",
                "used": 538193920
            }
        ],
        "metadata": [
            {
                "avail": 53372796928,
                "name": "cephfs.cephfs.meta",
                "used": 773410760
            }
        ]
    },
    "used_size": 155812515
}
[root@ceph-amk-clients-71dnpt-node9 ~]#  ceph fs subvolume rm cephfs "subvol_2"
[root@ceph-amk-clients-71dnpt-node9 ~]#  ceph fs volume info cephfs
{
    "mon_addrs": [
        "10.0.210.19:6789",
        "10.0.208.103:6789",
        "10.0.210.115:6789"
    ],
    "pending_subvolume_deletions": 2,
    "pools": {
        "data": [
            {
                "avail": 53216247808,
                "name": "cephfs.cephfs.data",
                "used": 601165824
            }
        ],
        "metadata": [
            {
                "avail": 53216247808,
                "name": "cephfs.cephfs.meta",
                "used": 836988872
            }
        ]
    },
    "used_size": 168022691
}
[root@ceph-amk-clients-71dnpt-node9 ~]# 

Created the subvolumes with data using:

for i in {1..10}; do ceph fs subvolume create cephfs "subvol_${i}";python3 /home/cephuser/smallfile/smallfile_cli.py --operation create --threads 10 --file-size 4 --files 1000 --files-per-dir 10 --dirs-per-dir 2 --top /mnt/ceph-fue/volumes/_nogroup/subvol_${i} ;done
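
A compact variant of the same check, sampling the counter right after each delete so the value is read before the purge threads finish (assuming jq is available; adjust the volume name if needed):

# delete each data-filled subvolume and print the pending count right away
for i in {1..10}; do ceph fs subvolume rm cephfs "subvol_${i}"; ceph fs volume info cephfs | jq .pending_subvolume_deletions; done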

Regards,
Amarnath

Comment 9 errata-xmlrpc 2024-06-13 14:21:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:3925

