Description of problem:

pybind/mgr/volumes: pending_subvolume_deletions count is always zero in fs volume info output

https://tracker.ceph.com/issues/62278

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
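A minimal reproduction sketch (the volume name "cephfs" and the subvolume name are placeholders for illustration):

    # Create and remove a subvolume, then check the reported counter.
    ceph fs subvolume create cephfs subvol_test
    ceph fs subvolume rm cephfs subvol_test
    # pending_subvolume_deletions should reflect subvolumes still queued
    # for purge, but it is reported as 0.
    ceph fs volume info cephfs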
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.
Hi Kotresh,

I created 100 subvolumes and deleted them, but I don't see any change in the pending_subvolume_deletions count:

[root@ceph-amk-mirror-e90msi-node8 nfs_9GNKZ]# for i in {1..100}; do ceph fs subvolume create cephfs "subvol_${i}"; done
[root@ceph-amk-mirror-e90msi-node8 nfs_9GNKZ]# for i in {1..100}; do ceph fs subvolume rm cephfs "subvol_${i}"; done
[root@ceph-amk-mirror-e90msi-node8 nfs_9GNKZ]# ceph fs subvolume ls cephfs
[]
[root@ceph-amk-mirror-e90msi-node8 nfs_9GNKZ]# ceph fs volume info cephfs
{
    "mon_addrs": [
        "10.0.211.109:6789",
        "10.0.208.154:6789",
        "10.0.210.104:6789"
    ],
    "pending_subvolume_deletions": 0,
    "pools": {
        "data": [
            {
                "avail": 55874289664,
                "name": "cephfs.cephfs.data",
                "used": 2579496960
            }
        ],
        "metadata": [
            {
                "avail": 55874289664,
                "name": "cephfs.cephfs.meta",
                "used": 611917824
            }
        ]
    },
    "used_size": 0
}

Regards,
Amarnath
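One possibility is that empty subvolumes are purged almost immediately, so the counter may already be back to 0 by the time "ceph fs volume info" runs. A way to cross-check is to look at the volume's internal trash directory directly (a sketch; the mount point /mnt/cephfs is an assumption, and the staging path /volumes/_deleting used by the purge threads is assumed from the mgr/volumes layout):

    # Mount the volume root with admin credentials and inspect the trash.
    mkdir -p /mnt/cephfs
    ceph-fuse /mnt/cephfs
    # Entries here correspond to subvolumes still pending deletion.
    ls /mnt/cephfs/volumes/_deleting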
Is anyone actively working on this BZ? I see that the upstream tracker has been closed. We have approximately 1 week until the final RC build for 7.1.
Hi All,

After adding data to the newly created subvolumes, we can see pending_subvolume_deletions getting updated:

[root@ceph-amk-clients-71dnpt-node9 ~]# ceph fs volume info cephfs
{
    "mon_addrs": [
        "10.0.210.19:6789",
        "10.0.208.103:6789",
        "10.0.210.115:6789"
    ],
    "pending_subvolume_deletions": 0,
    "pools": {
        "data": [
            {
                "avail": 53018558464,
                "name": "cephfs.cephfs.data",
                "used": 467435520
            }
        ],
        "metadata": [
            {
                "avail": 53018558464,
                "name": "cephfs.cephfs.meta",
                "used": 722640822
            }
        ]
    },
    "used_size": 155189794
}
[root@ceph-amk-clients-71dnpt-node9 ~]# ceph fs subvolume rm cephfs "subvol_1"
[root@ceph-amk-clients-71dnpt-node9 ~]# ceph fs volume info cephfs
{
    "mon_addrs": [
        "10.0.210.19:6789",
        "10.0.208.103:6789",
        "10.0.210.115:6789"
    ],
    "pending_subvolume_deletions": 1,
    "pools": {
        "data": [
            {
                "avail": 53479243776,
                "name": "cephfs.cephfs.data",
                "used": 491692032
            }
        ],
        "metadata": [
            {
                "avail": 53479243776,
                "name": "cephfs.cephfs.meta",
                "used": 739921864
            }
        ]
    },
    "used_size": 155767459
}
[root@ceph-amk-clients-71dnpt-node9 ~]# ceph fs volume info cephfs
{
    "mon_addrs": [
        "10.0.210.19:6789",
        "10.0.208.103:6789",
        "10.0.210.115:6789"
    ],
    "pending_subvolume_deletions": 1,
    "pools": {
        "data": [
            {
                "avail": 53372796928,
                "name": "cephfs.cephfs.data",
                "used": 538193920
            }
        ],
        "metadata": [
            {
                "avail": 53372796928,
                "name": "cephfs.cephfs.meta",
                "used": 773410760
            }
        ]
    },
    "used_size": 155812515
}
[root@ceph-amk-clients-71dnpt-node9 ~]# ceph fs subvolume rm cephfs "subvol_2"
[root@ceph-amk-clients-71dnpt-node9 ~]# ceph fs volume info cephfs
{
    "mon_addrs": [
        "10.0.210.19:6789",
        "10.0.208.103:6789",
        "10.0.210.115:6789"
    ],
    "pending_subvolume_deletions": 2,
    "pools": {
        "data": [
            {
                "avail": 53216247808,
                "name": "cephfs.cephfs.data",
                "used": 601165824
            }
        ],
        "metadata": [
            {
                "avail": 53216247808,
                "name": "cephfs.cephfs.meta",
                "used": 836988872
            }
        ]
    },
    "used_size": 168022691
}

The subvolumes were created and populated with data using:

for i in {1..10}; do
    ceph fs subvolume create cephfs "subvol_${i}"
    python3 /home/cephuser/smallfile/smallfile_cli.py --operation create --threads 10 --file-size 4 --files 1000 --files-per-dir 10 --dirs-per-dir 2 --top /mnt/ceph-fue/volumes/_nogroup/subvol_${i}
done

Regards,
Amarnath
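To watch the counter rise and then drain as the purge threads catch up, a small polling loop like the one below can be used (a sketch; it assumes the data-filled subvolumes created above and that jq is available to filter the JSON output):

    # Remove the remaining data-filled subvolumes.
    for i in {3..10}; do ceph fs subvolume rm cephfs "subvol_${i}"; done
    # Poll the counter every few seconds; it should climb and then fall back to 0.
    for n in {1..20}; do
        ceph fs volume info cephfs | jq .pending_subvolume_deletions
        sleep 5
    done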
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:3925