Description of problem:
The 'size' shown in the output of the 'snapshot info' command relies on rstats, which do not give the correct snapshot size: they track the size of the subvolume from which the snapshot was taken instead of the snapshot itself. Hence, having the 'size' field in the output of 'snapshot info' does not make sense until the rstats are fixed. Please check the attached upstream tracker for more information.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
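To illustrate where the misleading value comes from, below is a minimal sketch (not part of the original report) that reads the CephFS recursive-stat extended attribute ceph.dir.rbytes, which is the kind of rstat the 'size' field was derived from. The mount point and subvolume/snapshot paths are hypothetical and only stand in for a real deployment.

# Minimal sketch, assuming CephFS is mounted at /mnt/cephfs and the
# subvolume/snapshot paths below are illustrative placeholders.
import os

def rbytes(path: str) -> int:
    """Read the CephFS recursive-bytes rstat (ceph.dir.rbytes) for a directory."""
    return int(os.getxattr(path, "ceph.dir.rbytes"))

subvol = "/mnt/cephfs/volumes/subvolgroup_clone_status_1/subvol_clone_status"
snap = os.path.join(subvol, ".snap", "snap_1")

# Per this bug, the rstat tracks the subvolume the snapshot was taken from,
# not the data captured at snapshot time, so reporting it as the snapshot
# 'size' is misleading.
print("subvolume rbytes:", rbytes(subvol))
print("snapshot rbytes: ", rbytes(snap))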
We are no longer seeing the 'size' parameter as part of the snapshot info output:

[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph fs subvolume snapshot info cephfs subvol_clone_status snap_1 subvolgroup_clone_status_1
{
    "created_at": "2023-01-23 08:28:43.445930",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "yes",
    "pending_clones": [
        {
            "name": "clone_status_100"
        }
    ]
}
[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph fs subvolume snapshot info cephfs subvol_clone_status snap_2 subvolgroup_clone_status_1
Error ENOENT: snapshot 'snap_2' does not exist
[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph fs subvolume snapshot info cephfs subvol_clone_status snap_1 subvolgroup_clone_status_1
{
    "created_at": "2023-01-23 08:28:43.445930",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "yes",
    "pending_clones": [
        {
            "name": "clone_status_101"
        }
    ]
}
[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph fs subvolume snapshot info cephfs clone_status_1 snap_1
{
    "created_at": "2023-01-23 16:19:20.367917",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no"
}
[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph versions
{
    "mon": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 12
    },
    "mds": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 3
    },
    "overall": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 20
    }
}

Regards,
Amarnath
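For reference, the manual check above could also be scripted. The sketch below is one possible way to automate it and is not part of the original verification; it reuses the filesystem, subvolume, snapshot, and group names from the console output above and assumes the ceph CLI is available on the host.

# Sketch only: automate the "no 'size' field" check using the same names as above.
import json
import subprocess

cmd = [
    "ceph", "fs", "subvolume", "snapshot", "info",
    "cephfs", "subvol_clone_status", "snap_1", "subvolgroup_clone_status_1",
    "--format", "json",
]
info = json.loads(subprocess.check_output(cmd))

# After the fix, the rstat-based 'size' field should no longer be reported.
assert "size" not in info, "unexpected 'size' field in snapshot info output"
print("OK: 'size' field not reported; keys are:", sorted(info))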
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 5.3 Bug fix and security update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0980