This bug was initially created as a copy of Bug #2125578

I am copying this bug because:

Description of problem:
The 'size' shown in the output of the 'snapshot info' command relies on rstats, which report an incorrect snapshot size: they track the size of the subvolume from which the snapshot was taken, not the size of the snapshot itself. Hence, having the 'size' field in the output of 'snapshot info' doesn't make sense until rstats are fixed. Please check the attached upstream tracker for more information.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
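For context, before this change the same 'snapshot info' command also returned a 'size' field populated from the subvolume's rstats. A sketch of what that pre-fix output looked like (the size value and timestamps here are illustrative, not taken from an actual run):

# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
    "created_at": "...",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no",
    "size": 1131413504    <-- derived from the live subvolume's rstats, not the snapshot
}

Because the subvolume keeps changing after the snapshot is taken, this value drifts away from anything resembling the snapshot's actual size, which is why the field was dropped rather than kept.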
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.
Verified with the steps below. We are not seeing 'size' in the info dict:

[root@ceph-amk-bz-8be6sz-node7 ~]# ceph fs subvolumegroup create cephfs subvolgroup_1
[root@ceph-amk-bz-8be6sz-node7 ~]# ceph fs subvolume create cephfs subvol_1 --group_name subvolgroup_1
[root@ceph-amk-bz-8be6sz-node7 ~]# ceph fs subvolume getpath cephfs subvol_1 subvolgroup_1
/volumes/subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade
[root@ceph-amk-bz-8be6sz-node7 ~]# mkdir /mnt/cephfs_fuse
[root@ceph-amk-bz-8be6sz-node7 ~]# ceph-fuse /mnt/cephfs_fuse/ -r /volumes/subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade
2022-10-03T07:31:22.295-0400 7f84d5d76180 -1 init, newargv = 0x55820bf5c7b0 newargc=15
ceph-fuse[6593]: starting ceph client
ceph-fuse[6593]: starting fuse
[root@ceph-amk-bz-8be6sz-node7 ~]# cd /mnt/cephfs_fuse/
[root@ceph-amk-bz-8be6sz-node7 cephfs_fuse]# wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest-4.11/rhcos-live.x86_64.iso
--2022-10-03 07:31:42--  https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest-4.11/rhcos-live.x86_64.iso
Connecting to mirror.openshift.com (mirror.openshift.com)|18.67.76.76|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1131413504 (1.1G) [application/octet-stream]
Saving to: ‘rhcos-live.x86_64.iso’

rhcos-live.x86_64.iso 100%[=====================================================================================================================>]   1.05G  32.7MB/s    in 34s

2022-10-03 07:33:12 (32.0 MB/s) - ‘rhcos-live.x86_64.iso’ saved [1131413504/1131413504]

[root@ceph-amk-bz-8be6sz-node7 cephfs_fuse]# ceph fs subvolume snapshot create cephfs subvol_1 snap_1 --group_name subvolgroup_1
[root@ceph-amk-bz-8be6sz-node7 cephfs_fuse]# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
    "created_at": "2022-10-03 11:33:51.888867",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no"
}
[root@ceph-amk-bz-8be6sz-node7 cephfs_fuse]# ceph versions
{
    "mon": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 20
    }
}
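As an aside, the rstats that the removed 'size' field used to be derived from can still be inspected directly on a mounted CephFS path via the ceph.dir.rbytes virtual extended attribute. A minimal sketch against the mount point used above (the byte count shown is illustrative):

# getfattr -n ceph.dir.rbytes /mnt/cephfs_fuse/
getfattr: Removing leading '/' from absolute path names
# file: mnt/cephfs_fuse/
ceph.dir.rbytes="1131413504"

This reflects the recursive byte count of the live subvolume tree, which continues to change after the snapshot is taken, illustrating why it was unsuitable as a snapshot size.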
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:1360