Bug 2130422 - [CephFS] mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
Summary: [CephFS] mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 6.0
Assignee: Kotresh HR
QA Contact: Amarnath
Docs Contact: Masauso Lungu
URL:
Whiteboard:
Depends On:
Blocks: 2126050
 
Reported: 2022-09-28 06:56 UTC by Kotresh HR
Modified: 2023-03-20 18:59 UTC
CC List: 6 users

Fixed In Version: ceph-17.2.3-45.el9cp
Doc Type: Bug Fix
Doc Text:
.The 'subvolume snapshot info' command no longer has the 'size' field in its output
Previously, the output of the `subvolume snapshot info` command returned an incorrect snapshot `size`. The `snapshot info` command relies on `rstats` to track the snapshot size, but `rstats` track the size of the corresponding subvolume instead of the snapshot itself. With this fix, the `size` field is removed from the output of the `snapshot info` command until `rstats` is fixed.
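For illustration, a hedged sketch of the pre-fix behaviour (the command and field names follow the verification in comment 6; the `size` value shown here is an example, not a measured result):

# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
    "created_at": "2022-10-03 11:33:51.888867",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no",
    "size": 1131413504
}

With the fix, the same command returns the output shown in comment 6, without the `size` field.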
Clone Of:
Environment:
Last Closed: 2023-03-20 18:58:27 UTC
Embargoed:


Links
System ID                               Private  Priority  Status  Summary  Last Updated
Ceph Project Bug Tracker 55822          0        None      None    None     2022-09-28 06:56:30 UTC
Red Hat Issue Tracker RHCEPH-5372       0        None      None    None     2022-09-28 07:44:17 UTC
Red Hat Product Errata RHBA-2023:1360   0        None      None    None     2023-03-20 18:59:18 UTC

Description Kotresh HR 2022-09-28 06:56:31 UTC
This bug was initially created as a copy of Bug #2125578

I am copying this bug because: 



Description of problem:
The 'size' shown in the output of the 'snapshot info' command relies on rstats, which
do not report the correct snapshot size. rstats track the size of the subvolume the
snapshot was taken from, not the snapshot itself. Hence, having the
'size' field in the output of 'snapshot info' doesn't make sense until
rstats are fixed.
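
For illustration, a minimal sketch of what rstats report, assuming a client mount of the subvolume at /mnt/cephfs_fuse (the path and byte count are examples; ceph.dir.rbytes is the rstats-backed recursive byte count of a directory):

# getfattr -n ceph.dir.rbytes /mnt/cephfs_fuse/
# file: mnt/cephfs_fuse/
ceph.dir.rbytes="1131413504"

Because this recursive count follows the live subvolume, the 'size' that 'snapshot info' derived from it changes as the subvolume changes and does not describe the data captured at snapshot time.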

Please check attached upstream tracker for more information.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 RHEL Program Management 2022-09-28 06:56:39 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 6 Amarnath 2022-10-03 11:38:21 UTC
Verified with the steps below.
We are not seeing 'size' in the info dict.

[root@ceph-amk-bz-8be6sz-node7 ~]# ceph fs subvolumegroup create cephfs subvolgroup_1
[root@ceph-amk-bz-8be6sz-node7 ~]# ceph fs subvolume create cephfs subvol_1 --group_name subvolgroup_1
[root@ceph-amk-bz-8be6sz-node7 ~]# ceph fs subvolume getpath cephfs subvol_1 subvolgroup_1
/volumes/subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade
[root@ceph-amk-bz-8be6sz-node7 ~]#  mkdir /mnt/cephfs_fuse
[root@ceph-amk-bz-8be6sz-node7 ~]# ceph-fuse /mnt/cephfs_fuse/ -r  /volumes/subvolgroup_1/subvol_1/b735d599-c424-4b25-b3a8-8f5c54cc4ade
2022-10-03T07:31:22.295-0400 7f84d5d76180 -1 init, newargv = 0x55820bf5c7b0 newargc=15
ceph-fuse[6593]: starting ceph client
ceph-fuse[6593]: starting fuse
[root@ceph-amk-bz-8be6sz-node7 ~]# cd /mnt/cephfs_fuse/
[root@ceph-amk-bz-8be6sz-node7 cephfs_fuse]# wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest-4.11/rhcos-live.x86_64.iso
--2022-10-03 07:31:42--  https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/pre-release/latest-4.11/rhcos-live.x86_64.iso
Connecting to mirror.openshift.com (mirror.openshift.com)|18.67.76.76|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1131413504 (1.1G) [application/octet-stream]
Saving to: ‘rhcos-live.x86_64.iso’

rhcos-live.x86_64.iso                                100%[=====================================================================================================================>]   1.05G  32.7MB/s    in 34s     

2022-10-03 07:33:12 (32.0 MB/s) - ‘rhcos-live.x86_64.iso’ saved [1131413504/1131413504]

[root@ceph-amk-bz-8be6sz-node7 cephfs_fuse]# ceph fs subvolume snapshot create cephfs subvol_1 snap_1 --group_name subvolgroup_1
[root@ceph-amk-bz-8be6sz-node7 cephfs_fuse]# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
    "created_at": "2022-10-03 11:33:51.888867",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no"
}
[root@ceph-amk-bz-8be6sz-node7 cephfs_fuse]# 
[root@ceph-amk-bz-8be6sz-node7 cephfs_fuse]# ceph versions
{
    "mon": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.3-46.el9cp (df049de85f82b273a1b804f620c8f95d8dabec66) quincy (stable)": 20
    }
}
[root@ceph-amk-bz-8be6sz-node7 cephfs_fuse]#

Comment 20 errata-xmlrpc 2023-03-20 18:58:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360

