Bug 2125578 - [CephFS] mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
Summary: [CephFS] mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.3z1
Assignee: Kotresh HR
QA Contact: Amarnath
 
Reported: 2022-09-09 10:51 UTC by Kotresh HR
Modified: 2023-02-28 10:06 UTC
CC List: 5 users

Fixed In Version: ceph-16.2.10-100.el8cp
Last Closed: 2023-02-28 10:05:18 UTC




Links:
- Ceph Project Bug Tracker 55822 (last updated 2022-09-09 10:51:27 UTC)
- Red Hat Issue Tracker RHCEPH-5658 (last updated 2022-11-21 06:01:10 UTC)
- Red Hat Product Errata RHSA-2023:0980 (last updated 2023-02-28 10:06:07 UTC)

Description Kotresh HR 2022-09-09 10:51:28 UTC
Description of problem:
The 'size' shown in the output of the 'snapshot info' command relies on rstats,
which do not reflect the actual snapshot size. The rstats track the size of the
subvolume at the time the snapshot was taken, not the size of the snapshot
itself. Hence, including the 'size' field in the output of 'snapshot info' does
not make sense until rstats are fixed.

Please check attached upstream tracker for more information.
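
For background, rstats are exposed to CephFS clients as virtual extended
attributes. A minimal sketch of how the recursive size can be inspected on a
client mount follows; the mount point and subvolume/snapshot names are
assumptions for illustration, not taken from this bug:

# Illustrative sketch; /mnt/cephfs and the names below are hypothetical.
# rstats are exposed as virtual xattrs such as ceph.dir.rbytes.
getfattr -n ceph.dir.rbytes /mnt/cephfs/volumes/subvolgroup_1/subvol_1
getfattr -n ceph.dir.rbytes /mnt/cephfs/volumes/subvolgroup_1/subvol_1/.snap/snap_1

The second value reflects the subvolume tree as it was when the snapshot was
taken, not the space consumed by the snapshot itself, which is why the 'size'
field derived from it was misleading.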


Comment 9 Amarnath 2023-01-23 18:37:12 UTC
We are no longer seeing the 'size' parameter in the output of 'snapshot info':

[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph fs subvolume snapshot info cephfs subvol_clone_status snap_1 subvolgroup_clone_status_1
{
    "created_at": "2023-01-23 08:28:43.445930",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "yes",
    "pending_clones": [
        {
            "name": "clone_status_100"
        }
    ]
}
[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph fs subvolume snapshot info cephfs subvol_clone_status snap_2 subvolgroup_clone_status_1
Error ENOENT: snapshot 'snap_2' does not exist
[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph fs subvolume snapshot info cephfs subvol_clone_status snap_1 subvolgroup_clone_status_1
{
    "created_at": "2023-01-23 08:28:43.445930",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "yes",
    "pending_clones": [
        {
            "name": "clone_status_101"
        }
    ]
}
[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph fs subvolume snapshot info cephfs clone_status_1 snap_1 
{
    "created_at": "2023-01-23 16:19:20.367917",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no"
}
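
For reference (not part of the original run), the snapshots present on the
subvolume can be listed with:

ceph fs subvolume snapshot ls cephfs subvol_clone_status subvolgroup_clone_status_1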

[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph versions
{
    "mon": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 12
    },
    "mds": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 3
    },
    "overall": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 20
    }
}

Regards,
Amarnath

Comment 10 errata-xmlrpc 2023-02-28 10:05:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 5.3 Bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0980

