Bug 2125578
| Summary: | [CephFS] mgr/volumes: Remove incorrect 'size' in the output of 'snapshot info' command | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Kotresh HR <khiremat> |
| Component: | CephFS | Assignee: | Kotresh HR <khiremat> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 5.2 | CC: | ceph-eng-bugs, cephqe-warriors, hyelloji, tserlin, vshankar |
| Target Milestone: | --- | Keywords: | CodeChange |
| Target Release: | 5.3z1 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.10-100.el8cp | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-02-28 10:05:18 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description: Kotresh HR, 2022-09-09 10:51:28 UTC
We are no longer seeing the 'size' parameter in the output of the 'snapshot info' command:
```
[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph fs subvolume snapshot info cephfs subvol_clone_status snap_1 subvolgroup_clone_status_1
{
    "created_at": "2023-01-23 08:28:43.445930",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "yes",
    "pending_clones": [
        {
            "name": "clone_status_100"
        }
    ]
}
[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph fs subvolume snapshot info cephfs subvol_clone_status snap_2 subvolgroup_clone_status_1
Error ENOENT: snapshot 'snap_2' does not exist
[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph fs subvolume snapshot info cephfs subvol_clone_status snap_1 subvolgroup_clone_status_1
{
    "created_at": "2023-01-23 08:28:43.445930",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "yes",
    "pending_clones": [
        {
            "name": "clone_status_101"
        }
    ]
}
[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph fs subvolume snapshot info cephfs clone_status_1 snap_1
{
    "created_at": "2023-01-23 16:19:20.367917",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no"
}
```
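For quick re-verification, a minimal sketch is shown below. It assumes the `ceph` CLI is reachable from the node and reuses the subvolume/snapshot names from the session above; it parses the command's JSON output and asserts the 'size' field is absent.

```python
import json
import subprocess

# Names reused from the session above; adjust for your cluster.
CMD = [
    "ceph", "fs", "subvolume", "snapshot", "info",
    "cephfs", "subvol_clone_status", "snap_1", "subvolgroup_clone_status_1",
    "--format", "json",
]

# Run the command and parse the JSON it prints on stdout.
info = json.loads(subprocess.check_output(CMD))

# After the fix, the 'size' key should no longer appear in snapshot info.
assert "size" not in info, "unexpected 'size' field in snapshot info output"
print("OK, snapshot info fields:", sorted(info))
```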
```
[root@ceph-amk-bootstrap-clx3kj-node7 ec9d72a9-b955-4961-96f2-435b9b5843b7]# ceph versions
{
    "mon": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 3
    },
    "mgr": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 2
    },
    "osd": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 12
    },
    "mds": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 3
    },
    "overall": {
        "ceph version 16.2.10-103.el8cp (4a5dd59c2e6616f05cc94e6aab2bddf1339ca4f4) pacific (stable)": 20
    }
}
```
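As a rough cross-check against the Fixed In Version above (ceph-16.2.10-100.el8cp), here is a small sketch that parses the `ceph versions` JSON and flags any daemons still running an older build. It assumes the standard 'ceph version X.Y.Z-R...' banner format shown in the output above.

```python
import json
import re
import subprocess

# Fixed-in build taken from this bug: ceph-16.2.10-100.el8cp.
FIXED = (16, 2, 10, 100)

def parse_banner(banner):
    """Return (major, minor, patch, release) from a 'ceph version X.Y.Z-R ...' banner, or None."""
    m = re.search(r"ceph version (\d+)\.(\d+)\.(\d+)-(\d+)", banner)
    return tuple(int(g) for g in m.groups()) if m else None

# 'ceph versions' prints JSON keyed by daemon type, as in the output above.
versions = json.loads(subprocess.check_output(["ceph", "versions"]))

for banner, count in versions["overall"].items():
    v = parse_banner(banner)
    status = "at or above the fixed build" if v and v >= FIXED else "older than the fixed build"
    print(f"{count} daemon(s) on '{banner}': {status}")
```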
Regards,
Amarnath
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 5.3 Bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:0980