Bug 2130426 - [CephFS] mgr/volumes: display in-progress clones for a snapshot
Summary: [CephFS] mgr/volumes: display in-progress clones for a snapshot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 6.0
Assignee: Kotresh HR
QA Contact: Amarnath
Docs Contact: Masauso Lungu
URL:
Whiteboard:
Depends On:
Blocks: 2126050
 
Reported: 2022-09-28 07:08 UTC by Kotresh HR
Modified: 2023-03-20 18:59 UTC
CC List: 6 users

Fixed In Version: ceph-17.2.3-45.el9cp
Doc Type: Enhancement
Doc Text:
.Users can list the in-progress or pending clones for a subvolume snapshot
Previously, there was no way of knowing the set of clone operations in progress or pending for a subvolume snapshot, unless the user knew the clone subvolume's name and used `clone status` to infer the details. With this release, for a given subvolume snapshot name, the in-progress or pending clones can be listed.
Clone Of:
Environment:
Last Closed: 2023-03-20 18:58:27 UTC
Embargoed:




Links:
Ceph Project Bug Tracker 55041 (last updated 2022-09-28 07:08:06 UTC)
Red Hat Issue Tracker RHCEPH-5373 (last updated 2022-09-28 07:44:51 UTC)
Red Hat Product Errata RHBA-2023:1360 (last updated 2023-03-20 18:59:18 UTC)

Description Kotresh HR 2022-09-28 07:08:07 UTC
This bug was initially created as a copy of Bug #2125575

I am copying this bug because: 



Description of problem:
Right now, there is no way of knowing the set of clone operations in-progress/pending for a subvolume snapshot (unless one knows the clone subvolume's name and uses `clone status` to infer the details).

Introduce a way to list in-progress/pending clones for a subvolume snapshot. This would involve fetching "clone indexes" from the subvolume metadata and resolving each index to its target subvolume (clone).

Please check the upstream tracker attached for more information.
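
For contrast, the only pre-existing check is the per-clone `clone status` query, which works only if the clone's name is already known. A minimal sketch; the volume, clone, and group names here are illustrative:

# Pre-existing per-clone query: requires knowing the clone's name up front
ceph fs clone status cephfs clone_3 --group_name subvolgroup_1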


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 8 Amarnath 2022-10-06 01:56:19 UTC
Verified on:
[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# ceph versions
{
    "mon": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 20
    }
}
[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# 



[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 clone_3 --group_name subvolgroup_1 --target_group_name subvolgroup_1
[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
    "created_at": "2022-10-06 01:35:11.115548",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "yes",
    "pending_clones": [
        {
            "name": "clone_3",
            "target_group": "subvolgroup_1"
        }
    ]
}
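
To pull just the pending clone names out of that JSON in a script, the output can be piped through jq. A sketch, assuming jq is installed; the trailing `?` guards against `pending_clones` being absent once all clones finish:

# List pending clone names for the snapshot (empty output if none are pending)
ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1 | jq -r '.pending_clones[]?.name'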
[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
    "created_at": "2022-10-06 01:35:11.115548",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no"
}
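
Note that the second query ran after the clone completed, so `pending_clones` is gone and `has_pending_clones` flipped to "no". That field also lends itself to a simple wait loop; this is an illustrative sketch, not part of the product:

# Hypothetical polling loop: block until no clones remain pending for the snapshot
while ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1 | grep -q '"has_pending_clones": "yes"'; do
    sleep 5
done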

Can `target_group` be renamed to `target_group_name`, to keep it in line with the command arguments?

Regards,
Amarnath

Comment 9 Kotresh HR 2022-10-06 05:39:53 UTC
(In reply to Amarnath from comment #8)
> Verified on 
>...
>...
> Can `target_group` be renamed to `target_group_name`, to keep it in line with
> the command arguments?

I think it should be fine. I don't see a hard requirement.

> 
> Regards,
> Amarnath

Comment 21 errata-xmlrpc 2023-03-20 18:58:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360

