This bug was initially created as a copy of Bug #2125575

I am copying this bug because:

Description of problem:
Right now, there is no way of knowing the set of clone operations in-progress/pending for a subvolume snapshot (unless one knows the clone subvolume name and uses `clone status` to infer the details). Introduce a way to list in-progress/pending clones for a subvolume snapshot. This would involve fetching "clone indexes" from subvolume metadata and resolving each index to the target subvolume (clone). Please check the upstream tracker attached for more information.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
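For illustration only (not part of the original report): once the feature is available, the pending clones for a snapshot are expected to be surfaced through the existing snapshot info command, roughly along the lines of the sketch below. The fs, subvolume, snapshot, and group names here are placeholders.

# List in-progress/pending clones for a snapshot (sketch; names are placeholders)
ceph fs subvolume snapshot info <fs_name> <subvol_name> <snap_name> --group_name <group_name>
# The JSON output is expected to carry a "has_pending_clones" flag and, when
# clones are pending, a "pending_clones" list naming each target clone.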
Verified on

[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# ceph versions
{
    "mon": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 20
    }
}

[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 clone_3 --group_name subvolgroup_1 --target_group_name subvolgroup_1

[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
    "created_at": "2022-10-06 01:35:11.115548",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "yes",
    "pending_clones": [
        {
            "name": "clone_3",
            "target_group": "subvolgroup_1"
        }
    ]
}

[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
    "created_at": "2022-10-06 01:35:11.115548",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no"
}

Can target_group be renamed to target_group_name, to keep it in line with the command arguments?

Regards,
Amarnath
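As a minimal scripting sketch (not part of the verification above, and assuming jq is installed), the has_pending_clones flag shown in the output here can be polled to wait until all clones of the snapshot have finished; the fs, subvolume, snapshot, and group names are taken from the run above.

# Poll until no clones are pending for snap_1 (illustrative only)
while [ "$(ceph fs subvolume snapshot info cephfs subvol_1 snap_1 \
            --group_name subvolgroup_1 | jq -r '.has_pending_clones')" = "yes" ]; do
    sleep 5
done
echo "no pending clones remain for snap_1"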
(In reply to Amarnath from comment #8)
> Verified on
> ...
> ...
> Can target_group be renamed to target_group_name, to keep it in line with
> the command arguments?

I think it should be fine. I don't see a hard requirement.

>
> Regards,
> Amarnath
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:1360