Bug 2130426

Summary: [CephFS] mgr/volumes: display in-progress clones for a snapshot
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Kotresh HR <khiremat>
Component: CephFS Assignee: Kotresh HR <khiremat>
Status: CLOSED ERRATA QA Contact: Amarnath <amk>
Severity: medium Docs Contact: Masauso Lungu <mlungu>
Priority: unspecified    
Version: 5.2 CC: ceph-eng-bugs, cephqe-warriors, hyelloji, mlungu, pasik, vereddy
Target Milestone: ---   
Target Release: 6.0   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: ceph-17.2.3-45.el9cp Doc Type: Enhancement
Doc Text:
.Users can list the in-progress or pending clones for a subvolume snapshot
Previously, there was no way of knowing the set of clone operations in-progress or pending for a subvolume snapshot, unless the user knew the clone subvolume’s name and used `clone status` to infer the details. With this release, for a given subvolume snapshot name, the in-progress or pending clones can be listed.
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-03-20 18:58:27 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2126050    

Description Kotresh HR 2022-09-28 07:08:07 UTC
This bug was initially created as a copy of Bug #2125575

I am copying this bug because: 



Description of problem:
Right now, there is no way of knowing the set of clone operations in-progress/pending for a subvolume snapshot (unless one knows the clone subvolume name and uses `clone status` to infer the details).
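For illustration, the existing workaround looks roughly like this (clone and group names are only examples, matching the ones used in the verification later in this bug), and it only helps if the clone's name is already known:

# report the state of one specific, already-known clone
ceph fs clone status cephfs clone_3 --group_name subvolgroup_1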

Introduce a way to list in-progress/pending clones for a subvolume snapshot. This would involve fetching "clone indexes" from subvolume metadata and resolving the index to the target subvolume (clone).
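As a rough sketch of that resolution idea in shell terms (the path and entry name below are illustrative only, not necessarily the exact mgr/volumes on-disk layout):

# each clone index entry kept under the volume's metadata tree ...
ls /mnt/cephfs/volumes/_index/clone/
# ... is resolved back to the path of its target subvolume (the clone)
readlink /mnt/cephfs/volumes/_index/clone/<index-entry>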

Please check the upstream tracker attached for more information.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 8 Amarnath 2022-10-06 01:56:19 UTC
Verified on 
[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# ceph versions
{
    "mon": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 3
    },
    "mgr": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 2
    },
    "osd": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 12
    },
    "mds": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 3
    },
    "overall": {
        "ceph version 17.2.3-47.el9cp (1a00ea1102fdf76fbf54be3e10d21cc19fc61270) quincy (stable)": 20
    }
}
[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# 



[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 clone_3 --group_name subvolgroup_1 --target_group_name subvolgroup_1
[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
    "created_at": "2022-10-06 01:35:11.115548",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "yes",
    "pending_clones": [
        {
            "name": "clone_3",
            "target_group": "subvolgroup_1"
        }
    ]
}
[root@ceph-amk-up-tmh9aw-node7 b88f45dd-0dac-4296-9108-5e533252c63e]# ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1
{
    "created_at": "2022-10-06 01:35:11.115548",
    "data_pool": "cephfs.cephfs.data",
    "has_pending_clones": "no"
}
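
As an aside, a minimal sketch of pulling the pending clone names out of this output from a script, assuming `jq` is available (this was not part of the verification run above):

ceph fs subvolume snapshot info cephfs subvol_1 snap_1 --group_name subvolgroup_1 | jq -r '.pending_clones[]?.name'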

Can target_group be renamed to target_group_name, to keep it in line with the command arguments?

Regards,
Amarnath

Comment 9 Kotresh HR 2022-10-06 05:39:53 UTC
(In reply to Amarnath from comment #8)
> Verified on 
>...
>...
> Can target_group be renamed to target_group_name, to keep it in line with
> the command arguments?

I think it should be fine. I don't see a hard requirement.

> 
> Regards,
> Amarnath

Comment 21 errata-xmlrpc 2023-03-20 18:58:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 6.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1360