Bug 1894038

Summary: pybind/mgr/volumes: Make number of cloner threads configurable
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Kotresh HR <khiremat>
Component: CephFS
Assignee: Kotresh HR <khiremat>
Status: CLOSED ERRATA
QA Contact: Amarnath <amk>
Severity: low
Docs Contact: Ranjini M N <rmandyam>
Priority: medium
Version: 4.1
CC: ceph-eng-bugs, rmandyam, tserlin, vereddy
Target Milestone: ---   
Target Release: 4.3   
Hardware: All   
OS: All   
Whiteboard:
Fixed In Version: ceph-14.2.22-2.el8cp, ceph-14.2.22-2.el7cp
Doc Type: Enhancement
Doc Text:
.Use `max_concurrent_clones` option to configure the number of clone threads

Previously, the number of concurrent clones was not configurable and the default was 4. With this release, the maximum number of concurrent clones is configurable using the manager configuration option:

.Syntax
[source,subs="verbatim,quotes"]
----
ceph config set mgr mgr/volumes/max_concurrent_clones _VALUE_
----

Increasing the maximum number of concurrent clones could improve the performance of the storage cluster.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-05-05 07:53:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2031070    

Description Kotresh HR 2020-11-03 12:01:17 UTC
Description of problem:
The number of cloner threads is hardcoded to 4 and cannot be configured.
This is a bottleneck when the system resources are capable of handling
more parallel clones. Hence, provide an option to configure the number
of cloner threads.

Please check the upstream tracker for more details.
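For illustration, a minimal sketch of using the new option once it is available (the value 8 and the MGR_ID placeholder are examples, not values from this bug; the command syntax matches the Doc Text above and the verification in the comments below):

# Raise the cap on concurrent clones from the default of 4 to, e.g., 8
ceph config set mgr mgr/volumes/max_concurrent_clones 8

# Read the value back from the active manager; replace MGR_ID with the
# manager instance name (e.g. mgr.<hostname>)
ceph config get mgr.MGR_ID mgr/volumes/max_concurrent_clones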


Comment 1 Hemanth Kumar 2021-10-13 06:55:36 UTC
@khiremat - Are we planning to have this in 4.3? Any update on this?

Comment 7 Amarnath 2021-10-26 12:36:15 UTC
Functionality-wise, it is working as expected.

[root@ceph-amk4-uafwty-node7 ~]# ceph fs subvolume snapshot create cephfs subvol_1 snap_1 --group_name subvolgroup_1
[root@ceph-amk4-uafwty-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 clone_1 --group_name subvolgroup_1
[root@ceph-amk4-uafwty-node7 ~]# ceph fs clone status cephfs clone_1
{
  "status": {
    "source": {
      "volume": "cephfs", 
      "group": "subvolgroup_1", 
      "snapshot": "snap_1", 
      "subvolume": "subvol_1"
    }, 
    "state": "in-progress"
  }
}
[root@ceph-amk4-uafwty-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 clone_2 --group_name subvolgroup_1
[root@ceph-amk4-uafwty-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 clone_3 --group_name subvolgroup_1
[root@ceph-amk4-uafwty-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 clone_4 --group_name subvolgroup_1
[root@ceph-amk4-uafwty-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 clone_5 --group_name subvolgroup_1
[root@ceph-amk4-uafwty-node7 ~]# ceph fs clone status cephfs clone_1
{
  "status": {
    "source": {
      "volume": "cephfs", 
      "group": "subvolgroup_1", 
      "snapshot": "snap_1", 
      "subvolume": "subvol_1"
    }, 
    "state": "in-progress"
  }
}
[root@ceph-amk4-uafwty-node7 ~]# ceph fs clone status cephfs clone_2
{
  "status": {
    "source": {
      "volume": "cephfs", 
      "group": "subvolgroup_1", 
      "snapshot": "snap_1", 
      "subvolume": "subvol_1"
    }, 
    "state": "in-progress"
  }
}
[root@ceph-amk4-uafwty-node7 ~]# ceph fs clone status cephfs clone_3
{
  "status": {
    "source": {
      "volume": "cephfs", 
      "group": "subvolgroup_1", 
      "snapshot": "snap_1", 
      "subvolume": "subvol_1"
    }, 
    "state": "in-progress"
  }
}
[root@ceph-amk4-uafwty-node7 ~]# ceph fs clone status cephfs clone_4
{
  "status": {
    "source": {
      "volume": "cephfs", 
      "group": "subvolgroup_1", 
      "snapshot": "snap_1", 
      "subvolume": "subvol_1"
    }, 
    "state": "in-progress"
  }
}
[root@ceph-amk4-uafwty-node7 ~]# ceph fs clone status cephfs clone_5
{
  "status": {
    "source": {
      "volume": "cephfs", 
      "group": "subvolgroup_1", 
      "snapshot": "snap_1", 
      "subvolume": "subvol_1"
    }, 
    "state": "pending"
  }
}
[root@ceph-amk4-uafwty-node7 ~]# ceph config set mgr mgr/volumes/max_concurrent_clones 2
[root@ceph-amk4-uafwty-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 clone_6 --group_name subvolgroup_1
[root@ceph-amk4-uafwty-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 clone_7 --group_name subvolgroup_1
[root@ceph-amk4-uafwty-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 clone_8 --group_name subvolgroup_1
[root@ceph-amk4-uafwty-node7 ~]# ceph fs subvolume snapshot clone cephfs subvol_1 snap_1 clone_9 --group_name subvolgroup_1
[root@ceph-amk4-uafwty-node7 ~]# ceph fs clone status cephfs clone_6
{
  "status": {
    "source": {
      "volume": "cephfs", 
      "group": "subvolgroup_1", 
      "snapshot": "snap_1", 
      "subvolume": "subvol_1"
    }, 
    "state": "in-progress"
  }
}
[root@ceph-amk4-uafwty-node7 ~]# ceph fs clone status cephfs clone_7
{
  "status": {
    "source": {
      "volume": "cephfs", 
      "group": "subvolgroup_1", 
      "snapshot": "snap_1", 
      "subvolume": "subvol_1"
    }, 
    "state": "in-progress"
  }
}
[root@ceph-amk4-uafwty-node7 ~]# ceph fs clone status cephfs clone_8
{
  "status": {
    "source": {
      "volume": "cephfs", 
      "group": "subvolgroup_1", 
      "snapshot": "snap_1", 
      "subvolume": "subvol_1"
    }, 
    "state": "pending"
  }
}
[root@ceph-amk4-uafwty-node7 ~]# ceph fs clone status cephfs clone_9
{
  "status": {
    "source": {
      "volume": "cephfs", 
      "group": "subvolgroup_1", 
      "snapshot": "snap_1", 
      "subvolume": "subvol_1"
    }, 
    "state": "pending"
  }
}
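As a side note, not part of the verification above: a small bash sketch for tallying the clone states in one pass (it assumes the clone names clone_1 through clone_9 created in this run; the counts will shift as clones complete):

# Report each clone's state and count how many are in-progress,
# to confirm the configured cap (4 by default, 2 after the change above).
in_progress=0
for c in clone_{1..9}; do
    state=$(ceph fs clone status cephfs "$c" | grep -o '"state": "[^"]*"')
    echo "$c: ${state:-unknown}"
    case "$state" in
        *in-progress*) in_progress=$((in_progress + 1)) ;;
    esac
done
echo "clones in progress: $in_progress"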

Comment 9 Amarnath 2021-10-28 14:07:20 UTC
Thanks, Kotresh.

The command works fine in 4.3 builds:
[root@ceph-amk4-4idci5-node7 ~]# ceph config get mgr.ceph-amk4-4idci5-node2 mgr/volumes/max_concurrent_clones
2
[root@ceph-amk4-4idci5-node7 ~]#

Comment 17 errata-xmlrpc 2022-05-05 07:53:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 4.3 Security and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1716