Description of problem:

When cinder.conf sets rbd_max_clone_depth = 1, cinder-volume fails to clone volumes whose existing clone chain is deeper than the new limit. cinder.volume.manager does not appear to handle a reduced value of rbd_max_clone_depth: I'm not seeing any code path that calls flatten() for existing RBD images already exceeding the new clone depth. In this instance, the customer updated:

    rbd_max_clone_depth = 1   # default: 5

REF: https://docs.openstack.org/cinder/queens/sample_config.html

    # Maximum number of nested volume clones that are taken before a flatten
    # occurs. Set to 0 to disable cloning. (integer value)
    #rbd_max_clone_depth = 5

Version-Release number of selected component (if applicable):

Distro:
[redhat-release] Red Hat Enterprise Linux Server release 7.8 (Maipo)
[rhosp-release]  Red Hat OpenStack Platform release 13.0.12 (Queens)
[os-release]     Red Hat Cloud Infrastructure 7.8 (Maipo)

openstack-cinder-12.0.10-11.el7ost.noarch     Wed Jul 22 17:30:43 2020
puppet-cinder-12.4.1-7.el7ost.noarch          Wed Jul 22 17:29:05 2020
python2-cinderclient-3.5.0-2.el7ost.noarch    Wed Jul 22 17:29:19 2020
python-cinder-12.0.10-11.el7ost.noarch        Wed Jul 22 17:30:26 2020

How reproducible:

Change #rbd_max_clone_depth = 5 to rbd_max_clone_depth = 1, restart cinder-volume, then clone a volume whose existing clone chain is deeper than the new limit.

Actual results:
~~~
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager Traceback (most recent call last):
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     result = task.execute(**arguments)
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 1034, in execute
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     context, volume, **volume_spec)
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 492, in _create_from_source_volume
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     model_update = self.driver.create_cloned_volume(volume, srcvol_ref)
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 614, in create_cloned_volume
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     depth = self._get_clone_depth(client, src_name)
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 548, in _get_clone_depth
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     return self._get_clone_depth(client, parent, depth + 1)
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 548, in _get_clone_depth
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     return self._get_clone_depth(client, parent, depth + 1)
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 546, in _get_clone_depth
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     (self.configuration.rbd_max_clone_depth))
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager Exception: clone depth exceeds limit of 1
~~~

Expected results:

The clone operation completes, flattening existing chains as needed, without raising an exception in the cinder volume manager.
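For context, here is a sketch of the recursion the traceback points at, reconstructed from the frames above. It is not a verbatim copy of rbd.py, and the _get_parent_name helper is a hypothetical stand-in for however the driver resolves an image's parent:

~~~
# Sketch of cinder/volume/drivers/rbd.py:_get_clone_depth (Queens),
# reconstructed from the traceback above -- not a verbatim copy.
def _get_clone_depth(self, client, volume_name, depth=0):
    """Count the ancestral clones of volume_name by walking parent images."""
    parent = self._get_parent_name(client, volume_name)  # hypothetical helper
    if not parent:
        return depth

    # The driver assumes every existing chain already respects the limit
    # (a flatten should have capped it), so a deeper chain raises. This is
    # exactly what breaks when rbd_max_clone_depth is *lowered* on a
    # deployment whose chains were built under the old, larger limit.
    if depth > self.configuration.rbd_max_clone_depth:
        raise Exception("clone depth exceeds limit of %s"
                        % self.configuration.rbd_max_clone_depth)

    return self._get_clone_depth(client, parent, depth + 1)
~~~

With rbd_max_clone_depth = 1 and a pre-existing chain of depth 5, the depth counter trips the check before create_cloned_volume() ever gets a chance to flatten anything.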
I've committed the following fix [1] upstream ... let's see how it goes.

[1] https://review.opendev.org/#/c/759328
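For reference, a sketch of the general approach such a fix would take (an assumption about the shape of the change, not the literal patch; see the review above for the real code): let the depth counter simply report what it finds instead of raising, and leave the flatten decision to the caller.

~~~
# Sketch of the approach only -- see the review for the actual patch.
# The counting helper reports the depth it finds rather than raising when
# a pre-existing chain is deeper than the (new, lower) limit.
def _get_clone_depth(self, client, volume_name, depth=0):
    parent = self._get_parent_name(client, volume_name)  # hypothetical helper
    if not parent:
        return depth
    return self._get_clone_depth(client, parent, depth + 1)

# create_cloned_volume() can then compare the reported depth against
# rbd_max_clone_depth and flatten the new clone when the limit is hit,
# instead of failing before the clone is even attempted.
~~~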
Verified on:
openstack-cinder-12.0.10-21.el7ost.noarch

On a preexisting system where I had previously created 5 cinder volumes, each one a clone of the prior volume:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name        | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| 01f71541-797e-48bf-8121-739c63e15cbd | available | volG_clone6 | 2    | tripleo     | false    |             |
| 12f63e6b-fd35-4679-af2c-7cd3c7fb7dee | available | volA        | 2    | tripleo     | false    |             |
| 34bff18c-ed4d-41b2-8f63-162c5486e6ef | available | volB_clone1 | 2    | tripleo     | false    |             |
| 839194d8-5d50-415d-8466-cf85cf189d22 | available | volE_clone4 | 2    | tripleo     | false    |             |
| ca84ca86-bbcd-4f1d-935d-b44f364769b8 | available | volD_clone3 | 2    | tripleo     | false    |             |
| d304aa90-0409-4508-bf81-306a9f686f90 | available | volC_clone2 | 2    | tripleo     | false    |             |
| fbdd6c64-c121-45ab-b63e-00eed3996061 | available | volF_clone5 | 2    | tripleo     | false    |             |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+

The default value was unchanged when the original volumes were created:

[root@controller-0 ~]# grep -irn rbd_max /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf
2990:#rbd_max_clone_depth = 5

Now let's reduce rbd_max_clone_depth to 1, restart c-vol, and see what happens. We shouldn't hit any tracebacks; nothing so far for a few minutes after the service restarted, good.

Now let's create some new volumes:

(overcloud) [stack@undercloud-0 ~]$ cinder create 2 --name test
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-11-19T13:14:45.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 3e132b9e-b4e8-4b3e-9549-a074cab27053 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | test                                 |
| os-vol-host-attr:host          | hostgroup@tripleo_ceph#tripleo_ceph  |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | fe7c9e7e23104b839fef51ea7e00d1c8     |
| replication_status             | None                                 |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2020-11-19T13:14:46.000000           |
| user_id                        | 3170f966a536448abc8b20d69fa3c98d     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+

And now a clone of this volume:

(overcloud) [stack@undercloud-0 ~]$ cinder create 3 --name clone_test --source-volid 3e132b9e-b4e8-4b3e-9549-a074cab27053
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-11-19T13:16:03.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | c245e78b-076d-4433-8f7e-74eb74453727 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | clone_test                           |
| os-vol-host-attr:host          | hostgroup@tripleo_ceph#tripleo_ceph  |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | fe7c9e7e23104b839fef51ea7e00d1c8     |
| replication_status             | None                                 |
| size                           | 3                                    |
| snapshot_id                    | None                                 |
| source_volid                   | 3e132b9e-b4e8-4b3e-9549-a074cab27053 |
| status                         | creating                             |
| updated_at                     | 2020-11-19T13:16:03.000000           |
| user_id                        | 3170f966a536448abc8b20d69fa3c98d     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+

Now let's clone one of the original cloned volumes, which were created when max depth was at its default value of 5:

(overcloud) [stack@undercloud-0 ~]$ cinder create 2 --name test3 --source-volid 839194d8-5d50-415d-8466-cf85cf189d22
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-11-19T13:21:33.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 62475098-70d1-459c-9bf2-491b94ba8d89 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | test3                                |
| os-vol-host-attr:host          | hostgroup@tripleo_ceph#tripleo_ceph  |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | fe7c9e7e23104b839fef51ea7e00d1c8     |
| replication_status             | None                                 |
| size                           | 2                                    |
| snapshot_id                    | None                                 |
| source_volid                   | 839194d8-5d50-415d-8466-cf85cf189d22 |
| status                         | creating                             |
| updated_at                     | 2020-11-19T13:21:33.000000           |
| user_id                        | 3170f966a536448abc8b20d69fa3c98d     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+

Looks fine: no errors or tracebacks reported in the c-vol log, and all volumes are available:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name        | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+
| 01f71541-797e-48bf-8121-739c63e15cbd | available | volG_clone6 | 2    | tripleo     | false    |             |
| 12f63e6b-fd35-4679-af2c-7cd3c7fb7dee | available | volA        | 2    | tripleo     | false    |             |
| 34bff18c-ed4d-41b2-8f63-162c5486e6ef | available | volB_clone1 | 2    | tripleo     | false    |             |
| 3e132b9e-b4e8-4b3e-9549-a074cab27053 | available | test        | 2    | tripleo     | false    |             |
| 62475098-70d1-459c-9bf2-491b94ba8d89 | available | test3       | 2    | tripleo     | false    |             |
| 839194d8-5d50-415d-8466-cf85cf189d22 | available | volE_clone4 | 2    | tripleo     | false    |             |
| c245e78b-076d-4433-8f7e-74eb74453727 | available | clone_test  | 3    | tripleo     | false    |             |
| ca84ca86-bbcd-4f1d-935d-b44f364769b8 | available | volD_clone3 | 2    | tripleo     | false    |             |
| d304aa90-0409-4508-bf81-306a9f686f90 | available | volC_clone2 | 2    | tripleo     | false    |             |
| fbdd6c64-c121-45ab-b63e-00eed3996061 | available | volF_clone5 | 2    | tripleo     | false    |             |
+--------------------------------------+-----------+-------------+------+-------------+----------+-------------+

Good to verify.
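For anyone re-running this verification and wanting to double-check the clone chains directly against Ceph, here is a hypothetical helper (not part of cinder) that walks an image's parents and prints its depth. It assumes the python-rados/python-rbd bindings are installed, the Cinder RBD pool is named 'volumes', and uses the test3 image name derived from the UUID above; adjust all three for your deployment:

~~~
# Hypothetical verification helper, not part of cinder. Assumes the
# python-rados/python-rbd bindings, a pool named 'volumes', and a readable
# /etc/ceph/ceph.conf.
import rados
import rbd

POOL = 'volumes'  # assumption: adjust to your cinder RBD pool
IMAGE = 'volume-62475098-70d1-459c-9bf2-491b94ba8d89'  # test3 from above

with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
    with cluster.open_ioctx(POOL) as ioctx:
        depth = 0
        name = IMAGE
        while True:
            with rbd.Image(ioctx, name) as image:
                try:
                    # parent_info() raises ImageNotFound once we reach the
                    # top of the clone chain.
                    _pool, parent, _snap = image.parent_info()
                except rbd.ImageNotFound:
                    break
            name = parent
            depth += 1
        print('clone depth of %s: %d' % (IMAGE, depth))
~~~

If the fix behaves as intended, the reported depth for test3 should respect the new limit, since the driver flattens the new clone when the source chain is already deeper than rbd_max_clone_depth.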
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (openstack-cinder bug fix advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5579