This bug was initially created as a copy of Bug #1889894

Description of problem:
When cinder.conf sets rbd_max_clone_depth=1, cinder-volume fails. cinder.volume.manager does not appear to be able to handle an updated, smaller value of rbd_max_clone_depth: there is no code path that calls flatten for existing RBD images whose clone chains already exceed the new depth.

In this instance, the customer updated:

rbd_max_clone_depth = 1  # default: 5

REF: https://docs.openstack.org/cinder/queens/sample_config.html

# Maximum number of nested volume clones that are taken before a flatten
# occurs. Set to 0 to disable cloning. (integer value)
#rbd_max_clone_depth = 5

Version-Release number of selected component (if applicable):
Distro:
[redhat-release] Red Hat Enterprise Linux Server release 7.8 (Maipo)
[rhosp-release] Red Hat OpenStack Platform release 13.0.12 (Queens)
[os-release] Red Hat Cloud Infrastructure 7.8 (Maipo)
openstack-cinder-12.0.10-11.el7ost.noarch    Wed Jul 22 17:30:43 2020
puppet-cinder-12.4.1-7.el7ost.noarch         Wed Jul 22 17:29:05 2020
python2-cinderclient-3.5.0-2.el7ost.noarch   Wed Jul 22 17:29:19 2020
python-cinder-12.0.10-11.el7ost.noarch       Wed Jul 22 17:30:26 2020

How reproducible:
Change #rbd_max_clone_depth = 5 to rbd_max_clone_depth = 1

Actual results:
~~~
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager Traceback (most recent call last):
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/taskflow/engines/action_engine/executor.py", line 53, in _execute_task
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     result = task.execute(**arguments)
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 1034, in execute
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     context, volume, **volume_spec)
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/flows/manager/create_volume.py", line 492, in _create_from_source_volume
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     model_update = self.driver.create_cloned_volume(volume, srcvol_ref)
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 614, in create_cloned_volume
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     depth = self._get_clone_depth(client, src_name)
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 548, in _get_clone_depth
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     return self._get_clone_depth(client, parent, depth + 1)
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 548, in _get_clone_depth
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     return self._get_clone_depth(client, parent, depth + 1)
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/rbd.py", line 546, in _get_clone_depth
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager     (self.configuration.rbd_max_clone_depth))
2020-10-20 11:22:33.012 55 ERROR cinder.volume.manager Exception: clone depth exceeds limit of 1
~~~

Expected results:
The operation succeeds without raising an exception in the cinder volume manager.
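To illustrate the failure mode in the traceback, here is a simplified Python model of the recursive depth check in `_get_clone_depth` (cinder/volume/drivers/rbd.py). This is a sketch, not the actual driver code: the real driver walks RBD parent images through the Ceph client, while here a plain dict stands in for the parent chain.

```python
# Simplified model of _get_clone_depth as seen in the traceback.
# `parents` maps a volume name to its clone parent (a stand-in for
# the RBD parent chain the real driver queries via the Ceph client).

def get_clone_depth(parents, volume, max_clone_depth, depth=0):
    """Count ancestors of `volume`; raise once the configured limit is exceeded."""
    if depth > max_clone_depth:
        # Failure mode from the bug: chains built under the old, larger
        # limit already exceed the new one, so cloning from them fails.
        raise Exception("clone depth exceeds limit of %d" % max_clone_depth)
    parent = parents.get(volume)
    if parent is None:
        return depth
    return get_clone_depth(parents, parent, max_clone_depth, depth + 1)

# A chain built while rbd_max_clone_depth was 5: volC is a clone of
# volB, which is a clone of volA.
parents = {"volC": "volB", "volB": "volA"}

print(get_clone_depth(parents, "volC", max_clone_depth=5))  # 2 - within the limit
try:
    get_clone_depth(parents, "volC", max_clone_depth=1)
except Exception as e:
    print(e)  # clone depth exceeds limit of 1
```

Because the check raises instead of flattening, lowering rbd_max_clone_depth below the depth of an existing chain makes every clone from that chain fail, which is exactly the traceback above.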
Eric,

There is more to verifying this bz than just setting cinder.conf's rbd_max_clone_depth = 1, restarting c-vol, and checking that I don't get an error in the c-vol log, right? Should I also test against pre-existing clones? If I understand correctly, the clone depth is the length of a volume's chain of clones, so I should build a chain more than one clone deep before reducing the default rbd_max_clone_depth value from 5 to 1, then clone again and hopefully find no traceback messages in the c-vol log. Should I re-test anything after I change the setting?
Verified on:
openstack-cinder-15.3.1-5.el8ost

On a preexisting system, rbd_max_clone_depth was left at the default 5, and I had created 6 volumes, each one cloned from the previous volume:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 015b6c15-6134-4e3b-a747-157fc7c85ec4 | available | volB | 1    | tripleo     | false    |             |
| 2dbbdd5d-ff22-4fe9-9861-9ca8b39285d8 | available | volC | 1    | tripleo     | false    |             |
| 3f9356de-e3ab-4d0b-a2c9-7d02bc26a68c | available | volF | 1    | tripleo     | false    |             |
| 73a45018-6664-491a-b754-9c59be267807 | available | volE | 1    | tripleo     | false    |             |
| bc1896a6-6fc0-4f54-a89f-ff0e68ec4b8f | available | volD | 1    | tripleo     | false    |             |
| f135f778-fab3-44d1-901d-22c0e1330919 | available | volA | 1    | tripleo     | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

Now let's reduce rbd_max_clone_depth to 1 and see what happens:

[root@controller-2 ~]# grep max_clone /var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf
#gpfs_max_clone_depth = 0
#rbd_max_clone_depth = 5
rbd_max_clone_depth = 1

Now let's create a test volume:

(overcloud) [stack@undercloud-0 ~]$ cinder create 1 --name volG
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-11-22T13:16:46.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 1789d8ca-872b-46c8-8da4-1fca32bf2bda |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | volG                                 |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 23cfee0f63b44e31a44761d5da636209     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | cdb4cc954bc14b1db71b5b14fe87fa2c     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+

Let's clone the above volume:

(overcloud) [stack@undercloud-0 ~]$ cinder create 1 --name volH --source-volid 1789d8ca-872b-46c8-8da4-1fca32bf2bda
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-11-22T13:17:52.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | d529ee35-a698-49ce-ac02-1d63553f128c |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | volH                                 |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 23cfee0f63b44e31a44761d5da636209     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | 1789d8ca-872b-46c8-8da4-1fca32bf2bda |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | cdb4cc954bc14b1db71b5b14fe87fa2c     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+

Now let's clone a volume that we had created before the change:

(overcloud) [stack@undercloud-0 ~]$ cinder create 1 --name volI --source-volid 2dbbdd5d-ff22-4fe9-9861-9ca8b39285d8
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2020-11-22T13:19:01.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | cf8155e6-f35a-49df-9977-c04b3f61ce61 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | volI                                 |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 23cfee0f63b44e31a44761d5da636209     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | 2dbbdd5d-ff22-4fe9-9861-9ca8b39285d8 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | cdb4cc954bc14b1db71b5b14fe87fa2c     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+

All volumes are available, and there are no errors or tracebacks in the c-vol log; looks good to verify.

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 015b6c15-6134-4e3b-a747-157fc7c85ec4 | available | volB | 1    | tripleo     | false    |             |
| 1789d8ca-872b-46c8-8da4-1fca32bf2bda | available | volG | 1    | tripleo     | false    |             |
| 2dbbdd5d-ff22-4fe9-9861-9ca8b39285d8 | available | volC | 1    | tripleo     | false    |             |
| 3f9356de-e3ab-4d0b-a2c9-7d02bc26a68c | available | volF | 1    | tripleo     | false    |             |
| 73a45018-6664-491a-b754-9c59be267807 | available | volE | 1    | tripleo     | false    |             |
| bc1896a6-6fc0-4f54-a89f-ff0e68ec4b8f | available | volD | 1    | tripleo     | false    |             |
| cf8155e6-f35a-49df-9977-c04b3f61ce61 | available | volI | 1    | tripleo     | false    |             |
| d529ee35-a698-49ce-ac02-1d63553f128c | available | volH | 1    | tripleo     | false    |             |
| f135f778-fab3-44d1-901d-22c0e1330919 | available | volA | 1    | tripleo     | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

Plus we also see the flattening happening:

3644:2020-11-22 13:14:35.471 7 DEBUG oslo_service.service [req-f568b256-eef5-4749-9e8d-bf2f1108c186 - - - - -] backend_defaults.rbd_max_clone_depth = 1 log_opt_values /usr/lib/python3.6/site-packages/oslo_config/cfg.py:2589
3788:2020-11-22 13:19:02.831 39 INFO cinder.volume.drivers.rbd [req-350073f0-c2e2-4ce3-9063-0ab74a96119e cdb4cc954bc14b1db71b5b14fe87fa2c 23cfee0f63b44e31a44761d5da636209 - default default] maximum clone depth (1) has been reached - flattening dest volume

Good to verify.
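The fixed behavior seen in the log line ("maximum clone depth (1) has been reached - flattening dest volume") can be modeled in a few lines of Python. This is a simplified sketch under the assumption that flattening detaches the new clone from its parent chain; the real driver does this with an RBD flatten operation, while here a dict again stands in for the parent relationships.

```python
# Simplified model of the fixed clone path: instead of raising when the
# configured depth is exceeded, the new clone is flattened (detached from
# its parent chain), so chains never grow past rbd_max_clone_depth.

def clone_depth(parents, volume):
    """Length of the volume's parent chain."""
    depth = 0
    while volume in parents:
        volume = parents[volume]
        depth += 1
    return depth

def create_cloned_volume(parents, src, dest, max_clone_depth):
    parents[dest] = src  # COW clone of the source
    if clone_depth(parents, dest) > max_clone_depth:
        # "maximum clone depth (N) has been reached - flattening dest volume"
        del parents[dest]  # flatten: dest becomes a standalone image

# Chain built under the old limit of 5: volC -> volB -> volA.
parents = {"volC": "volB", "volB": "volA"}

# Cloning volC with rbd_max_clone_depth = 1 no longer fails; the clone
# is simply flattened.
create_cloned_volume(parents, "volC", "volI", max_clone_depth=1)
print(clone_depth(parents, "volI"))  # 0 - volI was flattened, no exception
```

This matches the verification above: cloning volC (which already had a chain deeper than the new limit) succeeds, and the c-vol log shows the flatten instead of a traceback.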
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 16.1.3 bug fix and enhancement advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2020:5413
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days