Description of problem:
Creating a cloned volume from an empty source volume fails with ERROR when the source is already in use by another clone operation that is still in the 'creating' state.

Steps to Reproduce:
1) cinder create --display-name ba31532a-70d4-4e88-8c62-845c42f31124 2
2) cinder create --source-volid ba31532a-70d4-4e88-8c62-845c42f31124 --display-name clone-of-710cbc8e-7030-45e4-bddb-0dffd16ccac8_itter3 2
3) While the clone from the previous step is still being created, start an additional clone of the same source volume:
4) cinder create --source-volid ba31532a-70d4-4e88-8c62-845c42f31124 --display-name clone-of-710cbc8e-7030-45e4-bddb-0dffd16ccac8_itter3 2

Expected results:
The cloned volume should be created successfully.

Actual results:
ec2923a7-0185-44a7-a854-0fed9e927dc2 | error | clone-of-710cbc8e-7030-45e4-bddb-0dffd16ccac8_itter3 | 2 | None | false | |
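Because cinder create returns as soon as the new volume enters 'creating', the two clone requests only need to be issued back to back to overlap on the backend. A minimal sketch of driving the reproduction from Python is below; the source volume UUID is the one from the steps above, while the helper name and the use of subprocess are my own additions, and it assumes admin credentials are already sourced in the environment (as in the transcripts later in this bug).

    # Hypothetical reproduction driver: issue two clone requests from the same
    # source volume back to back so they overlap while the first is 'creating'.
    import subprocess

    SOURCE = "ba31532a-70d4-4e88-8c62-845c42f31124"  # source volume from the report

    def start_clone(display_name, size_gb=2):
        # Launch 'cinder create --source-volid ...' without waiting for it.
        cmd = ["cinder", "create",
               "--source-volid", SOURCE,
               "--display-name", display_name,
               str(size_gb)]
        return subprocess.Popen(cmd)

    if __name__ == "__main__":
        # Both requests are accepted immediately; the race happens in the driver.
        procs = [start_clone("clone-race-1"), start_clone("clone-race-2")]
        for p in procs:
            p.wait()
        # Check the outcome; before the fix one clone typically lands in 'error'.
        subprocess.call(["cinder", "list"])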
Created attachment 773305 [details] cinder logs.
ProcessExecutionError: Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvcreate -L 2G --name clone-snap-ba31532a-70d4-4e88-8c62-845c42f31124 --snapshot cinder-volumes/volume-ba31532a-70d4-4e88-8c62-845c42f31124
Exit code: 5
Stdout: ''
Stderr: ' Logical volume "clone-snap-ba31532a-70d4-4e88-8c62-845c42f31124" already exists in volume group "cinder-volumes"\n'

When cloning with the LVM driver, an LVM snapshot is created with the name clone-snap-<uuid>, where <uuid> is the ID of the source volume. This means that creating two clones simultaneously from the same source volume will fail: the second lvcreate call tries to create a snapshot LV whose name already exists.
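For illustration only (this is not the upstream patch): deriving the temporary snapshot name from the source volume UUID alone means any two concurrent clones of that source request the same LV name, while folding the destination volume ID into the name is one obvious way to keep them distinct. The helper names and the clone IDs below are hypothetical.

    # Hypothetical sketch of the naming collision; not the actual Cinder LVM code.

    def temp_snap_name_per_source(src_volume_id):
        # Name depends only on the source volume, as in the failing lvcreate call.
        return "clone-snap-%s" % src_volume_id

    def temp_snap_name_per_clone(src_volume_id, dst_volume_id):
        # Folding the destination volume ID in keeps concurrent clones distinct.
        return "clone-snap-%s-%s" % (src_volume_id, dst_volume_id)

    src = "ba31532a-70d4-4e88-8c62-845c42f31124"        # source volume from the report
    dst_a, dst_b = "first-clone-id", "second-clone-id"  # placeholder clone IDs

    # Per-source naming: both clones ask lvcreate for the same LV name, so the
    # second call fails with "Logical volume ... already exists" (exit code 5).
    assert temp_snap_name_per_source(src) == temp_snap_name_per_source(src)

    # Per-clone naming: each clone gets its own temporary snapshot LV.
    assert temp_snap_name_per_clone(src, dst_a) != temp_snap_name_per_clone(src, dst_b)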
I tested this with gluster (fuse) and we are failing to create more than one clone of the same volume at a time:

openstack-cinder-2013.2-8.el6ost.noarch

2013-12-11 19:39:54.425 3860 TRACE cinder.volume.flows.create_volume Stderr: 'Error: Trying to create an image with the same filename as the backing file\n'
2013-12-11 19:39:54.425 3860 TRACE cinder.volume.flows.create_volume
2013-12-11 19:39:54.428 3860 ERROR cinder.openstack.common.rpc.amqp [req-517c7870-adc9-4cfd-a50f-eb0a9a03ac12 674776fc1eea47718301aeacbab072b3 5cea8d9e58c841dfb03b1cda755b539d] Exception during message handling
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp **args)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp return getattr(proxyobj, method)(ctxt, **kwargs)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 809, in wrapper
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp return func(self, *args, **kwargs)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 257, in create_volume
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp flow.run(context.elevated())
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/taskflow/decorators.py", line 105, in wrapper
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp return f(self, *args, **kwargs)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/taskflow/patterns/linear_flow.py", line 232, in run
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp run_it(r)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/taskflow/patterns/linear_flow.py", line 212, in run_it
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp self.rollback(context, cause)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp self.gen.next()
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/taskflow/patterns/linear_flow.py", line 172, in run_it
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp result = runner(context, *args, **kwargs)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/taskflow/utils.py", line 260, in __call__
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp self.result = self.task(*args, **kwargs)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/volume/flows/create_volume/__init__.py", line 1499, in __call__
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp **volume_spec)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/volume/flows/create_volume/__init__.py", line 1339, in _create_from_source_volume
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp model_update = self.driver.create_cloned_volume(volume_ref, srcvol_ref)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 188, in create_cloned_volume
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp self.create_snapshot(temp_snapshot)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/openstack/common/lockutils.py", line 247, in inner
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp retval = f(*args, **kwargs)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 460, in create_snapshot
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp self._create_snapshot(snapshot, path_to_disk, snap_id)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 500, in _create_snapshot
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp new_snap_path)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 478, in _create_qcow2_snap_file
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp self._execute(*command, run_as_root=True)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 143, in execute
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp return processutils.execute(*cmd, **kwargs)
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py", line 173, in execute
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp cmd=' '.join(cmd))
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp ProcessExecutionError: Unexpected error while running command.
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img create -f qcow2 -o backing_file=/var/lib/cinder/mnt/249458a2755cd0a9f302b9d81eb3f35d/volume-c5bfe327-59f2-4936-846b-a0a88b8c6687.tmp-snap-c5bfe327-59f2-4936-846b-a0a88b8c6687 /var/lib/cinder/mnt/249458a2755cd0a9f302b9d81eb3f35d/volume-c5bfe327-59f2-4936-846b-a0a88b8c6687.tmp-snap-c5bfe327-59f2-4936-846b-a0a88b8c6687
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp Exit code: 1
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp Stdout: ''
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp Stderr: 'Error: Trying to create an image with the same filename as the backing file\n'
2013-12-11 19:39:54.428 3860 TRACE cinder.openstack.common.rpc.amqp
(In reply to Dafna Ron from comment #6)
> I tested this with gluster (fuse) and we are failing to create more than one
> clone of the same volume at a time:
>
> openstack-cinder-2013.2-8.el6ost.noarch

This bug was specific to the LVM driver: the temporary snapshot name it uses overlaps between concurrent clones of the same source volume. The GlusterFS failure above is a separate issue; the fix for bug 1021966 added locking around simultaneous GlusterFS driver operations.
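The general shape of that kind of fix, serializing the steps that create and remove the per-source temporary snapshot so two clones of the same volume cannot race on the same name, can be sketched as below. This is a generic illustration using hypothetical helper names and a plain threading.Lock, not the actual bug 1021966 patch; the real driver goes through Cinder's lockutils module, which is visible in the traceback above.

    # Generic sketch of per-source-volume serialization; hypothetical helper
    # names, not the actual GlusterFS driver patch from bug 1021966.
    import threading
    import time
    from collections import defaultdict

    _clone_locks = defaultdict(threading.Lock)  # one lock per source volume ID

    def _create_temp_snapshot(src_id):
        # Stand-in for creating the temporary snapshot (snapshot LV or qcow2
        # overlay); its name is derived from the source volume only.
        time.sleep(0.1)
        return "tmp-snap-%s" % src_id

    def _delete_temp_snapshot(name):
        pass  # stand-in for removing the temporary snapshot again

    def create_cloned_volume(dst_id, src_id):
        # Holding a per-source lock means the temporary snapshot of one clone
        # is created and cleaned up before the next clone of the same source
        # starts, so the identical names can no longer collide.
        with _clone_locks[src_id]:
            snap = _create_temp_snapshot(src_id)
            try:
                print("copying %s into new volume %s" % (snap, dst_id))
            finally:
                _delete_temp_snapshot(snap)

    if __name__ == "__main__":
        src = "c5bfe327-59f2-4936-846b-a0a88b8c6687"  # source volume from the logs
        threads = [threading.Thread(target=create_cloned_volume,
                                    args=("dafna%d" % i, src)) for i in (1, 2, 3)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()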
Created attachment 835408 [details] log
[root@cougar06 ~(keystone_admin)]# cinder create --source-volid c5bfe327-59f2-4936-846b-a0a88b8c6687 --display-name dafna1 35
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2013-12-11T17:39:45.867858           |
| display_description | None                                 |
| display_name        | dafna1                               |
| id                  | 0df32548-c20d-4323-94a9-ff2ba2759f7b |
| metadata            | {}                                   |
| size                | 35                                   |
| snapshot_id         | None                                 |
| source_volid        | c5bfe327-59f2-4936-846b-a0a88b8c6687 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+
[root@cougar06 ~(keystone_admin)]# cinder create --source-volid c5bfe327-59f2-4936-846b-a0a88b8c6687 --display-name dafna2 35
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2013-12-11T17:39:50.285244           |
| display_description | None                                 |
| display_name        | dafna2                               |
| id                  | 9094c4fb-6fc0-45a9-86ff-4ebce272ae69 |
| metadata            | {}                                   |
| size                | 35                                   |
| snapshot_id         | None                                 |
| source_volid        | c5bfe327-59f2-4936-846b-a0a88b8c6687 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+
[root@cougar06 ~(keystone_admin)]# cinder create --source-volid c5bfe327-59f2-4936-846b-a0a88b8c6687 --display-name dafna3 35
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| created_at          | 2013-12-11T17:39:53.793834           |
| display_description | None                                 |
| display_name        | dafna3                               |
| id                  | 9285e8fe-b2d1-4787-a174-3d5c4cc4f50d |
| metadata            | {}                                   |
| size                | 35                                   |
| snapshot_id         | None                                 |
| source_volid        | c5bfe327-59f2-4936-846b-a0a88b8c6687 |
| status              | creating                             |
| volume_type         | None                                 |
+---------------------+--------------------------------------+
[root@cougar06 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 0df32548-c20d-4323-94a9-ff2ba2759f7b | creating  | dafna1       | 35   | None        | false    |             |
| 609d201e-f34a-4f7f-953c-3c1b5c087873 | available | vol3         | 8    | None        | false    |             |
| 8e113c6f-f30c-4f6a-b511-2af507d7e758 | available | vol5         | 6    | None        | false    |             |
| 8ff451ff-2c35-467b-9104-3a16357ab66c | available | vol1         | 10   | None        | false    |             |
| 9094c4fb-6fc0-45a9-86ff-4ebce272ae69 | error     | dafna2       | 35   | None        | false    |             |
| 9285e8fe-b2d1-4787-a174-3d5c4cc4f50d | error     | dafna3       | 35   | None        | false    |             |
| 9a577f89-573b-4e63-a7f1-5f2b6bb33141 | available | gluster      | 12   | None        | false    |             |
| b4c7df4e-0508-4e4f-86ff-6baee6975123 | available | vol4         | 7    | None        | false    |             |
| c5bfe327-59f2-4936-846b-a0a88b8c6687 | available | dafna        | 30   | None        | false    |             |
| eafb3095-5b12-4ebb-97d9-212e95762323 | available | vol2         | 9    | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
Can we move the Gluster driver issues to bug 1021966 or a new bug? They aren't related to this bugzilla.
Tested this with LVM and it is working correctly: 2 cloned volumes (clone-koko1 and clone-koko2) were created at the same time.

[root@cougar08 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 1d6c15af-26ad-4604-8b44-9e4586878cea | available | FEDORA20     | 10   | None        | true     |             |
| 407fbc66-ae0a-4c7b-a6a5-e8667bf298fd | creating  | clone-koko2  | 2    | None        | false    |             |
| da47f0d8-4dad-493b-9a2e-34172332fcc1 | available | koko1        | 2    | None        | false    |             |
| e48ffed4-dcc8-4f15-83fc-1b1d9575a25a | creating  | clone-koko1  | 2    | None        | false    |             |
| eea75cdb-c904-48b1-b2f6-49a75ff5f262 | error     | FAST_STORAGE | 60   | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@cougar08 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 1d6c15af-26ad-4604-8b44-9e4586878cea | available | FEDORA20     | 10   | None        | true     |             |
| 407fbc66-ae0a-4c7b-a6a5-e8667bf298fd | available | clone-koko2  | 2    | None        | false    |             |
| da47f0d8-4dad-493b-9a2e-34172332fcc1 | available | koko1        | 2    | None        | false    |             |
| e48ffed4-dcc8-4f15-83fc-1b1d9575a25a | available | clone-koko1  | 2    | None        | false    |             |
| eea75cdb-c904-48b1-b2f6-49a75ff5f262 | error     | FAST_STORAGE | 60   | None        | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@cougar08 ~(keystone_admin)]#
After creating an origin volume I tried to trigger this with a for loop creating 5 clones concurrently:

for a in 1 2 3 4 5; do cinder create --source-volid 5c48b061-7d70-49d0-8106-1ed70b93ca77 --display_name attempt$a 1; done

This completed fine (I had 5 dd processes running as expected), so I'm moving this to VERIFIED.

Using: openstack-cinder-2013.2.2-1.el6ost.noarch
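For completeness, the same check can be scripted instead of watching the dd processes. Below is a small polling sketch; the helper names are hypothetical, and it simply shells out to the same cinder CLI and parses the 'cinder show' table until every clone has left 'creating', reporting whether each one ended up 'available' or 'error'.

    # Hypothetical verification helper: poll the cinder CLI until every clone
    # has left the 'creating' state, then report the final status of each one.
    import subprocess
    import time

    def volume_status(volume_id):
        # Parse the 'status' row out of the 'cinder show <id>' table output.
        out = subprocess.check_output(["cinder", "show", volume_id]).decode()
        for line in out.splitlines():
            cells = [c.strip() for c in line.strip().strip("|").split("|")]
            if len(cells) >= 2 and cells[0] == "status":
                return cells[1]
        return "unknown"

    def wait_for_clones(volume_ids, timeout=600):
        deadline = time.time() + timeout
        pending = set(volume_ids)
        while pending and time.time() < deadline:
            for vid in sorted(pending):
                status = volume_status(vid)
                if status != "creating":
                    print(vid, "->", status)  # expect 'available', never 'error'
                    pending.discard(vid)
            time.sleep(5)
        return not pending  # True if every clone finished within the timeout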
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0213.html