Description of problem:
When using the ThinLVMVolumeDriver driver, Cinder fails to delete snapshots; the traceback in volume.log is as follows:

2013-10-28 14:54:31 ERROR [cinder.openstack.common.rpc.amqp] Exception during message handling
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 430, in _process_data
    rval = self.proxy.dispatch(ctxt, version, method, **args)
  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 133, in dispatch
    return getattr(proxyobj, method)(ctxt, **kwargs)
  File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 524, in delete_snapshot
    {'status': 'error_deleting'})
  File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
    self.gen.next()
  File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 513, in delete_snapshot
    self.driver.delete_snapshot(snapshot_ref)
  File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/lvm.py", line 249, in delete_snapshot
    self._delete_volume(snapshot, is_snapshot=True)
  File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/lvm.py", line 132, in _delete_volume
    self.clear_volume(volume, is_snapshot=is_snapshot)
  File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/lvm.py", line 200, in clear_volume
    raise exception.VolumeBackendAPIException(data=msg)
VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Volume device file path /dev/mapper/cinder--volumes-_snapshot--6956b504--375e--4eec--bcf7--aae91490c219-cow does not exist.

Version-Release number of selected component (if applicable):
openstack-cinder-2013.1.4-2.el6ost.noarch

Steps to Reproduce:
1. Configure Cinder with the ThinLVM driver.
2. Create a volume.
3. Create a snapshot of the volume.
4. Delete the snapshot.

Actual results:
The snapshot is not deleted and is left in the "error_deleting" state.
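For reference, a minimal reproduction with the cinder CLI (the size and placeholder IDs are illustrative, and the driver setting is assumed to point at the ThinLVM driver as in the configuration above):

    # cinder.conf: volume_driver set to the ThinLVM driver (...ThinLVMVolumeDriver)
    cinder create --display-name thin-test 1
    cinder snapshot-create --display-name thin-test-snap <volume-id>
    cinder snapshot-delete <snapshot-id>
    cinder snapshot-list        # snapshot is stuck in "error_deleting"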
Appears to be an unintended consequence of the upstream backport of "0a86ef8 Fix secure delete for thick LVM snapshots" into stable/grizzly. The logic changed from "if the device does not exist, skip wiping" to "if the device does not exist, fail", which is correct for a thick LVM configuration but breaks ThinLVM, since the -cow device will never exist.
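A simplified sketch of the behavior change in clear_volume() (illustrative only, with dev_path standing in for however the driver resolves the device path; this is not the exact upstream diff):

    # Before the backport (simplified): a missing device path meant "nothing to wipe"
    if not os.path.exists(dev_path):
        LOG.warn(_("Volume path does not exist, skipping secure delete"))
        return

    # After 0a86ef8 (simplified): a missing device path is treated as a backend
    # error -- which the ThinLVM snapshot path always hits, because no -cow
    # device is ever created for a thin snapshot
    if not os.path.exists(dev_path):
        msg = _("Volume device file path %s does not exist.") % dev_path
        raise exception.VolumeBackendAPIException(data=msg)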
Eric, does this happen in 4.0/Havana as well?
(In reply to Ayal Baron from comment #2)
> Eric, does this happen in 4.0/Havana as well?

No, Havana and newer skip the volume wipe based on the lvm_type=thin option. But that option is not present in Grizzly, so we need a fix that avoids running this code when volume_driver=...ThinLVMVolumeDriver.
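A minimal sketch of the kind of Grizzly-only guard this implies (not the shipped patch; it assumes ThinLVMVolumeDriver in Grizzly's lvm.py can simply override the wipe, and that it subclasses the iSCSI LVM driver):

    class ThinLVMVolumeDriver(LVMISCSIDriver):
        def clear_volume(self, volume, is_snapshot=False):
            # Thin snapshots have no *-cow device to wipe, and the thin pool
            # returns zeros for unprovisioned blocks, so skip secure delete.
            return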
I can help out on this if needed. I'm not yet familiar with the backport process but I'm happy to give it a go.
The bug wasn't fixed.
Version: openstack-cinder-2013.1.4-3.el6ost.noarch
The error is the same as before.
Yogev: Can you provide any more information? I just tried this w/ the 2013.1.4-3 pkgs and it seems to work for me. (Grizzly wipes w/ shred instead of dd for this snapshot wipe process.)

2013-11-11 10:41:08 INFO [cinder.volume.manager] snapshot snapshot-444c0fd8-fd02-4ae4-9ba9-a50421fd5053: deleting
2013-11-11 10:41:08 DEBUG [cinder.volume.manager] snapshot snapshot-444c0fd8-fd02-4ae4-9ba9-a50421fd5053: deleting
2013-11-11 10:41:08 DEBUG [cinder.utils] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvdisplay cinder-volumes/_snapshot-444c0fd8-fd02-4ae4-9ba9-a50421fd5053
2013-11-11 10:41:09 INFO [cinder.volume.drivers.lvm] Performing secure delete on volume: 444c0fd8-fd02-4ae4-9ba9-a50421fd5053
2013-11-11 10:41:09 DEBUG [cinder.utils] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf shred -n0 -z -s50MiB /dev/mapper/cinder--volumes-_snapshot--444c0fd8--fd02--4ae4--9ba9--a50421fd5053
...
2013-11-11 10:43:29 DEBUG [cinder.utils] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf lvremove -f cinder-volumes/_snapshot-444c0fd8-fd02-4ae4-9ba9-a50421fd5053
My installation configuration is two machines:
1. A VM running Cinder.
2. A physical machine with all of the other components.
The OS is RHEL 6.5. The cinder-volumes VG has a real PV behind it (not the default loop0). If there's anything else you need, please ask.
For some reason, LVM on Yogev's Cinder node is not populating /dev/mapper/ with entries for Cinder snapshots, only volumes. Still looking into why.

My Cinder machine:

# cinder snapshot-list
| 36502cdd-bcea-400d-bfab-e7fbd2400d87 | a41927d7-d801-4a02-b659-f6e87926c578 | available | None | 1 |

# lvs | grep snapshot
  _snapshot-36502cdd-bcea-400d-bfab-e7fbd2400d87 cinder-volumes Vwi-a-tz- 1.00g cinder-volumes-pool volume-a41927d7-d801-4a02-b659-f6e87926c578 0.00

# ls /dev/mapper/*snapshot*
/dev/mapper/cinder--volumes-_snapshot--36502cdd--bcea--400d--bfab--e7fbd2400d87

Your Cinder machine:

# lvs | grep snapshot
  _snapshot-0021eb55-3a62-4f9c-b393-5f6326db3439 cinder-volumes Vwi---tz-k 1.00g cinder-volumes-pool volume-5005b4b7-06dc-4f07-883e-af0b24d9f975
  _snapshot-30a621aa-0799-435a-b43e-993f3a5eebe8 cinder-volumes Vwi---tz-k 1.00g cinder-volumes-pool volume-469ba638-8d7a-4dde-be22-559f1ff5bc5c
  _snapshot-31443b11-4f9a-472f-b59f-03f7cf1d33de cinder-volumes Vwi---tz-k 1.00g cinder-volumes-pool volume-78ec04ac-1860-4aca-9c96-0cf0ddaa7b98
  _snapshot-6d079948-a9e2-41df-a900-6a313d9ba276 cinder-volumes Vwi---tz-k 1.00g cinder-volumes-pool volume-469ba638-8d7a-4dde-be22-559f1ff5bc5c

# ls /dev/mapper/*cinder*
/dev/mapper/cinder--volumes-cinder--volumes--pool
/dev/mapper/cinder--volumes-cinder--volumes--pool_tdata
/dev/mapper/cinder--volumes-cinder--volumes--pool_tmeta
/dev/mapper/cinder--volumes-cinder--volumes--pool-tpool
/dev/mapper/cinder--volumes-volume--469ba638--8d7a--4dde--be22--559f1ff5bc5c
/dev/mapper/cinder--volumes-volume--5005b4b7--06dc--4f07--883e--af0b24d9f975
/dev/mapper/cinder--volumes-volume--78ec04ac--1860--4aca--9c96--0cf0ddaa7b98

# ls /dev/mapper/*snapshot*
ls: cannot access /dev/mapper/*snapshot*: No such file or directory
Notable item from RHEL 6.5 lvm2 changelog: Automatically flag thin snapshots to be skipped during activation.
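That matches the 'k' (skip activation) bit in the lvs attribute string above. For reference, lvm2 in RHEL 6.5 provides switches to override or clear that flag (the commands below are illustrative; the LV name is taken from the output above):

    # 10th attribute character 'k' = activation skip is set on the thin snapshot
    lvs -o lv_name,lv_attr cinder-volumes

    # Activate once while ignoring the skip flag
    lvchange -ay -K cinder-volumes/_snapshot-0021eb55-3a62-4f9c-b393-5f6326db3439

    # Or clear the flag persistently so normal activation works
    lvchange --setactivationskip n cinder-volumes/_snapshot-0021eb55-3a62-4f9c-b393-5f6326db3439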
Verified on version: openstack-cinder-2013.1.4-4.el6ost.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1510.html