Description of problem:
I'm currently testing Icehouse and am failing to successfully tear down volumes (LVMISCSIDriver). My tests involve creating a volume, creating a snapshot, deleting the snapshot, and deleting the volume. These tests currently fail when tearing down the volume. The same tests pass on an Ubuntu Precise instance (1:2014.1~rc2-0ubuntu1~cloud0), which leads me to believe it may be an LVM configuration issue or similar.

When tearing down the volume, I see the following in my volume.log:

2014-04-15 12:31:44.313 2750 TRACE cinder.openstack.common.rpc.amqp VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Volume device file path /dev/mapper/cinder--volumes-volume--9646c9ad--73c8--4eb8--ab09--519357ac997e does not exist.

I believe the issue is that volumes are not being activated upon creation, though I cannot replicate that behaviour when I run lvcreate manually. On a side note, snapshots are explicitly activated when they're created, so those are torn down as expected (I ran strace against cinder-volume and only see lvchange commands run against the snapshots).

Version-Release number of selected component (if applicable):
openstack-cinder-2014.1-0.4.b3.el6.noarch

How reproducible:
Using LVMISCSIDriver:
1. create volume
2. create snapshot
3. delete snapshot
4. delete volume

Actual results:
Snapshot deletes but volume goes into 'error_deleting' status.

Expected results:
Volume is deleted.

Additional info:
I tried enabling use_lvmetad=1 in /etc/lvm/lvm.conf and running lvm2-lvmetad, which seems to resolve the issue; however, I'm not entirely sure that is the correct way to handle this.
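For anyone puzzled by the doubled dashes in the failing path: device-mapper escapes '-' in VG/LV names by doubling it, then joins the two names with a single '-'. A minimal sketch of that mangling (helper name is mine, not Cinder's or LVM's):

```python
def mapper_path(vg_name, lv_name):
    """Return the /dev/mapper device path device-mapper would use for an LV.

    device-mapper doubles any '-' inside the VG or LV name, then joins the
    two escaped names with a single '-'. This node only exists while the LV
    is active, which is why the delete fails on an inactive origin volume.
    """
    escape = lambda name: name.replace('-', '--')
    return '/dev/mapper/%s-%s' % (escape(vg_name), escape(lv_name))

print(mapper_path('cinder-volumes',
                  'volume-9646c9ad-73c8-4eb8-ab09-519357ac997e'))
# → /dev/mapper/cinder--volumes-volume--9646c9ad--73c8--4eb8--ab09--519357ac997e
```

That reproduces the exact path from the traceback above, so the error really does mean "this LV has no active device node" rather than "this LV is gone".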
looks like duplicate of: https://bugzilla.redhat.com/show_bug.cgi?id=1084046
Confirmed on upstream, cinder-volume log output:

2014-04-22 19:02:10.140 ERROR cinder.volume.drivers.lvm [req-5d809256-1fd4-404a-9cd2-9a78d42a495d 0c584884e8994df8bdf7449442f25972 d5ab5c34a8534b26ae7b7a7882c51b01] Volume device file path /dev/mapper/stack--volumes-volume--3b5b8d78--e3f2--4783--8fce--2f7a8adf3201 does not exist.
2014-04-22 19:02:10.158 DEBUG cinder.openstack.common.lockutils [req-5d809256-1fd4-404a-9cd2-9a78d42a495d 0c584884e8994df8bdf7449442f25972 d5ab5c34a8534b26ae7b7a7882c51b01] Released file lock "3b5b8d78-e3f2-4783-8fce-2f7a8adf3201-delete_volume" at /opt/stack/data/cinder/cinder-3b5b8d78-e3f2-4783-8fce-2f7a8adf3201-delete_volume for method "lvo_inner2"... from (pid=1950) inner /opt/stack/cinder/cinder/openstack/common/lockutils.py:239
2014-04-22 19:02:10.158 ERROR oslo.messaging.rpc.dispatcher [req-5d809256-1fd4-404a-9cd2-9a78d42a495d 0c584884e8994df8bdf7449442f25972 d5ab5c34a8534b26ae7b7a7882c51b01] Exception during message handling: Bad or unexpected response from the storage volume backend API: Volume device file path /dev/mapper/stack--volumes-volume--3b5b8d78--e3f2--4783--8fce--2f7a8adf3201 does not exist.
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/manager.py", line 144, in lvo_inner1
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     return lvo_inner2(inst, context, volume_id, **kwargs)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/openstack/common/lockutils.py", line 233, in inner
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     retval = f(*args, **kwargs)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/manager.py", line 143, in lvo_inner2
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     return f(*_args, **_kwargs)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/manager.py", line 416, in delete_volume
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     {'status': 'error_deleting'})
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/openstack/common/excutils.py", line 68, in __exit__
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/manager.py", line 405, in delete_volume
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     self.driver.delete_volume(volume_ref)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 233, in delete_volume
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     self._delete_volume(volume)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 128, in _delete_volume
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     self._clear_volume(volume, is_snapshot)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 155, in _clear_volume
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     raise exception.VolumeBackendAPIException(data=msg)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Volume device file path /dev/mapper/stack--volumes-volume--3b5b8d78--e3f2--4783--8fce--2f7a8adf3201 does not exist.
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher
2014-04-22 19:02:10.167 ERROR oslo.messaging._drivers.common [req-5d809256-1fd4-404a-9cd2-9a78d42a495d 0c584884e8994df8bdf7449442f25972 d5ab5c34a8534b26ae7b7a7882c51b01] Returning exception Bad or unexpected response from the storage volume backend API: Volume device file path /dev/mapper/stack--volumes-volume--3b5b8d78--e3f2--4783--8fce--2f7a8adf3201 does not exist. to caller
2014-04-22 19:02:10.167 ERROR oslo.messaging._drivers.common [req-5d809256-1fd4-404a-9cd2-9a78d42a495d 0c584884e8994df8bdf7449442f25972 d5ab5c34a8534b26ae7b7a7882c51b01] ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n incoming.message))\n', ' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', ' File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\n result = getattr(endpoint, method)(ctxt, **new_args)\n', ' File "/opt/stack/cinder/cinder/volume/manager.py", line 144, in lvo_inner1\n return lvo_inner2(inst, context, volume_id, **kwargs)\n', ' File "/opt/stack/cinder/cinder/openstack/common/lockutils.py", line 233, in inner\n retval = f(*args, **kwargs)\n', ' File "/opt/stack/cinder/cinder/volume/manager.py", line 143, in lvo_inner2\n return f(*_args, **_kwargs)\n', ' File "/opt/stack/cinder/cinder/volume/manager.py", line 416, in delete_volume\n {\'status\': \'error_deleting\'})\n', ' File "/opt/stack/cinder/cinder/openstack/common/excutils.py", line 68, in __exit__\n six.reraise(self.type_, self.value, self.tb)\n', ' File "/opt/stack/cinder/cinder/volume/manager.py", line 405, in delete_volume\n self.driver.delete_volume(volume_ref)\n', ' File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 233, in delete_volume\n self._delete_volume(volume)\n', ' File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 128, in _delete_volume\n self._clear_volume(volume, is_snapshot)\n', ' File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 155, in _clear_volume\n raise exception.VolumeBackendAPIException(data=msg)\n', 'VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Volume device file path /dev/mapper/stack--volumes-volume--3b5b8d78--e3f2--4783--8fce--2f7a8adf3201 does not exist.\n']
@matt: Your assessment of the problem is correct. When a snapshot is deleted, the origin volume is placed into an 'inactive' state, and the volume delete then fails because it's looking for the device file '/dev/mapper/$volume-name', which no longer exists while the volume is inactive. I'll start working on a patch, thanks for the info!
Out of curiosity I tested on Ubuntu to see the difference in behaviour there. The LVM commands issued are identical, but a snapshot delete on Fedora leaves the origin volume inactive, whereas it remains in the active state on Ubuntu. I would expect a volume that was active prior to a snapshot delete to remain active afterwards. This makes me wonder if this is an LVM behaviour/configuration issue rather than a Cinder bug. Maybe it's better to fix this in LVM?
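For anyone wanting to check the origin LV's state after the snapshot delete themselves: the fifth character of the lv_attr field reported by lvs is the activation state ('a' = active). A small sketch (helper names are mine; the lvs flags are standard):

```python
import subprocess

def is_active(lv_attr):
    """Return True if an lv_attr string (e.g. 'owi-a-----') shows the LV
    as active. The fifth character of lv_attr is the state field; 'a'
    means active, '-' means inactive."""
    return len(lv_attr) >= 5 and lv_attr[4] == 'a'

def lv_is_active(vg_name, lv_name):
    """Query LVM for the attr string of one LV and parse it."""
    out = subprocess.check_output(
        ['lvs', '--noheadings', '-o', 'lv_attr',
         '%s/%s' % (vg_name, lv_name)])
    return is_active(out.strip().decode())

print(is_active('owi-a-'))   # active origin volume → True
print(is_active('owi---'))   # origin deactivated by the snapshot delete → False
```

Running lv_is_active() on the origin volume before and after the snapshot delete should show exactly the Fedora/Ubuntu difference described above.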
Hi Jon, I did think it was perhaps something with LVM itself, but what confuses me is that our Havana tests still pass doing the same sequence of volume creation, snapshot creation, snapshot deletion, and volume deletion. I've just verified that the Havana and Icehouse instances are running the same package versions of LVM, kernel, etc. Please give me a shout if I can provide any further info. Kind Regards, Matt
Also, are you guys awaiting info from me? :)
Hi Matt, That is odd indeed, I will try this test as well to see if I can make any sense of it. I've got everything I need at the moment, I'll let you know if anything else comes up. Thanks for all the info so far.
Hi Jon, were you able to find anything here?
Hey Matt, sorry for the delay - I was at an OpenStack conference last week and have a sizable backlog to sort through at the moment. I'll update this bug as soon as I get a chance to circle back.
The patch here: https://review.openstack.org/#/c/94828/ fixes this bug. Once upstream review is complete and the patch is merged, I'll pull it into our Icehouse package.
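The general shape of the fix is to explicitly re-activate the origin LV before clearing and removing it, so the /dev/mapper node exists again. A minimal sketch of that idea (function names are illustrative, not the actual code from the review; only the lvchange/lvremove invocations are real LVM CLI usage):

```python
import subprocess

def lvchange_activate_cmd(vg_name, lv_name):
    """Build the command that activates an LV; `lvchange -a y` recreates
    the /dev/mapper device node for an inactive LV."""
    return ['lvchange', '-a', 'y', '%s/%s' % (vg_name, lv_name)]

def activate_lv(vg_name, lv_name):
    subprocess.check_call(lvchange_activate_cmd(vg_name, lv_name))

def delete_volume(vg_name, lv_name):
    # Re-activate first so the clear step can find the device file,
    # then remove the LV as before.
    activate_lv(vg_name, lv_name)
    # ... securely zero the device here (omitted) ...
    subprocess.check_call(['lvremove', '-f', '%s/%s' % (vg_name, lv_name)])

print(lvchange_activate_cmd('stack-volumes', 'volume-3b5b8d78'))
```

This makes the delete path independent of whatever activation state a prior snapshot delete left the origin in, which is why it works on both the Fedora and Ubuntu LVM behaviours seen earlier in this bug.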
Fixed In Version: openstack-cinder-2014.1-4.el7ost available in 2014-06-04.1 puddle
verified in version: openstack-cinder-2014.1-4.el7ost.noarch python-cinder-2014.1-4.el7ost.noarch python-cinderclient-1.0.8-1.el7ost.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-0852.html