Bug 1087886 - Unable to tear down volumes when using LVMISCSIDriver
Summary: Unable to tear down volumes when using LVMISCSIDriver
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 5.0 (RHEL 7)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 5.0 (RHEL 7)
Assignee: Jon Bernard
QA Contact: Dafna Ron
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-04-15 14:05 UTC by Matt Thompson
Modified: 2016-04-26 22:50 UTC (History)
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
In previous releases, deleting a snapshot of an attached volume also incorrectly deactivated the origin volume, rendering it unusable by the instance and forcing the user to manually reactivate it. This was caused by an incorrect routine in the commands used for volume snapshot deletion. With this release, the routine has been corrected, ensuring that attached volumes are no longer deactivated when any of their snapshots are deleted.
Clone Of:
Environment:
Last Closed: 2014-07-08 15:31:13 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1270192 0 None None None Never
Launchpad 1317075 0 None None None Never
OpenStack gerrit 94828 0 None None None Never
Red Hat Product Errata RHEA-2014:0852 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform Enhancement - Block Storage 2014-07-08 19:22:44 UTC

Description Matt Thompson 2014-04-15 14:05:12 UTC
Description of problem:

I'm currently testing Icehouse and am failing to successfully tear down volumes (LVMISCSIDriver).  My tests involve creating a volume, creating a snapshot, deleting the snapshot, and deleting the volume.  These tests currently fail when tearing down the volume.  The same tests pass on an Ubuntu Precise instance (1:2014.1~rc2-0ubuntu1~cloud0), which leads me to believe it may be an LVM configuration issue or similar.

When tearing down the volume, I see the following in my volume.log:

2014-04-15 12:31:44.313 2750 TRACE cinder.openstack.common.rpc.amqp VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Volume device file path /dev/mapper/cinder--volumes-volume--9646c9ad--73c8--4eb8--ab09--519357ac997e does not exist.

I believe the issue here is that volumes are not being activated upon creation, though I cannot replicate that behaviour when I run lvcreate manually.  On a side note, snapshots are explicitly activated when they're created, and therefore those are torn down as expected (I ran strace against cinder-volume and only see lvchange commands run against the snaps).
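The activation state described above can be inspected with `lvs -o lv_name,lv_attr`; the fifth character of the `lv_attr` field is the state flag (`a` = active, `-` = inactive).  A minimal sketch of checking that flag — the attr string and VG name below are illustrative samples, not taken from this system:

```shell
# On a real host (VG name "cinder-volumes" is an assumption):
#   lvs -o lv_name,lv_attr cinder-volumes
# The 5th character of lv_attr is the activation state: 'a' = active.
attr="-wi-a-----"        # sample lv_attr for an active linear LV
state="${attr:4:1}"      # extract the 5th character (offset 4)
if [ "$state" = "a" ]; then echo active; else echo inactive; fi
# -> active
```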

Version-Release number of selected component (if applicable):

openstack-cinder-2014.1-0.4.b3.el6.noarch

How reproducible:

Using LVMISCSIDriver:

1. create volume
2. create snapshot
3. delete snapshot
4. delete volume

Actual results:

Snapshot deletes but volume goes into 'error_deleting' status.

Expected results:

Volume is deleted.

Additional info:

I tried enabling use_lvmetad=1 in /etc/lvm/lvm.conf and running lvm2-lvmetad which seems to resolve the issue, however I'm not entirely sure if that is the correct way to be handling this.
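For reference, the workaround described amounts to one config edit plus starting the daemon.  A sketch, with the substitution demonstrated on a sample line rather than the real file (the sysvinit `service` invocation is an assumption based on the el6 package above):

```shell
# On the real host:
#   sed -i 's/use_lvmetad = 0/use_lvmetad = 1/' /etc/lvm/lvm.conf
#   service lvm2-lvmetad start
# Demonstrate the substitution on a sample config line:
echo "    use_lvmetad = 0" | sed 's/use_lvmetad = 0/use_lvmetad = 1/'
# -> "    use_lvmetad = 1"
```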

Comment 1 Dafna Ron 2014-04-15 14:10:17 UTC
looks like duplicate of: https://bugzilla.redhat.com/show_bug.cgi?id=1084046

Comment 4 Jon Bernard 2014-04-22 19:03:44 UTC
Confirmed on upstream, cinder-volume log output: 

2014-04-22 19:02:10.140 ERROR cinder.volume.drivers.lvm [req-5d809256-1fd4-404a-9cd2-9a78d42a495d 0c584884e8994df8bdf7449442f25972 d5ab5c34a8534b26ae7b7a7882c51b01] Volume device file path /dev/mapper/stack--volumes-volume--3b5b8d78--e3f2--4783--8fce--2f7a8adf3201 does not exist.
2014-04-22 19:02:10.158 DEBUG cinder.openstack.common.lockutils [req-5d809256-1fd4-404a-9cd2-9a78d42a495d 0c584884e8994df8bdf7449442f25972 d5ab5c34a8534b26ae7b7a7882c51b01] Released file lock "3b5b8d78-e3f2-4783-8fce-2f7a8adf3201-delete_volume" at /opt/stack/data/cinder/cinder-3b5b8d78-e3f2-4783-8fce-2f7a8adf3201-delete_volume for method "lvo_inner2"... from (pid=1950) inner /opt/stack/cinder/cinder/openstack/common/lockutils.py:239
2014-04-22 19:02:10.158 ERROR oslo.messaging.rpc.dispatcher [req-5d809256-1fd4-404a-9cd2-9a78d42a495d 0c584884e8994df8bdf7449442f25972 d5ab5c34a8534b26ae7b7a7882c51b01] Exception during message handling: Bad or unexpected response from the storage volume backend API: Volume device file path /dev/mapper/stack--volumes-volume--3b5b8d78--e3f2--4783--8fce--2f7a8adf3201 does not exist.
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/manager.py", line 144, in lvo_inner1
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     return lvo_inner2(inst, context, volume_id, **kwargs)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/openstack/common/lockutils.py", line 233, in inner
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     retval = f(*args, **kwargs)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/manager.py", line 143, in lvo_inner2
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     return f(*_args, **_kwargs)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/manager.py", line 416, in delete_volume
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     {'status': 'error_deleting'})
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/openstack/common/excutils.py", line 68, in __exit__
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/manager.py", line 405, in delete_volume
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     self.driver.delete_volume(volume_ref)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 233, in delete_volume
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     self._delete_volume(volume)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 128, in _delete_volume
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     self._clear_volume(volume, is_snapshot)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher   File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 155, in _clear_volume
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher     raise exception.VolumeBackendAPIException(data=msg)
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Volume device file path /dev/mapper/stack--volumes-volume--3b5b8d78--e3f2--4783--8fce--2f7a8adf3201 does not exist.
2014-04-22 19:02:10.158 TRACE oslo.messaging.rpc.dispatcher 
2014-04-22 19:02:10.167 ERROR oslo.messaging._drivers.common [req-5d809256-1fd4-404a-9cd2-9a78d42a495d 0c584884e8994df8bdf7449442f25972 d5ab5c34a8534b26ae7b7a7882c51b01] Returning exception Bad or unexpected response from the storage volume backend API: Volume device file path /dev/mapper/stack--volumes-volume--3b5b8d78--e3f2--4783--8fce--2f7a8adf3201 does not exist. to caller
2014-04-22 19:02:10.167 ERROR oslo.messaging._drivers.common [req-5d809256-1fd4-404a-9cd2-9a78d42a495d 0c584884e8994df8bdf7449442f25972 d5ab5c34a8534b26ae7b7a7882c51b01] ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply\n    incoming.message))\n', '  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch\n    result = getattr(endpoint, method)(ctxt, **new_args)\n', '  File "/opt/stack/cinder/cinder/volume/manager.py", line 144, in lvo_inner1\n    return lvo_inner2(inst, context, volume_id, **kwargs)\n', '  File "/opt/stack/cinder/cinder/openstack/common/lockutils.py", line 233, in inner\n    retval = f(*args, **kwargs)\n', '  File "/opt/stack/cinder/cinder/volume/manager.py", line 143, in lvo_inner2\n    return f(*_args, **_kwargs)\n', '  File "/opt/stack/cinder/cinder/volume/manager.py", line 416, in delete_volume\n    {\'status\': \'error_deleting\'})\n', '  File "/opt/stack/cinder/cinder/openstack/common/excutils.py", line 68, in __exit__\n    six.reraise(self.type_, self.value, self.tb)\n', '  File "/opt/stack/cinder/cinder/volume/manager.py", line 405, in delete_volume\n    self.driver.delete_volume(volume_ref)\n', '  File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 233, in delete_volume\n    self._delete_volume(volume)\n', '  File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 128, in _delete_volume\n    self._clear_volume(volume, is_snapshot)\n', '  File "/opt/stack/cinder/cinder/volume/drivers/lvm.py", line 155, in _clear_volume\n    raise exception.VolumeBackendAPIException(data=msg)\n', 'VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Volume device file path 
/dev/mapper/stack--volumes-volume--3b5b8d78--e3f2--4783--8fce--2f7a8adf3201 does not exist.\n']

Comment 5 Jon Bernard 2014-04-22 19:26:43 UTC
@matt: Your assessment of the problem is correct.

When a snapshot is deleted, the origin volume is placed into an 'inactive' state.  The volume delete then fails because it looks for the file '/dev/mapper/$volume-name', which no longer exists (since the volume is inactive).
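The missing path follows device-mapper's name mangling: each dash inside the VG and LV names is doubled, and the two names are joined with a single dash.  A sketch reconstructing the path from the volume name in the original report (as a hypothetical manual recovery, `lvchange -ay <vg>/<lv>` would reactivate the origin so the delete can proceed):

```shell
# Reconstruct the /dev/mapper path device-mapper derives from VG + LV names:
# dashes within each name are doubled, then the names are joined by '-'.
vg="cinder-volumes"
lv="volume-9646c9ad-73c8-4eb8-ab09-519357ac997e"
mapper_path="/dev/mapper/${vg//-/--}-${lv//-/--}"
echo "$mapper_path"
# -> /dev/mapper/cinder--volumes-volume--9646c9ad--73c8--4eb8--ab09--519357ac997e
```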

I'll start working on a patch, thanks for the info!

Comment 7 Jon Bernard 2014-04-24 19:16:12 UTC
Out of curiosity I tested on Ubuntu to see the difference in behaviour there.  The LVM commands issued are identical, but a snapshot delete on Fedora leaves the origin volume inactive, whereas it remains in the active state on Ubuntu.  I would expect a volume that was active prior to a snapshot delete operation to remain in that state.  This makes me wonder whether this is an LVM behaviour/configuration issue rather than a Cinder bug.  Maybe it's better to fix this in LVM?

Comment 8 Matt Thompson 2014-04-24 19:52:38 UTC
Hi Jon,

I did think it was perhaps something with LVM itself, but what confuses me is that our Havana tests still pass doing the same sequence of volume creation, snapshot creation, snapshot deletion, and volume deletion.  I've just verified that the Havana and Icehouse instances are running the same package versions of LVM, the kernel, etc.

Please give me a shout if I can provide any further info.

Kind Regards,
Matt

Comment 9 Matt Thompson 2014-04-24 19:52:59 UTC
Also, are you guys awaiting info from me?  :)

Comment 10 Jon Bernard 2014-04-24 20:16:43 UTC
Hi Matt, 

That is odd indeed, I will try this test as well to see if I can make any sense of it.  I've got everything I need at the moment, I'll let you know if anything else comes up.  Thanks for all the info so far.

Comment 11 Matt Thompson 2014-05-16 09:27:58 UTC
Hi Jon, were you able to find anything here?

Comment 12 Jon Bernard 2014-05-19 15:04:41 UTC
Hey Matt, sorry for the delay - I was at an OpenStack conference last week and have a sizable backlog to sort through at the moment.  I'll update this bug as soon as I get a chance to circle back.

Comment 13 Jon Bernard 2014-05-22 20:11:44 UTC
The patch here:

    https://review.openstack.org/#/c/94828/

fixes this bug.  Once upstream has completed iteration and the patch is merged, I'll pull this into our icehouse package.

Comment 14 Sergey Gotliv 2014-06-05 14:31:22 UTC
Fixed In Version: openstack-cinder-2014.1-4.el7ost
available in 2014-06-04.1 puddle

Comment 15 Yogev Rabl 2014-06-09 07:16:12 UTC
verified in version:
openstack-cinder-2014.1-4.el7ost.noarch
python-cinder-2014.1-4.el7ost.noarch
python-cinderclient-1.0.8-1.el7ost.noarch

Comment 19 errata-xmlrpc 2014-07-08 15:31:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0852.html

