Created attachment 808971 [details]
logs

Description of problem:
We had an outage and the Gluster storage went down. After the storage came back we still could not mount it because of a problem in Gluster. Since I did not know we still had a problem, I sent a detach of the volume while the storage was unavailable. Even after we fixed the issue in the storage, the detach was still stuck; only after I terminated the instance did the detach complete.

Version-Release number of selected component (if applicable):
openstack-cinder-2013.2-0.9.b3.el6ost.noarch

How reproducible:

Steps to Reproduce:
1. Configure Cinder to work with Gluster
2. Boot an instance from a volume
3. Hard shut down Gluster so that the mount will fail
4. Send a detach command from the UI
5. Start Gluster

Actual results:
The detach is stuck until we destroy the instance.

Expected results:
We should fail the detach until we can actually send the command, or be able to continue the detach once the storage is mounted again.

Additional info: logs

[root@cougar06 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 1560fa00-752b-4d7b-a747-3ef9bf483692 | available |     new      |  1   |     None    |   True   |                                      |
| 22c3e84c-1d9b-4a45-9244-06b3ab6c401a |  creating |     bla      |  10  |     None    |  False   |                                      |
| aadc9c04-17ab-42c4-8bce-c2f63cd287fa | detaching |  image_new   |  1   |     None    |   True   | 5a225901-5394-42f2-b974-538257ef2818 |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
[root@cougar06 ~(keystone_admin)]# less /var/log/nova/compute.log
[root@cougar06 ~(keystone_admin)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 1560fa00-752b-4d7b-a747-3ef9bf483692 | available |     new      |  1   |     None    |   True   |             |
| 22c3e84c-1d9b-4a45-9244-06b3ab6c401a |  creating |     bla      |  10  |     None    |  False   |             |
| aadc9c04-17ab-42c4-8bce-c2f63cd287fa | available |  image_new   |  1   |     None    |   True   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@cougar06 ~(keystone_admin)]# mkdir /tmp/cinder
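The expected behavior described above can be sketched as a small state machine. This is an illustrative sketch only (the `Volume` class and `detach` function are hypothetical, not actual Cinder code): the detach either completes once the backing mount is reachable again, or rolls the volume back to "in-use" after a bounded number of retries instead of sticking in "detaching" until the instance is destroyed.

```python
import time


class Volume:
    """Hypothetical stand-in for a Cinder volume record."""
    def __init__(self):
        self.status = "in-use"


def detach(volume, mount_available, retries=3, delay=0):
    """Detach with bounded retries instead of hanging indefinitely.

    mount_available is a callable probing whether the backing store
    (e.g. the Gluster mount) is reachable. On persistent failure the
    volume status is rolled back rather than left in 'detaching'.
    """
    volume.status = "detaching"
    for _ in range(retries):
        if mount_available():
            volume.status = "available"  # detach completed normally
            return True
        time.sleep(delay)                # back off before retrying
    volume.status = "in-use"             # roll back; report failure
    return False
```

With the mount permanently down, the call returns False and the volume ends up back in "in-use" (so the user can retry later); once the mount is up again, a fresh detach completes and the volume becomes "available".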
https://bugs.launchpad.net/cinder/+bug/1236482
Sounds pretty similar to bug 1016224.
Indeed, it's pretty similar to the other bugs. Hopefully, the patch will land soon upstream.
This is working as expected with the latest build.
Verified on:
python-novaclient-2.17.0-2.el6ost.noarch
openstack-cinder-2014.1.1-1.el6ost.noarch
python-nova-2014.1.1-2.el6ost.noarch
openstack-nova-compute-2014.1.1-2.el6ost.noarch
openstack-nova-scheduler-2014.1.1-2.el6ost.noarch
openstack-nova-novncproxy-2014.1.1-2.el6ost.noarch
openstack-nova-cert-2014.1.1-2.el6ost.noarch
openstack-nova-api-2014.1.1-2.el6ost.noarch
openstack-nova-conductor-2014.1.1-2.el6ost.noarch
openstack-nova-console-2014.1.1-2.el6ost.noarch
openstack-nova-common-2014.1.1-2.el6ost.noarch
openstack-nova-network-2014.1.1-2.el6ost.noarch
python-cinderclient-1.0.9-1.el6ost.noarch
python-cinder-2014.1.1-1.el6ost.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-0955.html