Bug 1017340 - cinder: fail to destroy an instance booted from volume when the storage is inaccessible
Summary: cinder: fail to destroy an instance booted from volume when the storage is in...
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.0 (RHEL 7)
Assignee: Eric Harney
QA Contact: Dafna Ron
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2013-10-09 16:42 UTC by Dafna Ron
Modified: 2016-04-26 23:33 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-05-29 15:54:21 UTC
Target Upstream Version:
Embargoed:


Attachments
logs (8.91 KB, application/x-gzip)
2013-10-10 11:41 UTC, Dafna Ron


Links
Launchpad 1237477

Description Dafna Ron 2013-10-09 16:42:11 UTC
Description of problem:

I installed Cinder with a GlusterFS backend (Glance uses the same backend) and booted an instance from a volume.
At some point my Gluster storage crashed (the server was destroyed), and now I cannot destroy the instance even though the libvirt/qemu process no longer exists.

Version-Release number of selected component (if applicable):

openstack-cinder-2013.2-0.9.b3.el6ost.noarch
openstack-glance-2013.2-0.11.b3.el6ost.noarch

How reproducible:

100%

Steps to Reproduce:
1. Install OpenStack using Packstack, with Cinder configured to use a GlusterFS backend.
2. Boot an instance from a volume.
3. Shut down the Gluster servers.
4. Try to destroy the instance.

Actual results:

We fail to destroy the instance, although the qemu process no longer exists.

Expected results:

We should be able to destroy the instance.

Additional info:


[root@cougar06 ~(keystone_admin)]# nova list 
+--------------------------------------+------+--------+------------+-------------+--------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                 |
+--------------------------------------+------+--------+------------+-------------+--------------------------+
| e7d99f1b-518f-4742-9d98-ab3053c4c806 | test | ACTIVE | deleting   | NOSTATE     | novanetwork=192.168.32.2 |
+--------------------------------------+------+--------+------------+-------------+--------------------------+
[root@cougar06 ~(keystone_admin)]# nova list 
+--------------------------------------+------+--------+------------+-------------+--------------------------+
| ID                                   | Name | Status | Task State | Power State | Networks                 |
+--------------------------------------+------+--------+------------+-------------+--------------------------+
| e7d99f1b-518f-4742-9d98-ab3053c4c806 | test | ACTIVE | None       | NOSTATE     | novanetwork=192.168.32.2 |
+--------------------------------------+------+--------+------------+-------------+--------------------------+
[root@cougar06 ~(keystone_admin)]# virsh -r list 
 Id    Name                           State
----------------------------------------------------

[root@cougar06 ~(keystone_admin)]# nova show e7d99f1b-518f-4742-9d98-ab3053c4c806
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| status                               | ACTIVE                                                   |
| updated                              | 2013-10-09T15:32:59Z                                     |
| OS-EXT-STS:task_state                | None                                                     |
| OS-EXT-SRV-ATTR:host                 | cougar06.scl.lab.tlv.redhat.com                          |
| key_name                             | None                                                     |
| image                                | Attempt to boot from volume - no image supplied          |
| hostId                               | 42aee086631922deee420d708c6833a79caf713221604e1d5883d194 |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000d                                        |
| OS-SRV-USG:launched_at               | 2013-10-08T16:52:42.000000                               |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | cougar06.scl.lab.tlv.redhat.com                          |
| flavor                               | m1.tiny (1)                                              |
| id                                   | e7d99f1b-518f-4742-9d98-ab3053c4c806                     |
| security_groups                      | [{u'name': u'default'}]                                  |
| OS-SRV-USG:terminated_at             | None                                                     |
| user_id                              | c02995f25ba44cfab1a3cbd419f045a1                         |
| name                                 | test                                                     |
| created                              | 2013-10-08T16:52:34Z                                     |
| tenant_id                            | c77235c29fd0431a8e6628ef6d18e07f                         |
| OS-DCF:diskConfig                    | MANUAL                                                   |
| metadata                             | {}                                                       |
| novanetwork network                  | 192.168.32.2                                             |
| os-extended-volumes:volumes_attached | [{u'id': u'e78978af-0f46-4caf-948b-218afb5de6ef'}]       |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| progress                             | 0                                                        |
| OS-EXT-STS:power_state               | 0                                                        |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| config_drive                         |                                                          |
+--------------------------------------+----------------------------------------------------------+
[root@cougar06 ~(keystone_admin)]# mount 
/dev/mapper/vg0-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/srv/loopback-device/device1 on /srv/node/device1 type ext4 (rw,noatime,nodiratime,nobarrier,user_xattr,nobarrier,loop=/dev/loop0)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
10.35.2.44:/Dafna_glance on /var/lib/glance/images type fuse.glusterfs (rw,default_permissions,default_permissions,allow_other,max_read=131072,allow_other,max_read=131072)
10.35.2.44:/Dafna_rhos on /var/lib/cinder/mnt/4a31bc6e5fb9244971075aa23d364364 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
10.35.2.44:/Dafna_rhos on /var/lib/nova/mnt/4a31bc6e5fb9244971075aa23d364364 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@cougar06 ~(keystone_admin)]# ls -l /var/lib/nova/mnt/
ls: cannot access /var/lib/nova/mnt/4a31bc6e5fb9244971075aa23d364364: Transport endpoint is not connected
total 0
d????????? ? ? ? ?            ? 4a31bc6e5fb9244971075aa23d364364
[root@cougar06 ~(keystone_admin)]# ls -l /var/lib/cinder/mnt/
ls: cannot access /var/lib/cinder/mnt/4a31bc6e5fb9244971075aa23d364364: Transport endpoint is not connected
total 0
d????????? ? ? ? ?            ? 4a31bc6e5fb9244971075aa23d364364
[root@cougar06 ~(keystone_admin)]# ls -l /var/lib/glance/
ls: cannot access /var/lib/glance/images: Transport endpoint is not connected
total 0
d????????? ? ? ? ?            ? images
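The `d?????????` entries and "Transport endpoint is not connected" errors above are the signature of a stale FUSE mount: the parent directory still lists the mount point, but stat() on it fails with ENOTCONN. A minimal sketch of how such a mount point could be detected (the helper name `is_stale_mount` is illustrative, not part of Cinder):

```python
import errno
import os


def is_stale_mount(path):
    """Return True if stat() on path fails the way a dead network mount does.

    A disconnected fuse.glusterfs mount point is still listed by its
    parent directory, but stat() on it raises ENOTCONN ("Transport
    endpoint is not connected"); EIO and ESTALE are comparable failure
    modes for other network filesystems.
    """
    try:
        os.stat(path)
    except OSError as e:
        return e.errno in (errno.ENOTCONN, errno.EIO, errno.ESTALE)
    return False
```

A healthy directory (or a path that simply does not exist, which raises ENOENT) is not considered stale; only the "entry listed but unreachable" state is.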


2013-10-09 18:32:55.300 7297 ERROR cinder.openstack.common.rpc.common [req-27488935-1665-4f20-a001-a14924e91e87 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data\n    **args)\n', '  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch\n    return getattr(proxyobj, method)(ctxt, **kwargs)\n', '  File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 502, in detach_volume\n    self.driver.ensure_export(context, volume)\n', '  File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 816, in ensure_export\n    self._ensure_share_mounted(volume[\'provider_location\'])\n', '  File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 973, in _ensure_share_mounted\n    self._mount_glusterfs(glusterfs_share, mount_path, ensure=True)\n', '  File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 1041, in _mount_glusterfs\n    self._execute(\'mkdir\', \'-p\', mount_path)\n', '  File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 142, in execute\n    return processutils.execute(*cmd, **kwargs)\n', '  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py", line 173, in execute\n    cmd=\' \'.join(cmd))\n', 'ProcessExecutionError: Unexpected error while running command.\nCommand: mkdir -p /var/lib/cinder/mnt/4a31bc6e5fb9244971075aa23d364364\nExit code: 1\nStdout: \'\'\nStderr: "mkdir: cannot create directory `/var/lib/cinder/mnt/4a31bc6e5fb9244971075aa23d364364\': File exists\\n"\n']
2013-10-09 18:32:59.505 7297 ERROR cinder.openstack.common.rpc.amqp [req-ffbbfc8f-af00-4bdb-b5c8-2b1654892c77 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Exception during message handling
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp     **args)
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 502, in detach_volume
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp     self.driver.ensure_export(context, volume)
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 816, in ensure_export
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp     self._ensure_share_mounted(volume['provider_location'])
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 973, in _ensure_share_mounted
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp     self._mount_glusterfs(glusterfs_share, mount_path, ensure=True)
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 1041, in _mount_glusterfs
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp     self._execute('mkdir', '-p', mount_path)
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 142, in execute
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp     return processutils.execute(*cmd, **kwargs)
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py", line 173, in execute
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp     cmd=' '.join(cmd))
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp ProcessExecutionError: Unexpected error while running command.
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp Command: mkdir -p /var/lib/cinder/mnt/4a31bc6e5fb9244971075aa23d364364
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp Exit code: 1
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp Stdout: ''
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp Stderr: "mkdir: cannot create directory `/var/lib/cinder/mnt/4a31bc6e5fb9244971075aa23d364364': File exists\n"
2013-10-09 18:32:59.505 7297 TRACE cinder.openstack.common.rpc.amqp 
2013-10-09 18:32:59.507 7297 ERROR cinder.openstack.common.rpc.common [req-ffbbfc8f-af00-4bdb-b5c8-2b1654892c77 c02995f25ba44cfab1a3cbd419f045a1 c77235c29fd0431a8e6628ef6d18e07f] Returning exception Unexpected error while running command.

Comment 1 Dafna Ron 2013-10-10 11:41:32 UTC
Created attachment 810474 [details]
logs

Comment 2 Eric Harney 2013-11-20 21:46:34 UTC
Happens because /var/lib/cinder/mnt/4a31bc6e5fb9244971075aa23d364364 already exists, but I/O on it fails (the GlusterFS mount is stale).
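The `mkdir -p` failure in the tracebacks above is consistent with this: the kernel still has a directory entry for the dead mount point, so mkdir() returns EEXIST, while the stat() that would normally let `-p` treat an existing directory as success also fails on the stale mount. A minimal defensive sketch of mount-point creation that tolerates such a leftover (the helper name `ensure_mount_point` is hypothetical, not the actual Cinder code or the upstream fix):

```python
import errno
import os


def ensure_mount_point(path):
    """Create a mount-point directory, tolerating a stale leftover.

    On a disconnected FUSE mount, stat(path) fails even though the
    directory entry still exists, so a plain `mkdir -p` falls through
    to mkdir() and dies with EEXIST. Treating EEXIST as success is
    safe here: the directory is present, whether healthy or stale,
    and a subsequent mount() can replace the stale endpoint.
    """
    try:
        os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
```

With this behavior, detach/delete would not abort merely because the mount-point directory already exists in an unreachable state.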

