Bug 1023065 - GlusterFS: cannot boot an instance from cloned volume
Summary: GlusterFS: cannot boot an instance from cloned volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: beta
Target Release: 4.0
Assignee: Eric Harney
QA Contact: Dafna Ron
URL:
Whiteboard: storage
Duplicates: 1023040
Depends On:
Blocks: 976905
 
Reported: 2013-10-24 14:27 UTC by Dafna Ron
Modified: 2016-04-26 16:56 UTC
CC: 6 users

Fixed In Version: openstack-cinder-2013.2-2.el6ost
Doc Type: Bug Fix
Doc Text:
Cause: A bug in the Cinder GlusterFS driver.
Consequence: The file backing a GlusterFS volume created by cloning another volume had the wrong file name, causing subsequent operations on the clone to fail.
Fix: Patch the Cinder GlusterFS driver to use the correct file name when cloning a volume.
Result: Cloned volumes operate normally.
Clone Of:
Environment:
Last Closed: 2013-12-20 00:32:19 UTC
Target Upstream Version:
Embargoed:


Attachments
logs (15.26 KB, application/x-gzip)
2013-10-24 14:35 UTC, Dafna Ron


Links:
- Launchpad bug 1244238
- OpenStack Gerrit change 53735
- Red Hat Bugzilla 1023065 (CLOSED): GlusterFS: cannot boot an instance from cloned volume (last updated 2021-02-22 00:41:40 UTC)
- Red Hat Product Errata RHEA-2013:1859 (SHIPPED_LIVE): Red Hat Enterprise Linux OpenStack Platform Enhancement Advisory (2013-12-21 00:01:48 UTC)

Internal Links: 1023065 1045428

Description Dafna Ron 2013-10-24 14:27:33 UTC
Description of problem:

I tried to boot an instance from a cloned volume and got a rootwrap error.
It seems that the volume's backing file is missing under the mount point, yet cinder list reports the volume as available.
When we try to boot, qemu-img (run via rootwrap) fails because the volume file does not exist.
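
A quick way to confirm the symptom from the Cinder node; the path below is the one from the traceback further down, and this snippet is purely illustrative, not from the original report:

import os

# Backing file Cinder expects for the cloned volume (taken from the error log).
path = ('/var/lib/cinder/mnt/792e7ed79ec67a83b6a55e1479a7c82f/'
        'volume-0466799b-0810-4c69-a894-0f395fe89452')
print(os.path.exists(path))  # False: the backing file was never created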

Version-Release number of selected component (if applicable):

openstack-cinder-2013.2-0.11.rc1.el6ost.noarch
fuse-2.8.3-4.el6.x86_64
fuse-libs-2.8.3-4.el6.x86_64
glusterfs-fuse-3.4.0.33rhs-1.el6rhs.x86_64
libvirt-0.10.2-27.el6.x86_64
libvirt-client-0.10.2-27.el6.x86_64
libvirt-python-0.10.2-27.el6.x86_64
qemu-img-rhev-0.12.1.2-2.413.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.413.el6.x86_64

How reproducible:

100%

Steps to Reproduce:
1. Install GlusterFS as the Cinder backend with 2 compute nodes using packstack.
2. Create a volume from an image.
3. Clone a new volume from that volume.
4. Boot an instance from the cloned volume (a client-API sketch of steps 2-3 follows).
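
For reference, a hypothetical equivalent of steps 2-3 using python-cinderclient v1; the credentials, endpoint, and image ID are placeholders:

from cinderclient.v1 import client

cinder = client.Client('admin', 'password', 'admin',
                       'http://<controller>:5000/v2.0')

# Step 2: create a 10 GB bootable volume from a Glance image.
base = cinder.volumes.create(10, imageRef='<glance-image-id>',
                             display_name='from_img')
# Step 3: clone a new volume from it.
clone = cinder.volumes.create(10, source_volid=base.id,
                              display_name='from_vol2')
# Step 4, booting an instance from clone.id, then fails as shown below.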

Actual results:

We fail to boot the instance, and the volume log shows a rootwrap error.

Expected results:

We should be able to boot from the volume.

Additional info:

2013-10-24 16:44:00.413 2483 ERROR cinder.openstack.common.rpc.common [req-26521926-e9cb-4308-aacd-425ba2a1932a a660044c9b074450aaa45fba0d641fcc e27aae2598b94dca88cd0408406e0848] ['Traceback (most recent call last):\n', '  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data\n    **args)\n', '  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch\n    return getattr(proxyobj, method)(ctxt, **kwargs)\n', '  File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 808, in wrapper\n    return func(self, *args, **kwargs)\n', '  File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 605, in initialize_connection\n    conn_info = self.driver.initialize_connection(volume, connector)\n', '  File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 851, in initialize_connection\n    info = self._qemu_img_info(path)\n', '  File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 132, in _qemu_img_info\n    info = image_utils.qemu_img_info(path)\n', '  File "/usr/lib/python2.6/site-packages/cinder/image/image_utils.py", line 191, in qemu_img_info\n    out, err = utils.execute(*cmd, run_as_root=True)\n', '  File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 142, in execute\n    return processutils.execute(*cmd, **kwargs)\n', '  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py", line 173, in execute\n    cmd=\' \'.join(cmd))\n', 'ProcessExecutionError: Unexpected error while running command.\nCommand: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C LANG=C qemu-img info /var/lib/cinder/mnt/792e7ed79ec67a83b6a55e1479a7c82f/volume-0466799b-0810-4c69-a894-0f395fe89452\nExit code: 1\nStdout: \'\'\nStderr: "Could not open \'/var/lib/cinder/mnt/792e7ed79ec67a83b6a55e1479a7c82f/volume-0466799b-0810-4c69-a894-0f395fe89452\': No such file or directory\\n"\n']
2013-10-24 16:44:04.763 2483 ERROR cinder.openstack.common.rpc.amqp [req-53eb21ec-fdee-44f4-85fe-37ea71ebf1a1 a660044c9b074450aaa45fba0d641fcc e27aae2598b94dca88cd0408406e0848] Exception during message handling
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp     **args)
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 808, in wrapper
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp     return func(self, *args, **kwargs)
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 605, in initialize_connection
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp     conn_info = self.driver.initialize_connection(volume, connector)
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 851, in initialize_connection
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp     info = self._qemu_img_info(path)
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 132, in _qemu_img_info
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp     info = image_utils.qemu_img_info(path)
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/image/image_utils.py", line 191, in qemu_img_info
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp     out, err = utils.execute(*cmd, run_as_root=True)
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 142, in execute
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp     return processutils.execute(*cmd, **kwargs)
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py", line 173, in execute
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp     cmd=' '.join(cmd))
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp ProcessExecutionError: Unexpected error while running command.
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C LANG=C qemu-img info /var/lib/cinder/mnt/792e7ed79ec67a83b6a55e1479a7c82f/volume-0466799b-0810-4c69-a894-0f395fe89452
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp Exit code: 1
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp Stdout: ''
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp Stderr: "Could not open '/var/lib/cinder/mnt/792e7ed79ec67a83b6a55e1479a7c82f/volume-0466799b-0810-4c69-a894-0f395fe89452': No such file or directory\n"
2013-10-24 16:44:04.763 2483 TRACE cinder.openstack.common.rpc.amqp 
2013-10-24 16:44:04.765 2483 ERROR cinder.openstack.common.rpc.common [req-53eb21ec-fdee-44f4-85fe-37ea71ebf1a1 a660044c9b074450aaa45fba0d641fcc e27aae2598b94dca88cd0408406e0848] Returning exception Unexpected error while running command.
Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf env LC_ALL=C LANG=C qemu-img info /var/lib/cinder/mnt/792e7ed79ec67a83b6a55e1479a7c82f/volume-0466799b-0810-4c69-a894-0f395fe89452
Exit code: 1
Stdout: ''


Command used to create the cloned volume:

cinder create 10 --source-volid f7416ba6-af45-47d3-a333-478447a1ab54 --display-name from_vol2

[root@cougar06 /(keystone_admin)]# cinder list 
/usr/lib/python2.6/site-packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site-packages/backports/__init__.pyc, but /usr/lib/python2.6/site-packages is being added to sys.path
  from pkg_resources import get_distribution, ResolutionError
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 0466799b-0810-4c69-a894-0f395fe89452 | available |   from_vol   |  10  |     None    |   true   |             |
| 1e36f3ac-27ef-46ea-b5fa-686c4da9f449 | available |     test     |  10  |     None    |  false   |             |
| 5d658297-5037-4203-9482-b072a2bc7526 | available |  from_vol1   |  10  |     None    |   true   |             |
| b9b40188-5a1d-4ee8-bb2d-17fef5a24e00 | available |  from_vol2   |  10  |     None    |   true   |             |
| f7416ba6-af45-47d3-a333-478447a1ab54 | available |   from_img   |  10  |     None    |   true   |             |
| f9d6b98f-8394-4a01-9424-f23897382d87 | available |    dafna     |  10  |     None    |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@cougar06 /(keystone_admin)]# 


This is the compute error:

2013-10-24 16:44:07.900 2689 ERROR nova.compute.manager [req-171b2e6c-7c07-43f8-81b2-89052706e6b7 a660044c9b074450aaa45fba0d641fcc e27aae2598b94dca88cd0408406e0848] [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b] Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-53eb21ec-fdee-44f4-85fe-37ea71ebf1a1)
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b] Traceback (most recent call last):
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1028, in _build_instance
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]     context, instance, bdms)
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1393, in _prep_block_device
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]     instance=instance)
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 1371, in _prep_block_device
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]     self._await_block_device_map_created) +
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]   File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 283, in attach_block_devices
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]     block_device_mapping)
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]   File "/usr/lib/python2.6/site-packages/nova/virt/block_device.py", line 170, in attach
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]     connector)
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]   File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 176, in wrapper
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]     res = method(self, ctx, volume_id, *args, **kwargs)
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]   File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 274, in initialize_connection
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]     connector)
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]   File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 306, in initialize_connection
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]     {'connector': connector})[1]['connection_info']
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]   File "/usr/lib/python2.6/site-packages/cinderclient/v1/volumes.py", line 237, in _action
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]     return self.api.client.post(url, body=body)
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]   File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 210, in post
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]     return self._cs_request(url, 'POST', **kwargs)
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]   File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 174, in _cs_request
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]     **kwargs)
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]   File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 157, in request
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]     raise exceptions.from_response(resp, body)
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b] ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-53eb21ec-fdee-44f4-85fe-37ea71ebf1a1)
2013-10-24 16:44:07.900 2689 TRACE nova.compute.manager [instance: 4dc50e69-9d84-4b19-b7ec-4bf0628d751b]

Comment 1 Dafna Ron 2013-10-24 14:35:06 UTC
Created attachment 815815
logs

Comment 2 Eric Harney 2013-10-24 21:05:14 UTC
The root of this problem is in the GlusterFS driver's clone operation (sketched below). A simpler test is probably: clone a volume, then create a snapshot/clone again from the cloned volume.
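
To illustrate what goes wrong in the clone operation, here is a minimal Python sketch of the corrected naming logic. All names here (local_path, create_cloned_volume) are hypothetical, not the actual driver code; the real patch is the Gerrit change (53735) linked above.

import os
import shutil

def local_path(mount_point, volume_id):
    # Cinder names GlusterFS backing files "volume-<id>" on the share.
    return os.path.join(mount_point, 'volume-%s' % volume_id)

def create_cloned_volume(mount_point, src_id, dest_id):
    src = local_path(mount_point, src_id)
    # The bug: the destination file name was derived from the wrong volume,
    # so the file Cinder later looked up (volume-<dest_id>) never existed
    # and qemu-img info failed with "No such file or directory".
    dest = local_path(mount_point, dest_id)  # fix: name the copy after the clone
    shutil.copyfile(src, dest)  # the real driver copies/converts via qemu-img
    return dest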

Comment 3 Eric Harney 2013-10-28 19:52:00 UTC
*** Bug 1023040 has been marked as a duplicate of this bug. ***

Comment 5 Dafna Ron 2013-11-18 14:12:43 UTC
Verified on openstack-cinder-2013.2-2.el6ost.noarch.

Comment 8 errata-xmlrpc 2013-12-20 00:32:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2013-1859.html

