Bug 1038260 - GlusterFS volume attach fails with Nova exception due to unexpected share config format
Summary: GlusterFS volume attach fails with Nova exception due to unexpected share config format
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: z1
Target Release: 4.0
Assignee: Eric Harney
QA Contact: Yogev Rabl
URL:
Whiteboard: storage
Depends On:
Blocks: 1038537 1045196
 
Reported: 2013-12-04 18:16 UTC by Eric Harney
Modified: 2019-09-09 16:43 UTC

Fixed In Version: openstack-cinder-2013.2.1-4.el6ost
Doc Type: Bug Fix
Doc Text:
Previously, specifying GlusterFS shares using an invalid format would result in Python exceptions occurring in Block Storage and Compute. This update addresses the issue by ignoring shares specified using an unexpected format. A warning message now appears at initialization time, rather than when the share is first accessed.
Clone Of: 1020979
Environment:
Last Closed: 2014-01-23 14:23:44 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 65540 0 None MERGED NFS/GlusterFS: Skip incorrectly formatted shares 2020-09-23 09:56:17 UTC
Red Hat Product Errata RHBA-2014:0046 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform 4 Bug Fix and Enhancement Advisory 2014-01-23 00:51:59 UTC

Description Eric Harney 2013-12-04 18:16:03 UTC
There appears to be an error generated if you write a Cinder glusterfs_shares_config file with improperly formatted entries.

Both Cinder and Nova should do more robust checking of these config entries.
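Per the Doc Text above and the linked gerrit change, the eventual fix skips shares specified in an unexpected format and logs a warning at initialization time. A minimal sketch of that behavior, assuming a 'host:/volume' entry format; the function name is hypothetical and this is not the actual Cinder code:

```python
import logging

LOG = logging.getLogger(__name__)

def load_shares(config_lines):
    """Return only share entries in the expected 'host:/volume' format.

    Hypothetical sketch: malformed entries are skipped with a warning
    when the shares config is read, instead of raising an exception
    later when the share is first accessed.
    """
    shares = []
    for raw in config_lines:
        entry = raw.strip()
        if not entry or entry.startswith('#'):
            continue  # skip blank lines and comments
        host, sep, volume = entry.partition(':/')
        if not sep or not host or not volume:
            LOG.warning("Ignoring invalid GlusterFS share entry: %s", entry)
            continue
        shares.append(entry)
    return shares
```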


+++ This bug was initially created as a clone of Bug #1020979 +++

Description of problem:
...

--- Additional comment from shilpa on 2013-11-27 02:04:10 EST ---

Adding to comment #9: "error : virDomainDiskDefParseXML:3719 : XML error: missing port for host" was seen during my previous tests reported in comment #3.

I reinstalled RHOS and retested yesterday and today, setting source_ports to ['24007'], [''], or [None]; it did *not* produce the "missing port for host" error again.

--- Additional comment from shilpa on 2013-12-04 01:28:15 EST ---

I should have mentioned this in my earlier comment #10: I no longer see the "missing port for host" error, but attaching the cinder volume still fails.

--- Additional comment from Xavier Queralt on 2013-12-04 02:57:01 EST ---

Any clue on the error that prevents the volume from being attached after the port problem is fixed? Is there anything relevant in compute's or libvirt's logs?

--- Additional comment from shilpa on 2013-12-04 04:26:04 EST ---

Yes, with port being set to ['24007'], I see relevant errors in the compute logs:


[instance: 7e5e0171-940e-4d86-97cc-01a061390f50] Attaching volume 503fec29-e6d3-4c66-b0d0-9afe3bbde02c to /dev/vdb
2013-12-04 14:53:38.185 14535 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 10.70.36.32
2013-12-04 14:53:38.571 14535 ERROR nova.compute.manager [req-a474cb47-fe55-481c-b12f-5db24cebf803 3a64469ca7064f2fa5222470e912af73 c2cdad15f08d40f9a3a12d16ed2d2121] [instance: 7e5e0171-940e-4d86-97cc-01a061390f50] Failed to attach volume 503fec29-e6d3-4c66-b0d0-9afe3bbde02c at /dev/vdb
2013-12-04 14:53:38.571 14535 TRACE nova.compute.manager [instance: 7e5e0171-940e-4d86-97cc-01a061390f50] Traceback (most recent call last):
2013-12-04 14:53:38.571 14535 TRACE nova.compute.manager [instance: 7e5e0171-940e-4d86-97cc-01a061390f50]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3669, in _attach_volume
2013-12-04 14:53:38.571 14535 TRACE nova.compute.manager [instance: 7e5e0171-940e-4d86-97cc-01a061390f50]     encryption=encryption)
2013-12-04 14:53:38.571 14535 TRACE nova.compute.manager [instance: 7e5e0171-940e-4d86-97cc-01a061390f50]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1071, in attach_volume
2013-12-04 14:53:38.571 14535 TRACE nova.compute.manager [instance: 7e5e0171-940e-4d86-97cc-01a061390f50]     disk_info)
2013-12-04 14:53:38.571 14535 TRACE nova.compute.manager [instance: 7e5e0171-940e-4d86-97cc-01a061390f50]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1030, in volume_driver_method
2013-12-04 14:53:38.571 14535 TRACE nova.compute.manager [instance: 7e5e0171-940e-4d86-97cc-01a061390f50]     return method(connection_info, *args, **kwargs)
2013-12-04 14:53:38.571 14535 TRACE nova.compute.manager [instance: 7e5e0171-940e-4d86-97cc-01a061390f50]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/volume.py", line 829, in connect_volume
2013-12-04 14:53:38.571 14535 TRACE nova.compute.manager [instance: 7e5e0171-940e-4d86-97cc-01a061390f50]     vol_name = data['export'].split('/')[1]
2013-12-04 14:53:38.571 14535 TRACE nova.compute.manager [instance: 7e5e0171-940e-4d86-97cc-01a061390f50] IndexError: list index out of range




In libvirt.log I see a few errors, but with different timestamps, so I am guessing they are not relevant:

2013-12-04 09:17:14.362+0000: 12197: error : virNetSocketReadWire:1194 : End of file while reading data: Input/output error
2013-12-04 09:17:14.363+0000: 12197: error : virNetSocketReadWire:1194 : End of file while reading data: Input/output error
2013-12-04 09:21:20.260+0000: 12197: error : virNetSocketReadWire:1194 : End of file while reading data: Input/output error

--- Additional comment from Eric Harney on 2013-12-04 08:53:15 EST ---

That error makes it look like there is something unexpected in the connection_info object... like it doesn't have a "/" in the share configured in the Cinder gluster_shares file.
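The IndexError in the compute traceback above follows directly from that: `split('/')` on an export string that contains no '/' returns a one-element list, so index 1 is out of range. A minimal reproduction (host and volume names are illustrative):

```python
# Well-formed export 'host:/volume' splits into two parts on '/'.
data = {'export': '10.70.36.32:/cinder-vol'}
vol_name = data['export'].split('/')[1]          # 'cinder-vol'
source_host = data['export'].split('/')[0][:-1]  # '10.70.36.32' (strips trailing ':')

# Malformed export with no '/': split() returns a single element,
# so indexing [1] raises the IndexError seen in the compute log.
bad = {'export': '10.70.36.32:cinder-vol'}
try:
    bad['export'].split('/')[1]
except IndexError as exc:
    print(exc)  # list index out of range
```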

Can you post your Cinder gluster_shares file contents?

Comment 2 shilpa 2013-12-05 07:26:49 UTC
As Eric suggested, after adding a '/' before the gluster volume name in the cinder gluster_shares file and setting the port to '24007', I can successfully attach the volume.

Changes that were made while testing:

vi /etc/cinder/shares.conf
10.70.x.x:/cinder-vol

nova/virt/libvirt/volume.py

if 'gluster' in CONF.qemu_allowed_storage_drivers:
    vol_name = data['export'].split('/')[1]
    source_host = data['export'].split('/')[0][:-1]

    conf.source_ports = ['24007']
    conf.source_type = 'network'
    conf.source_protocol = 'gluster'
    conf.source_hosts = [source_host]
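A more defensive version of the export parsing in the snippet above might look like the following. This is a hypothetical illustration, not the actual upstream fix (the linked gerrit change instead skips badly formatted shares in the driver):

```python
def parse_gluster_export(export):
    """Split a GlusterFS export of the form 'host:/volume'.

    Hypothetical sketch: partition() never raises IndexError, so a
    malformed entry can be rejected with a clear error message instead
    of the bare 'list index out of range' seen in the traceback.
    """
    source_host, sep, vol_name = export.partition(':/')
    if not sep or not source_host or not vol_name:
        raise ValueError("Unexpected GlusterFS export format: %r" % export)
    return source_host, vol_name
```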



We see the volume being attached to the instance.

# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| 93069bb9-9c94-4c44-8f9b-a98a9009795c | in-use |     vol1     |  1   |     None    |  false   | 6040bc43-944f-4cb7-b45e-c596a074fb86 |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

Comment 6 Yogev Rabl 2014-01-15 12:27:23 UTC
verified on: 
python-cinderclient-1.0.7-2.el6ost.noarch
python-cinder-2013.2.1-4.el6ost.noarch
openstack-cinder-2013.2.1-4.el6ost.noarch

Comment 9 Lon Hohberger 2014-02-04 17:20:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2014-0046.html

