Bug 1206699 - FC volume fails to attach to instance on RHEL 7
Summary: FC volume fails to attach to instance on RHEL 7
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 6.0 (Juno)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: async
Target Release: 6.0 (Juno)
Assignee: Lee Yarwood
QA Contact: nlevinki
URL:
Whiteboard:
Depends On:
Blocks: 1286112
 
Reported: 2015-03-27 19:24 UTC by wdaniel
Modified: 2023-02-22 23:02 UTC
CC List: 22 users

Fixed In Version: openstack-nova-2014.2.3-34.el7ost
Doc Type: Bug Fix
Doc Text:
When using multipath-backed volumes via Block Storage (cinder), attach attempts failed without error. The handling of device identifiers has been updated, and volumes can now be attached.
Clone Of:
: 1286112 (view as bug list)
Environment:
Last Closed: 2016-01-07 20:48:32 UTC
Target Upstream Version:
Embargoed:


Links
System                             ID              Priority  Status        Summary                                                  Last Updated
Launchpad                          1401799         None      None          None                                                     Never
OpenStack gerrit                   169873          None      MERGED        Fix multipath device discovery when UFN is enabled.     2020-12-11 02:40:02 UTC
Red Hat Issue Tracker              OSP-16700       None      None          None                                                     2022-07-09 08:03:48 UTC
Red Hat Knowledge Base (Solution)  1401233         None      None          None                                                     Never
Red Hat Product Errata             RHSA-2016:0013  normal    SHIPPED_LIVE  Moderate: openstack-nova security and bug fix advisory  2016-01-08 01:47:38 UTC

Description wdaniel 2015-03-27 19:24:02 UTC
Description of problem:

When using HP 3PAR volumes via cinder, attach attempts fail without any reported error. From the upstream bug:

---
While trying to attach HP 3PAR FC/iSCSI volumes to an instance running on KVM (RHEL 7.0), libvirt fails with the message below.

-----------------------------
if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
libvirtError: Failed to open file '/dev/mapper/360002ac000000000000001280000943e': No such file or directory
-----------------------------

Find attached the compute log.

The following call from attach_volume (nova/virt/libvirt/driver.py) fails:
            virt_dom.attachDeviceFlags(conf.to_xml(), flags)

Debugging the problem further, I observe that the "attachDeviceFlags" call to libvirt returns -1.
---
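
For context, the failing call quoted above is nova handing a generated <disk> XML fragment to libvirt via attachDeviceFlags. Below is a minimal standalone sketch of that operation; the domain name is hypothetical and the XML is modeled on the log output, not taken from the reporter's environment.

-----------------------------
import libvirt

DISK_XML = """
<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none"/>
  <source dev="/dev/mapper/360002ac000000000000001280000943e"/>
  <target bus="virtio" dev="vdb"/>
</disk>
"""

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('example-instance')  # hypothetical domain name
flags = libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG
# libvirt raises libvirtError ("Failed to open file ...: No such file or
# directory") when the /dev/mapper source path does not exist on the host.
dom.attachDeviceFlags(DISK_XML, flags)
-----------------------------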

Version-Release number of selected component (if applicable):

openstack-nova-compute-2014.2.1-14.el7ost.noarch 
RHEL 7
RHOSP 6

How reproducible:

Very

Steps to Reproduce:
1. Create cinder volume backed by HP 3Par
2. Attach volume to instance

Actual results:

The volume is listed in cinder as not attached to anything, and no failures are reported by the nova API.

Expected results:

Volume attaches successfully and can be used in instances.

Additional info:

Comment 7 Kurt Martin 2015-03-30 23:54:57 UTC
The Nova code that all cinder Fibre Channel drivers call during attachment is in the connect_volume method starting at L1301 (https://github.com/openstack/nova/blob/master/nova/virt/libvirt/volume.py#L1301). This should be the area of focus for what is being called at attach time.
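
For orientation, here is a simplified sketch of what that attach path does around multipath discovery. This is an approximation of the Juno-era flow, not the actual source; the helper names below are assumptions.

-----------------------------
# Approximate sketch of the FC connect_volume flow in
# nova/virt/libvirt/volume.py; names and details are simplified.
from nova.storage import linuxscsi

def connect_volume_sketch(self, connection_info, disk_info):
    conn_props = connection_info['data']

    # 1. Scan the FC HBAs and wait for a SCSI block device such as
    #    /dev/sdb to appear (details omitted in this sketch).
    host_device = self._wait_for_scsi_device(conn_props)  # hypothetical helper

    # 2. With multipath enabled, ask linuxscsi which multipath map owns
    #    the path and switch to the /dev/mapper/... node it reports.
    if self.use_multipath:
        mdev_info = linuxscsi.find_multipath_device(host_device)
        if mdev_info is not None:
            # This is where the bug bites: with user_friendly_names enabled
            # the reported path is /dev/mapper/<WWID>, which does not exist.
            host_device = mdev_info['device']

    connection_info['data']['device_path'] = host_device
    return connection_info
-----------------------------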

Comment 23 Kurt Martin 2015-04-01 17:01:06 UTC
I have looked at the recent log files that were posted to the dropbox account, and you can see the problem in the following stack trace from the nova log:

2015-03-31 22:36:14.246 18828 DEBUG nova.openstack.common.processutils [req-7368ca0c-dce9-49e3-9544-f64d644eb5dd None] Running cmd (subprocess): sudo nova-rootwrap /etc/nova/rootwrap.conf multipath -l /dev/sdb execute /usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py:161
2015-03-31 22:36:14.317 18828 DEBUG nova.openstack.common.processutils [req-7368ca0c-dce9-49e3-9544-f64d644eb5dd None] Result was 0 execute /usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py:195
2015-03-31 22:36:14.317 18828 DEBUG nova.storage.linuxscsi [req-7368ca0c-dce9-49e3-9544-f64d644eb5dd None] Found multipath device = /dev/mapper/360002ac000000000000000de00003b48 find_multipath_device /usr/lib/python2.7/site-packages/nova/storage/linuxscsi.py:124
2015-03-31 22:36:14.318 18828 DEBUG nova.virt.libvirt.volume [req-7368ca0c-dce9-49e3-9544-f64d644eb5dd None] Multipath device discovered /dev/mapper/360002ac000000000000000de00003b48 connect_volume /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume.py:1016
2015-03-31 22:36:14.320 18828 DEBUG nova.openstack.common.lockutils [req-7368ca0c-dce9-49e3-9544-f64d644eb5dd ] Releasing semaphore "connect_volume" lock /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:238
2015-03-31 22:36:14.320 18828 DEBUG nova.openstack.common.lockutils [req-7368ca0c-dce9-49e3-9544-f64d644eb5dd ] Semaphore / lock released "connect_volume" inner /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:275
2015-03-31 22:36:14.322 18828 DEBUG nova.virt.libvirt.config [req-7368ca0c-dce9-49e3-9544-f64d644eb5dd None] Generated XML ('<disk type="block" device="disk">\n  <driver name="qemu" type="raw" cache="none"/>\n  <source dev="/dev/mapper/360002ac000000000000000de00003b48"/>\n  <target bus="virtio" dev="vdb"/>\n  <serial>27ab3eb8-d040-4503-8fa1-d115b0b3b9cb</serial>\n</disk>\n',)  to_xml /usr/lib/python2.7/site-packages/nova/virt/libvirt/config.py:82
2015-03-31 22:36:14.324 18828 ERROR nova.virt.libvirt.driver [req-7368ca0c-dce9-49e3-9544-f64d644eb5dd None] [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7] Failed to attach volume at mountpoint: /dev/vdb
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7] Traceback (most recent call last):
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1402, in attach_volume
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7]     virt_dom.attachDeviceFlags(conf.to_xml(), flags)
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7]     rv = execute(f, *args, **kwargs)
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7]     six.reraise(c, e, tb)
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7]     rv = meth(*args, **kwargs)
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 439, in attachDeviceFlags
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7]     if ret == -1: raise libvirtError ('virDomainAttachDeviceFlags() failed', dom=self)
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7] libvirtError: Failed to open file '/dev/mapper/360002ac000000000000000de00003b48': No such file or directory
2015-03-31 22:36:14.324 18828 TRACE nova.virt.libvirt.driver [instance: fe28f0d6-9b34-4991-9199-17f02e54cbc7] 
2015-03-31 22:36:14.340 18828 DEBUG nova.openstack.common.lockutils [req-7368ca0c-dce9-49e3-9544-f64d644eb5dd ] Created new semaphore "connect_volume" internal_lock /usr/lib/python2.7/site-packages/nova/openstack/common/lockutils.py:206

It is clear that the XML libvirt is trying to apply to the instance is invalid because it cannot locate /dev/mapper/360002ac000000000000000de00003b48 on the system. The "Found multipath device = /dev/mapper/360002ac000000000000000de00003b48 find_multipath_device /usr/lib/python2.7/site-packages/nova/storage/linuxscsi.py:124" entry is just a log message built from a constructed string, as you can see if you examine the code above line #124 of linuxscsi.py; it does not prove that the device node actually exists.

So, the question remains: why is the /dev/mapper/360002ac000000000000000de00003b48 file not present on this RHEL 7 system?
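
One quick way to confirm this on the compute node is to compare the path Nova built against what actually exists under /dev/mapper. This is a hypothetical diagnostic snippet, not part of Nova:

-----------------------------
import os

# Path Nova constructed from the WWID reported by find_multipath_device.
expected = '/dev/mapper/360002ac000000000000000de00003b48'
print('expected path exists:', os.path.exists(expected))

# With user_friendly_names enabled, multipath creates alias nodes such as
# /dev/mapper/mpatha instead of WWID-named nodes, so list what is there.
for name in sorted(os.listdir('/dev/mapper')):
    print(os.path.join('/dev/mapper', name))
-----------------------------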

Comment 24 Lee Yarwood 2015-04-01 17:07:26 UTC
(In reply to Kurt Martin from comment #23)
> So, the question remains why is
> /dev/mapper/360002ac000000000000000de00003b48 file not present on this RHEL7
> system?

Hello Kurt, 

This appears to be simply due to the use of user_friendly_names (UFN) by multipath in the environment. 

The logic in Nova's find_multipath_device that parses the output of `multipath -l /dev/${path}` returned the `/dev/mapper/${WWID}` path above even though UFN is in use, so that path is not valid. In this case we would expect a `/dev/mapper/mpath${n}` path instead, for example.

Disabling UFN will work around this for now while we correct the logic in find_multipath_device.
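
To illustrate the parsing problem in principle (a minimal sketch, not the actual Nova code or the upstream patch): the first line of `multipath -l` output begins with the map name, which is the WWID when UFN is disabled but an mpathN-style alias when UFN is enabled, so the /dev/mapper path should be built from that name rather than assumed to be the WWID.

-----------------------------
def multipath_device_path(multipath_l_output):
    """Return /dev/mapper/<name> for the map described by `multipath -l` output.

    Minimal sketch only. The first output line looks like either
        360002ac000000000000000de00003b48 dm-2 3PARdata,VV
    with user_friendly_names disabled, or
        mpatha (360002ac000000000000000de00003b48) dm-2 3PARdata,VV
    with user_friendly_names enabled. In both cases the first token is the
    device-mapper map name that actually exists under /dev/mapper/.
    """
    first_line = multipath_l_output.splitlines()[0]
    name = first_line.split()[0]
    return '/dev/mapper/%s' % name


# With UFN enabled the valid node is the alias, not the WWID:
ufn_output = ("mpatha (360002ac000000000000000de00003b48) dm-2 3PARdata,VV\n"
              "size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw\n")
print(multipath_device_path(ufn_output))  # -> /dev/mapper/mpatha
-----------------------------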

Comment 25 Lee Yarwood 2015-04-02 10:34:14 UTC
I've pushed a change upstream to Nova that should work around this; please feel free to add comments, concerns, etc. there.

[1] https://review.openstack.org/#/c/169873/

Comment 32 nlevinki 2016-01-07 10:25:08 UTC
The code is in and automation passed.

Comment 34 errata-xmlrpc 2016-01-07 20:48:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-0013.html

