Bug 1432315 - [Backport Request] Fix wrong path used in iscsi "multipath -l"
Summary: [Backport Request] Fix wrong path used in iscsi "multipath -l"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-os-brick
Version: 9.0 (Mitaka)
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 9.0 (Mitaka)
Assignee: Gorka Eguileor
QA Contact: Avi Avraham
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-03-15 05:29 UTC by Shinobu KINJO
Modified: 2020-12-14 08:20 UTC
CC: 17 users

Fixed In Version: python-os-brick-1.1.0-5.el7ost
Doc Type: Bug Fix
Doc Text:
Cause: When a multipath is not found and a single path is used instead, we have a symlink instead of a real device, which is not recognized by the multipath CLI.
Consequence: We get an unexpected error: "can't get udev device".
Fix: Always use the real path instead of the symlink.
Result: We no longer get the unexpected error. (A minimal illustrative sketch of this approach appears below the Links table.)
Clone Of:
Environment:
Last Closed: 2017-06-19 14:48:32 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:1502 0 normal SHIPPED_LIVE Red Hat OpenStack Platform 9 Bug Fix and Enhancement Advisory 2017-06-19 18:46:17 UTC
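
To illustrate the fix described in the Doc Text above, here is a minimal sketch, not the actual os-brick code; the helper name find_multipath_device is hypothetical. It shows the general idea of resolving a udev symlink to the real device before invoking the multipath CLI:

import os
import subprocess

def find_multipath_device(device_path):
    # 'multipath -l' does not recognize udev symlinks such as
    # /dev/disk/by-path/..., so resolve the symlink to the underlying
    # real device (e.g. /dev/sdb) before calling the CLI.
    real_path = os.path.realpath(device_path)
    out = subprocess.check_output(['multipath', '-l', real_path])
    return out.decode()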

Comment 8 Paul Grist 2017-03-30 01:09:51 UTC
In further testing last week, some key additional fixes were identified for this collection, and Gorka is in the process of getting them ready to post upstream. We don't have a specific ETA, but we will get the BZs updated once the patches are ready.

Comment 12 lkuchlan 2017-06-14 14:25:26 UTC
Tested using:
python-os-brick-1.1.0-5.el7ost.noarch

Verification flow:

* Configured "iscsi_use_multipath=True" in nova.conf on the compute node (see the example below)
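
For reference, a minimal example of this setting; on this release it normally lives in the [libvirt] section of /etc/nova/nova.conf on the compute node, followed by a restart of the nova-compute service (exact placement may vary by deployment):

[libvirt]
iscsi_use_multipath = True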

[stack@undercloud-0 ~]$ nova list
+--------------------------------------+------+--------+------------+-------------+-------------------+
| ID                                   | Name | Status | Task State | Power State | Networks          |
+--------------------------------------+------+--------+------------+-------------+-------------------+
| 44a831bd-78d2-40e0-a92c-12cf22920a12 | vm   | ACTIVE | -          | Running     | public=10.0.0.214 |
+--------------------------------------+------+--------+------------+-------------+-------------------+

[stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| e6c2fe2b-4954-4d65-ab95-80ec3b12c425 | available |  -   |  1   |      -      |  false   |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

* Attempt to attach the volume to the instance

[stack@undercloud-0 ~]$ nova volume-attach 44a831bd-78d2-40e0-a92c-12cf22920a12 e6c2fe2b-4954-4d65-ab95-80ec3b12c425
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | e6c2fe2b-4954-4d65-ab95-80ec3b12c425 |
| serverId | 44a831bd-78d2-40e0-a92c-12cf22920a12 |
| volumeId | e6c2fe2b-4954-4d65-ab95-80ec3b12c425 |
+----------+--------------------------------------+

Result:
=======

* The volume is not attached to the instance and its status remains "available"

[stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| e6c2fe2b-4954-4d65-ab95-80ec3b12c425 | available |  -   |  1   |      -      |  false   |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
 
From nova-compute.log:
----------------------

2017-06-14 14:23:55.073 2549 ERROR oslo_messaging.rpc.dispatcher ProcessExecutionError: Unexpected error while running command.
2017-06-14 14:23:55.073 2549 ERROR oslo_messaging.rpc.dispatcher Command: sudo nova-rootwrap /etc/nova/rootwrap.conf multipathd show status
2017-06-14 14:23:55.073 2549 ERROR oslo_messaging.rpc.dispatcher Exit code: 1
2017-06-14 14:23:55.073 2549 ERROR oslo_messaging.rpc.dispatcher Stdout: u''
2017-06-14 14:23:55.073 2549 ERROR oslo_messaging.rpc.dispatcher Stderr: u'ux_socket_connect: Connection refused\n'
2017-06-14 14:23:55.073 2549 ERROR oslo_messaging.rpc.dispatcher
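
The "ux_socket_connect: Connection refused" from "multipathd show status" usually means the multipathd daemon is not running on the compute node, i.e. an environment problem rather than the symlink bug itself. A minimal sketch of that kind of pre-check (hypothetical helper, not the actual os-brick code), assuming the multipathd CLI is installed:

import subprocess

def multipathd_is_running():
    # 'multipathd show status' fails (e.g. "ux_socket_connect: Connection
    # refused" on its unix socket) when the daemon is not running.
    try:
        subprocess.check_output(['multipathd', 'show', 'status'],
                                stderr=subprocess.STDOUT)
        return True
    except (subprocess.CalledProcessError, OSError):
        return False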

Comment 13 lkuchlan 2017-06-14 14:26:52 UTC
Hi Gorka,
Please review the verification flow

Comment 18 Avi Avraham 2017-06-15 19:54:53 UTC
The fix was verified

#rpm -q python-os-brick
python-os-brick-1.1.0-5.el7ost.noarch

A multipath setup was configured.
Manual regression tests were performed.

Comment 20 errata-xmlrpc 2017-06-19 14:48:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1502

Comment 21 Gorka Eguileor 2017-06-21 12:24:06 UTC
*** Bug 1462346 has been marked as a duplicate of this bug. ***

