Bug 1648337 - Multiattach does not work with LVM+LIO target
Summary: Multiattach does not work with LVM+LIO target
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 15.0 (Stein)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: Upstream M2
Target Release: 15.0 (Stein)
Assignee: Lee Yarwood
QA Contact: Tzach Shefi
Docs Contact: Kim Nylander
URL:
Whiteboard:
Depends On:
Blocks: 1650426 1650429
 
Reported: 2018-11-09 12:52 UTC by Lee Yarwood
Modified: 2019-09-26 10:46 UTC
CC List: 2 users

Fixed In Version: openstack-cinder-14.0.1-0.20190607000407.23d1a72.el8ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1650426
Environment:
Last Closed: 2019-09-21 11:19:23 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1786327 0 None None None 2018-11-09 12:52:29 UTC
OpenStack gerrit 616212 0 None MERGED lvm: Avoid premature calls to terminate_connection for muiltiattach vols 2020-05-21 21:46:56 UTC
Red Hat Product Errata RHEA-2019:2811 0 None None None 2019-09-21 11:19:49 UTC

Description Lee Yarwood 2018-11-09 12:52:30 UTC
Description of problem:

As documented in the upstream bug below, the lioadm target driver for the LVM volume driver does not currently support volume multiattach: a bug causes terminate_connection to be called prematurely when multiple attachments are active from a single host.

multiattach does not work with LVM+LIO target
https://bugs.launchpad.net/cinder/+bug/1786327
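The fix merged upstream (OpenStack gerrit 616212) guards terminate_connection so the target ACL is only torn down once the last attachment from a given host goes away. A minimal Python sketch of that guard, assuming a plain list of connector dicts; the names and data shapes are illustrative, not Cinder's actual internals:

def is_last_attachment_for_host(attachments, connector):
    # Count active attachments whose connector reports the same
    # initiator IQN (i.e. the same compute host) as the one detaching.
    initiator = connector.get('initiator')
    same_host = [a for a in attachments if a.get('initiator') == initiator]
    # Tear down the host's ACL only when this is the final attachment.
    return len(same_host) <= 1

# Example: two attachments from compute-0, one being detached.
attachments = [{'initiator': 'iqn.1994-05.com.redhat:compute-0'},
               {'initiator': 'iqn.1994-05.com.redhat:compute-0'}]
print(is_last_attachment_for_host(attachments, attachments[0]))  # False: keep the ACL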

Version-Release number of selected component (if applicable):
15 (Stein) / 14 (Rocky)

How reproducible:
Always

Steps to Reproduce:
1. Using the LVM volume driver and the lioadm target driver, attach a volume to multiple instances on the same host (a rough CLI sketch follows these steps).
2. Detach the volume from one of the instances.
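For reference, a rough reproduction sequence using the same CLIs as the verification below; the volume type name, size, and <volume-id> placeholder are illustrative:

$ cinder type-create LIO-MA
$ cinder type-key LIO-MA set multiattach="<is> True"
$ cinder create --volume-type LIO-MA 1
$ nova volume-attach inst1 <volume-id>
$ nova volume-attach inst3 <volume-id>    # inst1 and inst3 on the same compute host
$ nova volume-detach inst1 <volume-id>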

Actual results:
The underlying ACL for the compute host is removed while an instance on that host is still attempting to use the volume.

Expected results:
The underlying ACL for the compute host is not removed and the remaining instance can still access the volume.
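One way to observe the difference (assuming shell access to the node hosting the LVM/LIO backend) is to list the target's ACLs before and after the first detach:

$ targetcli ls /iscsi

With the bug present, the ACL for the compute host's initiator (e.g. iqn.1994-05.com.redhat:compute-0, an illustrative IQN) vanishes after the first detach even though another instance on that host still uses the volume; with the fix, it persists until the last attachment from that host is removed.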

Additional info:

Comment 6 Tzach Shefi 2019-07-04 13:25:30 UTC
Verified on:
openstack-cinder-14.0.1-0.20190607000407.23d1a72.el8ost.noarch

Changed the target from tgt to LIO:
In cinder.conf, changed target_helper = tgtadm to target_helper = lioadm (snippet below).
Restarted docker.
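For reference, the relevant cinder.conf options end up looking roughly like this (the backend section name is illustrative):

[tripleo_iscsi]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
target_helper = lioadm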


Created a multiattach LVM (LIO) backed volume and attached it to 3 instances:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+----------------------------------------------------------------------------------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                                                                                                    |
+--------------------------------------+--------+------+------+-------------+----------+----------------------------------------------------------------------------------------------------------------+
| f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a | in-use | -    | 1    | LIO-MA      | false    | 85212e47-c683-4fb1-bceb-4cc573df6436,83a0071a-69f7-4019-9d28-79f49e3629fd,d75265fa-aea5-41c4-9aec-6b38725490ad |
+--------------------------------------+--------+------+------+-------------+----------+----------------------------------------------------------------------------------------------------------------+

inst1 and inst3 are on the same host:

(overcloud) [stack@undercloud-0 ~]$ nova show inst1
+--------------------------------------+----------------------------------------------------------------------------------+
| Property                             | Value                                                                            |
+--------------------------------------+----------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                           |
| OS-EXT-AZ:availability_zone          | nova                                                                             |
| OS-EXT-SRV-ATTR:host                 | compute-0.localdomain                                                            |
| OS-EXT-SRV-ATTR:hostname             | inst1                                                                            |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute-0.localdomain                                                            |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000005                                                                |

| id                                   | d75265fa-aea5-41c4-9aec-6b38725490ad 


(overcloud) [stack@undercloud-0 ~]$ nova show inst3
+--------------------------------------+----------------------------------------------------------------------------------+
| Property                             | Value                                                                            |
+--------------------------------------+----------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                           |
| OS-EXT-AZ:availability_zone          | nova                                                                             |
| OS-EXT-SRV-ATTR:host                 | compute-0.localdomain                                                            |
| OS-EXT-SRV-ATTR:hostname             | inst3                                                                            |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute-0.localdomain                                                            |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000b                                                                |

| id                                   | 85212e47-c683-4fb1-bceb-4cc573df6436     


We have a multiattach LVM (LIO) volume attached to three instances, two of which reside on the same compute-0 host.

Now let's try to detach the volume from inst1:
(overcloud) [stack@undercloud-0 ~]$ nova volume-detach inst1 f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+---------------------------------------------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                                                               |
+--------------------------------------+--------+------+------+-------------+----------+---------------------------------------------------------------------------+
| f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a | in-use | -    | 1    | LIO-MA      | false    | 85212e47-c683-4fb1-bceb-4cc573df6436,83a0071a-69f7-4019-9d28-79f49e3629fd |
+--------------------------------------+--------+------+------+-------------+----------+---------------------------------------------------------------------------+

We managed to detach the volume from inst1; only two instances remain attached.

Now let's try to detach from inst3 (the other instance on the same compute-0 host):

(overcloud) [stack@undercloud-0 ~]$ nova volume-detach inst3 f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a | in-use | -    | 1    | LIO-MA      | false    | 83a0071a-69f7-4019-9d28-79f49e3629fd |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+


This also worked: we managed to detach the multiattach LVM (LIO) volume from both instances running on the same compute-0 host.
Okay to verify!

Let's also detach from the third instance, just to clean up :)
This last detach also worked fine:

(overcloud) [stack@undercloud-0 ~]$ nova volume-detach inst2 f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a | available | -    | 1    | LIO-MA      | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

Good to verify.

Comment 10 errata-xmlrpc 2019-09-21 11:19:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811

