Bug 1648337

Summary: Multiattach does not work with LVM+LIO target
Product: Red Hat OpenStack
Reporter: Lee Yarwood <lyarwood>
Component: openstack-cinder
Assignee: Lee Yarwood <lyarwood>
Status: CLOSED ERRATA
QA Contact: Tzach Shefi <tshefi>
Severity: urgent
Docs Contact: Kim Nylander <knylande>
Priority: urgent
Version: 15.0 (Stein)
CC: tenobreg, tshefi
Target Milestone: Upstream M2
Keywords: Triaged
Target Release: 15.0 (Stein)
Hardware: x86_64
OS: Linux
Fixed In Version: openstack-cinder-14.0.1-0.20190607000407.23d1a72.el8ost
Doc Type: No Doc Update
Last Closed: 2019-09-21 11:19:23 UTC
Type: Bug
Clones: 1650426 (view as bug list)
Bug Blocks: 1650426, 1650429

Description Lee Yarwood 2018-11-09 12:52:30 UTC
Description of problem:

As documented in the following upstream bug, the lioadm target driver used with the LVM volume driver does not currently support volume multiattach: a bug causes terminate_connection to be called prematurely when multiple attachments are active from a single host.

multiattach does not work with LVM+LIO target
https://bugs.launchpad.net/cinder/+bug/1786327
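The accounting the fix needs can be illustrated with a minimal sketch. This is not the actual Cinder code; the function and field names are hypothetical. The point is that the host's iSCSI ACL may only be removed once no other attachment of the volume from the same host remains:

```python
# Hypothetical sketch (not the real Cinder implementation) of the
# attachment accounting the lioadm target driver was missing.

def should_remove_host_acl(attachments, detaching_id, host):
    """Return True only if removing attachment `detaching_id` leaves no
    other active attachment of this volume from `host`."""
    remaining = [
        a for a in attachments
        if a["id"] != detaching_id and a["host"] == host
    ]
    return not remaining

# Two attachments from the same host: detaching one must keep the ACL.
attachments = [
    {"id": "a1", "host": "compute-0"},
    {"id": "a2", "host": "compute-0"},
    {"id": "a3", "host": "compute-1"},
]
print(should_remove_host_acl(attachments, "a1", "compute-0"))  # False: a2 still uses the ACL
print(should_remove_host_acl(attachments, "a3", "compute-1"))  # True: last attachment from compute-1
```

The bug was effectively the first case behaving like the second: the ACL was torn down while another attachment on the host still needed it.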

Version-Release number of selected component (if applicable):
15(S) / 14(R)

How reproducible:
Always

Steps to Reproduce:
1. Using the LVM volume driver and lioadm target driver attach a volume to multiple instances on the same host.
2. Detach the volume from one of the instances.
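The steps above can be sketched with the openstack CLI; the volume type, volume, and instance names are illustrative, and the instances are assumed to be scheduled on the same compute host:

```shell
# 1. Create a multiattach volume type and a volume of that type.
openstack volume type create --property multiattach="<is> True" LIO-MA
openstack volume create --type LIO-MA --size 1 ma-vol

# 2. Attach the volume to two instances on the same compute host.
openstack server add volume inst1 ma-vol
openstack server add volume inst3 ma-vol

# 3. Detach from one instance; before the fix this removed the host's
#    ACL and broke I/O for the attachment that remained.
openstack server remove volume inst1 ma-vol
```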

Actual results:
The underlying ACL for the compute host is removed while an instance is still attempting to use it on said host.

Expected results:
The underlying ACL for the compute host is not removed and the remaining instance can still access the volume.

Additional info:

Comment 6 Tzach Shefi 2019-07-04 13:25:30 UTC
Verified on:
openstack-cinder-14.0.1-0.20190607000407.23d1a72.el8ost.noarch

Changed the target from tgt to LIO:
In cinder.conf, changed target_helper = tgtadm -> target_helper = lioadm
Restarted the docker container.
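For reference, the change corresponds to a cinder.conf fragment like the following; the backend section name (`[lvm]` here) varies per deployment and is illustrative:

```ini
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
# Switch the iSCSI target helper from tgtadm to lioadm:
target_helper = lioadm
```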


Created a multiattach (MA) LVM/LIO-backed volume and attached it to three instances:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+----------------------------------------------------------------------------------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                                                                                                    |
+--------------------------------------+--------+------+------+-------------+----------+----------------------------------------------------------------------------------------------------------------+
| f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a | in-use | -    | 1    | LIO-MA      | false    | 85212e47-c683-4fb1-bceb-4cc573df6436,83a0071a-69f7-4019-9d28-79f49e3629fd,d75265fa-aea5-41c4-9aec-6b38725490ad |
+--------------------------------------+--------+------+------+-------------+----------+----------------------------------------------------------------------------------------------------------------+

Inst1 and inst3 are on the same host:

(overcloud) [stack@undercloud-0 ~]$ nova show inst1
+--------------------------------------+----------------------------------------------------------------------------------+
| Property                             | Value                                                                            |
+--------------------------------------+----------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                           |
| OS-EXT-AZ:availability_zone          | nova                                                                             |
| OS-EXT-SRV-ATTR:host                 | compute-0.localdomain                                                            |
| OS-EXT-SRV-ATTR:hostname             | inst1                                                                            |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute-0.localdomain                                                            |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000005                                                                |

| id                                   | d75265fa-aea5-41c4-9aec-6b38725490ad 


(overcloud) [stack@undercloud-0 ~]$ nova show inst3
+--------------------------------------+----------------------------------------------------------------------------------+
| Property                             | Value                                                                            |
+--------------------------------------+----------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                           |
| OS-EXT-AZ:availability_zone          | nova                                                                             |
| OS-EXT-SRV-ATTR:host                 | compute-0.localdomain                                                            |
| OS-EXT-SRV-ATTR:hostname             | inst3                                                                            |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute-0.localdomain                                                            |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000000b                                                                |

| id                                   | 85212e47-c683-4fb1-bceb-4cc573df6436     


We have a MA LVM/LIO volume attached to three instances, two of which reside on the same host (compute-0).

Now let's try to detach the volume from inst1:
(overcloud) [stack@undercloud-0 ~]$ nova volume-detach inst1 f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+---------------------------------------------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                                                               |
+--------------------------------------+--------+------+------+-------------+----------+---------------------------------------------------------------------------+
| f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a | in-use | -    | 1    | LIO-MA      | false    | 85212e47-c683-4fb1-bceb-4cc573df6436,83a0071a-69f7-4019-9d28-79f49e3629fd |
+--------------------------------------+--------+------+------+-------------+----------+---------------------------------------------------------------------------+

We managed to detach the volume from inst1; only two instances remain attached.

Now let's detach the volume from inst3 (the other instance on the same compute-0 host):

(overcloud) [stack@undercloud-0 ~]$ nova volume-detach inst3 f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a | in-use | -    | 1    | LIO-MA      | false    | 83a0071a-69f7-4019-9d28-79f49e3629fd |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+


This also worked: we managed to detach the MA LVM/LIO volume from both instances running on the same compute-0 host.
Okay to verify!

Let's also detach the volume from the third instance, just to clean up :)
This last detach also worked fine:

(overcloud) [stack@undercloud-0 ~]$ nova volume-detach inst2 f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| f6612ed6-9ae5-47cd-8e9f-beb2aa921b1a | available | -    | 1    | LIO-MA      | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

Good to verify.

Comment 10 errata-xmlrpc 2019-09-21 11:19:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811