Bug 1650426 - [OSP14]Multiattach does not work with LVM+LIO target
Summary: [OSP14]Multiattach does not work with LVM+LIO target
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 14.0 (Rocky)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 14.0 (Rocky)
Assignee: Lee Yarwood
QA Contact: Avi Avraham
Docs Contact: Kim Nylander
URL:
Whiteboard:
Depends On: 1648337
Blocks: 1650429
 
Reported: 2018-11-16 07:00 UTC by Lee Yarwood
Modified: 2019-09-18 11:31 UTC
CC List: 6 users

Fixed In Version: openstack-cinder-13.0.3-0.20190118014305.44c5314.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1648337
: 1650429
Environment:
Last Closed: 2019-03-18 12:56:24 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Launchpad 1786327 0 None None None 2018-11-16 07:00:04 UTC
OpenStack gerrit 618472 0 None MERGED lvm: Avoid premature calls to terminate_connection for muiltiattach vols 2020-05-21 03:37:42 UTC
Red Hat Product Errata RHBA-2019:0586 0 None None None 2019-03-18 12:56:29 UTC

Description Lee Yarwood 2018-11-16 07:00:04 UTC
+++ This bug was initially created as a clone of Bug #1648337 +++

Description of problem:

As documented in the following upstream bug, the lioadm target driver used with the LVM volume driver does not currently support volume multiattach: a bug causes terminate_connection to be called prematurely when multiple attachments are active from a single host.

multiattach does not work with LVM+LIO target
https://bugs.launchpad.net/cinder/+bug/1786327

Version-Release number of selected component (if applicable):
15 (Stein) / 14 (Rocky)

How reproducible:
Always

Steps to Reproduce:
1. Using the LVM volume driver and the lioadm target driver, attach a multiattach volume to multiple instances on the same compute host.
2. Detach the volume from one of the instances (see the command sketch below).
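
For reference, a minimal command sketch of the above (assuming a volume type named multiattach with multiattach="<is> True" already exists; IDs are placeholders):

#cinder create 1 --volume-type multiattach --name mattachvol
#nova volume-attach <instance-1-id> <volume-id> auto
#nova volume-attach <instance-2-id> <volume-id> auto
#nova volume-detach <instance-2-id> <volume-id>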

Actual results:
The underlying ACL for the compute host is removed while an instance on that host is still using the volume.

Expected results:
The underlying ACL for the compute host is not removed and the remaining instance can still access the volume.

Additional info:
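
One way to observe the ACL state is on the node running cinder-volume (a sketch; targetcli availability, the target IQN and paths depend on the deployment):

#targetcli ls
#ls /sys/kernel/config/target/iscsi/<target_iqn>/tpgt_1/acls/

With the bug present, the initiator ACL for the compute host disappears from this listing on the first detach, even though another attachment from that host is still active.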

Comment 7 Tzach Shefi 2019-03-11 13:57:19 UTC
Verified on:
openstack-cinder-13.0.3-0.20190118014305.44c5314.el7ost.noarch


1. Bring up a system with the LVM backend. In cinder.conf
(/var/lib/config-data/puppet-generated/cinder/etc/cinder/cinder.conf) set:
target_helper=lioadm
and install the LIO python bindings:
yum install python-rtslib
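
For context, the backend section in cinder.conf should end up looking roughly like this (section name taken from the os-vol-host-attr:host value shown in step 4; exact contents depend on the deployment):

[tripleo_iscsi]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
target_helper = lioadm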

2. Restart docker so the cinder-volume container picks up the new target_helper.
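
Restarting only the cinder-volume container should also be enough (the container name is deployment specific):

#docker ps | grep cinder
#docker restart <cinder_volume_container>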


3. Create a multiattach volume type:
#cinder type-create multiattach
#cinder type-key multiattach set multiattach="<is> True"

4. Create a multiattach volume 
#cinder create 1 --volume-type multiattach --name mattachvol
..
+--------------------------------+---------------------------------------+
| Property                       | Value                                 |
+--------------------------------+---------------------------------------+
| attachments                    | []                                    |
| availability_zone              | nova                                  |
| bootable                       | false                                 |
| consistencygroup_id            | None                                  |
| created_at                     | 2019-03-11T12:51:45.000000            |
| description                    | None                                  |
| encrypted                      | False                                 |
| id                             | 17888cba-a5b3-422b-ab4f-db3dd64549de  |
| metadata                       | {}                                    |
| migration_status               | None                                  |
| multiattach                    | True                                  |
| name                           | mattachvol                            |
| os-vol-host-attr:host          | hostgroup@tripleo_iscsi#tripleo_iscsi |
| os-vol-mig-status-attr:migstat | None                                  |
| os-vol-mig-status-attr:name_id | None                                  |
| os-vol-tenant-attr:tenant_id   | 9d6a66636fc34e3691541c56e6218ce6      |
| replication_status             | None                                  |
| size                           | 1                                     |
| snapshot_id                    | None                                  |
| source_volid                   | None                                  |
| status                         | available                             |
| updated_at                     | 2019-03-11T12:51:46.000000            |
| user_id                        | cc053b0dcf134eaaaf54e6eb24f9208d      |
| volume_type                    | multiattach                           |
+--------------------------------+---------------------------------------+


5. Boot two instances:
#nova list
..
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                          |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| 92993c99-a5f5-4037-a40d-85e3feddf0ca | inst1 | ACTIVE | -          | Running     | internal=192.168.0.25, 10.0.0.238 |
| a22303f0-5455-416e-8207-c044a01ef3b5 | inst2 | ACTIVE | -          | Running     | internal=192.168.0.13, 10.0.0.220 |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
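
For reference, the instances were booted with something along these lines (flavor, image and network values are placeholders, not taken from this environment):

#nova boot --flavor m1.tiny --image cirros --nic net-id=<internal_net_id> inst1
#nova boot --flavor m1.tiny --image cirros --nic net-id=<internal_net_id> inst2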


6. Attach the same volume to both instances:

#nova volume-attach 92993c99-a5f5-4037-a40d-85e3feddf0ca 17888cba-a5b3-422b-ab4f-db3dd64549de auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 17888cba-a5b3-422b-ab4f-db3dd64549de |
| serverId | 92993c99-a5f5-4037-a40d-85e3feddf0ca |
| volumeId | 17888cba-a5b3-422b-ab4f-db3dd64549de |
+----------+--------------------------------------+

nova volume-attach a22303f0-5455-416e-8207-c044a01ef3b5 17888cba-a5b3-422b-ab4f-db3dd64549de auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 17888cba-a5b3-422b-ab4f-db3dd64549de |
| serverId | a22303f0-5455-416e-8207-c044a01ef3b5 |
| volumeId | 17888cba-a5b3-422b-ab4f-db3dd64549de |
+----------+--------------------------------------+

7. Migrate one of the servers so both end up on the same compute host (the instances were not initially scheduled to the same compute):
#nova migrate 92993c99-a5f5-4037-a40d-85e3feddf0ca --poll
#nova resize-confirm 92993c99-a5f5-4037-a40d-85e3feddf0ca
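
Note that nova migrate leaves the destination choice to the scheduler. If the migrated instance does not land on the desired compute, one alternative (not used here) is a live migration with an explicit destination, e.g. (--block-migrate may be needed without shared instance storage):

#nova live-migration 92993c99-a5f5-4037-a40d-85e3feddf0ca compute-0.localdomain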

8. Verify that both instances run on the same compute host:
nova show 92993c99-a5f5-4037-a40d-85e3feddf0ca | grep host
| OS-EXT-SRV-ATTR:host                 | compute-0.localdomain

nova show a22303f0-5455-416e-8207-c044a01ef3b5  | grep host
| OS-EXT-SRV-ATTR:host                 | compute-0.localdomain 


9. Recheck cinder list to confirm the volume is still attached to both instances:
#cinder list
+--------------------------------------+--------+------------+------+-------------+----------+---------------------------------------------------------------------------+
| ID                                   | Status | Name       | Size | Volume Type | Bootable | Attached to                                                               |
+--------------------------------------+--------+------------+------+-------------+----------+---------------------------------------------------------------------------+
| 17888cba-a5b3-422b-ab4f-db3dd64549de | in-use | mattachvol | 1    | multiattach | false    | 92993c99-a5f5-4037-a40d-85e3feddf0ca,a22303f0-5455-416e-8207-c044a01ef3b5 |
+--------------------------------------+--------+------------+------+-------------+----------+---------------------------------------------------------------------------+

10. Open a console to both instances.
From the first instance, mount the device and create a file on it.
On the second instance, mount the same device and check for the file.
Ignore the input/output errors; ext4 is not a cluster filesystem and is mounted from two instances at once.
The file is visible from the second instance, so the volume is attached to both (see the sketch below).
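
Roughly what was run inside the guests (device name and mount point may differ; the filesystem is created only once, from the first instance):

On inst1:
# mkfs.ext4 /dev/vdb
# mkdir /root/kuku && mount /dev/vdb /root/kuku
# touch /root/kuku/tshefi.txt

On inst2:
# mkdir /root/kuku && mount /dev/vdb /root/kuku
# ls /root/kuku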

11. Detach the volume from the second instance:
#nova volume-detach a22303f0-5455-416e-8207-c044a01ef3b5 17888cba-a5b3-422b-ab4f-db3dd64549de

12. Check the first instance; the volume should remain attached.

Cinder list also shows only one instance attached to the volume:
+--------------------------------------+--------+------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name       | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+------------+------+-------------+----------+--------------------------------------+
| 17888cba-a5b3-422b-ab4f-db3dd64549de | in-use | mattachvol | 1    | multiattach | false    | 92993c99-a5f5-4037-a40d-85e3feddf0ca |
+--------------------------------------+--------+------------+------+-------------+----------+--------------------------------------+

As expected, the first instance (929..) still has access to the volume and my test file is visible:

# ls kuku/
lost+found  tshefi.txt
# lsblk
NAME   MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
vda    253:0    0      1G  0 disk 
`-vda1 253:1    0 1011.9M  0 part /
vdb    253:16   0      1G  0 disk /root/kuku


Whereas on the second instance the volume is gone:
# lsblk
NAME   MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
vda    253:0    0      1G  0 disk 
`-vda1 253:1    0 1011.9M  0 part /
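
As an extra sanity check (assuming access to the compute node), the iSCSI session from compute-0 to the target should still be present after the detach, since the first instance is still using the volume:

#iscsiadm -m session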



Verified, looks fine.

Comment 11 errata-xmlrpc 2019-03-18 12:56:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0586

