Bug 1650429 - [OSP13]Multiattach does not work with LVM+LIO target
Summary: [OSP13]Multiattach does not work with LVM+LIO target
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 13.0 (Queens)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: async
Target Release: 13.0 (Queens)
Assignee: Lee Yarwood
QA Contact: Avi Avraham
Docs Contact: Kim Nylander
URL:
Whiteboard:
Depends On: 1648337 1650426
Blocks:
 
Reported: 2018-11-16 07:05 UTC by Lee Yarwood
Modified: 2019-09-18 11:31 UTC (History)
6 users

Fixed In Version: openstack-cinder-12.0.4-8.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1650426
Environment:
Last Closed: 2019-03-15 10:33:57 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Launchpad 1786327 None None None 2018-11-16 07:05:41 UTC
OpenStack gerrit 618473 None stable/queens: MERGED cinder: lvm: Avoid premature calls to terminate_connection for muiltiattach vols (Ib5aa1b7578f7d3200185566ff5f8634dd519d... 2019-03-07 20:18:59 UTC

Description Lee Yarwood 2018-11-16 07:05:42 UTC
+++ This bug was initially created as a clone of Bug #1650426 +++

+++ This bug was initially created as a clone of Bug #1648337 +++

Description of problem:

As documented in the following upstream bug the lioadm target driver for the lvm volume driver currently does not support volume multiattach due to a bug causing terminate_connection to be called prematurely when multiple attachments are active from a single host.

multiattach does not work with LVM+LIO target
https://bugs.launchpad.net/cinder/+bug/1786327
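The failure mode is that the target driver tears down the compute host's ACL on the first detach, even though another attachment from the same host is still active. A minimal Python sketch of the guard the fix needs (hypothetical names, not the actual cinder code) is:

```python
# Hedged sketch of the multiattach detach guard: only call
# terminate_connection (which removes the host's target ACL) when the
# detaching attachment is the LAST one from that host. Names here are
# illustrative, not cinder's real classes.

class Volume:
    def __init__(self):
        self.attachments = []  # hosts with active attachments

    def attach(self, host):
        self.attachments.append(host)

    def detach(self, host, terminate_connection):
        self.attachments.remove(host)
        # The premature-termination bug: calling terminate_connection
        # unconditionally here drops the ACL while another attachment
        # from the same host is still in use.
        if host not in self.attachments:
            terminate_connection(host)


removed_acls = []
vol = Volume()
vol.attach("compute-0")
vol.attach("compute-0")                  # second instance, same host
vol.detach("compute-0", removed_acls.append)
# ACL must survive: one attachment from compute-0 remains.
vol.detach("compute-0", removed_acls.append)
# Only now is the last attachment gone and the ACL removed, exactly once.
```

With the buggy behavior (no `if` guard), `removed_acls` would already contain `compute-0` after the first detach, which is the ACL removal the reproduction steps below observe.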

Version-Release number of selected component (if applicable):
15(S) / 14(R)

How reproducible:
Always

Steps to Reproduce:
1. Using the LVM volume driver and lioadm target driver attach a volume to multiple instances on the same host.
2. Detach the volume from one of the instances.

Actual results:
The underlying ACL for the compute host is removed while an instance is still attempting to use it on said host.

Expected results:
The underlying ACL for the compute host is not removed and the remaining instance can still access the volume.

Additional info:

Comment 1 Keigo Noha 2018-12-06 00:58:16 UTC
Hello Lee-san,

The upstream queens branch merged the fix. Could you please proceed with the backport process into RHOSP13?

Best Regards,
Keigo Noha

Comment 2 Lee Yarwood 2018-12-19 11:07:00 UTC
(In reply to Keigo Noha from comment #1)
> Hello Lee-san,
> 
> The upstream queens branch merged the fix. Could you please proceed with the
> backport process into RHOSP13?
> 
> Best Regards,
> Keigo Noha

Shortly, once OSP 14 is released and we can cherry-pick the upstream fix downstream.

Comment 7 Tzach Shefi 2019-03-12 07:14:41 UTC
Verified on:
openstack-cinder-12.0.4-8.el7ost

1. Bring up a system with the LVM backend; in cinder.conf set
target_helper = lioadm
Check for/install python-rtslib, then restart the c-vol docker container.
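For reference, the cinder.conf change in step 1 amounts to the fragment below (the backend section name varies by deployment and is shown as `[lvm]` purely for illustration; only the option and value come from the steps above):

```ini
# Assumed backend section name; match your enabled_backends entry.
[lvm]
target_helper = lioadm
```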

2. Create multiattach type
cinder type-create multiattach
cinder type-key multiattach set multiattach="<is> True"

3. Create a multiattach volume
openstack volume create multiattachvol --size 1 --type multiattach
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2019-03-12T06:43:27.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 4868362c-1497-4465-bd40-4dfa1f5f040f |
| migration_status    | None                                 |
| multiattach         | True                                 |
| name                | multiattachvol                       |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | multiattach                          |
| updated_at          | None                                 |
| user_id             | 393dccc016254e34ab3d188b840de10a     |
+---------------------+--------------------------------------+



4. Boot two instances on the same host:
for i in $(openstack server list | awk '{print$2}'| grep -v ID); do openstack server show $i | grep -e name -e hostname -e status; done
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-0.localdomain                                    |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000005                                        |
| key_name                            | None                                                     |
| name                                | inst2                                                    |
| security_groups                     | name='inst1-sg'                                          |
| status                              | ACTIVE                                                   |

| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-0.localdomain                                    |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000002                                        |
| key_name                            | None                                                     |
| name                                | inst1                                                    |
| security_groups                     | name='inst1-sg'                                          |
| status                              | ACTIVE                                                   |


5. Attach created volume to both instances:
#nova volume-attach inst1 4868362c-1497-4465-bd40-4dfa1f5f040f auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 4868362c-1497-4465-bd40-4dfa1f5f040f |
| serverId | acb47778-e625-4fb3-bf4f-8a77f75f772e |
| volumeId | 4868362c-1497-4465-bd40-4dfa1f5f040f |
+----------+--------------------------------------+

#nova volume-attach inst2 4868362c-1497-4465-bd40-4dfa1f5f040f auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 4868362c-1497-4465-bd40-4dfa1f5f040f |
| serverId | 14a6c520-7a0a-4e57-83b0-06e9c5160f58 |
| volumeId | 4868362c-1497-4465-bd40-4dfa1f5f040f |
+----------+--------------------------------------+

6. SSH into both instances; on one of them create a filesystem on the volume, mount it, and write some data.

On inst1 ->
# mkfs.ext4 /dev/vdb
# mkdir inst1
# mount /dev/vdb inst1/
# vi inst1/tshef.txt
# ls inst1/
lost+found  tshef.txt

On inst2 ->
# ls isnt2/
ls: isnt2/tshef.txt: Input/output error
lost+found

Good: both instances see the same volume and file.

7. Detach the volume from inst1
#nova volume-detach inst1 4868362c-1497-4465-bd40-4dfa1f5f040f

Check that the volume is only connected to inst2
#openstack volume list
+--------------------------------------+----------------+--------+------+--------------------------------+
| ID                                   | Name           | Status | Size | Attached to                    |
+--------------------------------------+----------------+--------+------+--------------------------------+
| 4868362c-1497-4465-bd40-4dfa1f5f040f | multiattachvol | in-use |    1 | Attached to inst2 on /dev/vdb  |
+--------------------------------------+----------------+--------+------+--------------------------------+

 
8. Back on inst2, we see that the volume is still attached and we still see the file created from inst1 (which is now detached from this volume).
# lsblk
NAME   MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
vda    253:0    0      1G  0 disk 
`-vda1 253:1    0 1011.9M  0 part /
vdb    253:16   0      1G  0 disk /root/isnt2

# ls isnt2/
ls: isnt2/tshef.txt: Input/output error


And inst1 no longer has this device:
# lsblk
NAME   MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
vda    253:0    0      1G  0 disk 
`-vda1 253:1    0 1011.9M  0 part /


We were able (using lioadm) to attach a multiattach volume to two instances residing on the same compute host, then successfully detached the volume from inst1 while it remained connected to inst2 -> good to verify.

Comment 10 Lon Hohberger 2019-03-15 10:33:57 UTC
According to our records, this should be resolved by openstack-cinder-12.0.4-8.el7ost.  This build is available now.

