+++ This bug was initially created as a clone of Bug #1650426 +++
+++ This bug was initially created as a clone of Bug #1648337 +++

Description of problem:
As documented in the upstream bug below, the lioadm target driver for the LVM volume driver currently does not support volume multiattach, due to a bug that causes terminate_connection to be called prematurely when multiple attachments are active from a single host.

multiattach does not work with LVM+LIO target
https://bugs.launchpad.net/cinder/+bug/1786327

Version-Release number of selected component (if applicable):
15 (Stein) / 14 (Rocky)

How reproducible:
Always

Steps to Reproduce:
1. Using the LVM volume driver and the lioadm target driver, attach a volume to multiple instances on the same host.
2. Detach the volume from one of the instances.

Actual results:
The underlying ACL for the compute host is removed while an instance on that host is still attempting to use the volume.

Expected results:
The underlying ACL for the compute host is not removed, and the remaining instance can still access the volume.

Additional info:
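As a side note (not part of the original report), the faulty behavior can be observed directly on the node serving the LIO target. A minimal sketch, assuming Cinder's default target prefix (iqn.2010-10.org.openstack) and the default TPG; <VOLUME_UUID> is a placeholder:

# targetcli ls /iscsi/iqn.2010-10.org.openstack:volume-<VOLUME_UUID>/tpg1/acls

While at least one attachment from a compute host is active, that host's initiator ACL should appear in this listing. With the bug present, detaching just one of two attachments made from the same host removes the ACL and I/O from the remaining attachment starts failing; with the fix, the ACL persists until the last attachment from that host is detached.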
Hello Lee-san,

The fix has been merged on the upstream queens branch. Could you please proceed with the backport process into RHOSP 13?

Best Regards,
Keigo Noha
(In reply to Keigo Noha from comment #1)
> Hello Lee-san,
>
> The fix has been merged on the upstream queens branch. Could you please
> proceed with the backport process into RHOSP 13?
>
> Best Regards,
> Keigo Noha

We will shortly, once OSP 14 is released and we can cherry-pick the upstream fix downstream.
Verified on:
openstack-cinder-12.0.4-8.el7ost

1. Bring up a system with the LVM backend. In cinder.conf, replace/enable target_helper = lioadm, check for / install python-rtslib, and restart the c-vol docker container.

2. Create a multiattach volume type:

cinder type-create multiattach
cinder type-key multiattach set multiattach="<is> True"

3. Create a multiattach volume:

openstack volume create multiattachvol --size 1 --type multiattach
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2019-03-12T06:43:27.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 4868362c-1497-4465-bd40-4dfa1f5f040f |
| migration_status    | None                                 |
| multiattach         | True                                 |
| name                | multiattachvol                       |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | multiattach                          |
| updated_at          | None                                 |
| user_id             | 393dccc016254e34ab3d188b840de10a     |
+---------------------+--------------------------------------+

4. Boot two instances on the same host:

for i in $(openstack server list | awk '{print $2}' | grep -v ID); do openstack server show $i | grep -e name -e hostname -e status; done

| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-0.localdomain |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000005     |
| key_name                            | None                  |
| name                                | inst2                 |
| security_groups                     | name='inst1-sg'       |
| status                              | ACTIVE                |
| OS-EXT-SRV-ATTR:hypervisor_hostname | compute-0.localdomain |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000002     |
| key_name                            | None                  |
| name                                | inst1                 |
| security_groups                     | name='inst1-sg'       |
| status                              | ACTIVE                |

5. Attach the created volume to both instances:

# nova volume-attach inst1 4868362c-1497-4465-bd40-4dfa1f5f040f auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 4868362c-1497-4465-bd40-4dfa1f5f040f |
| serverId | acb47778-e625-4fb3-bf4f-8a77f75f772e |
| volumeId | 4868362c-1497-4465-bd40-4dfa1f5f040f |
+----------+--------------------------------------+

# nova volume-attach inst2 4868362c-1497-4465-bd40-4dfa1f5f040f auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 4868362c-1497-4465-bd40-4dfa1f5f040f |
| serverId | 14a6c520-7a0a-4e57-83b0-06e9c5160f58 |
| volumeId | 4868362c-1497-4465-bd40-4dfa1f5f040f |
+----------+--------------------------------------+

6. ssh into both instances; on one of them, create a filesystem on the volume, mount it, and write some data.

On inst1:
# mkfs.ext4 /dev/vdb
# mkdir inst1
# mount /dev/vdb inst1/
# vi inst1/tshef.txt
# ls inst1/
lost+found  tshef.txt

On inst2 (with the volume mounted at isnt2/):
# ls isnt2/
ls: isnt2/tshef.txt: Input/output error
lost+found

Great, both instances see the same volume/file. (The Input/output error on reading the file contents is expected: ext4 is not a cluster-aware filesystem, so a mount on inst2 does not coherently see writes made from inst1. The point here is only that both instances access the same block device.) An optional sanity check before detaching is sketched in the note below.
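Note (not part of the original run; a hedged sketch): since both attachments come from the same initiator, the compute host is expected to hold a single iSCSI session to the volume's target, shared by both instances. On compute-0:

# iscsiadm -m session | grep 4868362c

This single shared session and ACL is exactly why a premature teardown on the first detach breaks the remaining attachment.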
7. Detach the volume from inst1:

# nova volume-detach inst1 4868362c-1497-4465-bd40-4dfa1f5f040f

Check that the volume is now attached only to inst2:

# openstack volume list
+--------------------------------------+----------------+--------+------+-------------------------------+
| ID                                   | Name           | Status | Size | Attached to                   |
+--------------------------------------+----------------+--------+------+-------------------------------+
| 4868362c-1497-4465-bd40-4dfa1f5f040f | multiattachvol | in-use |    1 | Attached to inst2 on /dev/vdb |
+--------------------------------------+----------------+--------+------+-------------------------------+

8. Back on inst2, the volume is still attached and we still see the file created from inst1 (inst1 is now detached from this volume):

# lsblk
NAME   MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
vda    253:0    0      1G  0 disk
`-vda1 253:1    0 1011.9M  0 part /
vdb    253:16   0      1G  0 disk /root/isnt2
# ls isnt2/
ls: isnt2/tshef.txt: Input/output error
lost+found

And inst1 no longer has this device:

# lsblk
NAME   MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
vda    253:0    0      1G  0 disk
`-vda1 253:1    0 1011.9M  0 part /

We were able (using lioadm) to attach a multiattach volume to two instances residing on the same compute host, then successfully detach the volume from inst1 while the volume remained connected to inst2 -> good to verify.
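Re-running the session check from the note above (again a hedged sketch, not part of the original run) after the detach should still show the session, because inst2's attachment keeps the host's connection and ACL alive:

# iscsiadm -m session | grep 4868362c

Before the fix, this session (and the host's ACL on the target side) was torn down by the first detach, producing exactly the failure described in the problem statement.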
According to our records, this should be resolved by openstack-cinder-12.0.4-8.el7ost. This build is available now.