Bug 1701172
Summary: Detaching second instance from a multiattached LVM volume leaves volume in detaching state

| Field | Value |
|---|---|
| Product | Red Hat OpenStack |
| Component | openstack-cinder |
| Version | 13.0 (Queens) |
| Target Release | 13.0 (Queens) |
| Target Milestone | z7 |
| Status | CLOSED ERRATA |
| Severity | high |
| Priority | high |
| Reporter | Tzach Shefi <tshefi> |
| Assignee | Eric Harney <eharney> |
| QA Contact | Tzach Shefi <tshefi> |
| Docs Contact | Tana <tberry> |
| CC | abishop, dasmith, eharney, jhakimra, jjoyce, kchamart, knoha, lyarwood, mbooth, pgrist, sbauza, sgordon, shdunne |
| Keywords | TestBlocker, Triaged, ZStream |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | openstack-cinder-12.0.7-2.el7ost |
| Last Closed | 2019-07-10 13:00:41 UTC |
| Type | Bug |
| Bug Depends On | 1721361 |
| Bug Blocks | 1624971, 1692542 |
Description (Tzach Shefi, 2019-04-18 09:51:28 UTC)
FYI: limited to OSP13 only. Retested on OSP14/15 LVM; detaching worked flawlessly.

---

(In reply to Tzach Shefi from comment #3)
> FYI,
> Limited to OSP13 only,
> Retested on OSP14/15 LVM, detaching worked flawlessly.

I can actually reproduce this upstream against master. Did you ensure that the mountpoint was different for each instance? I've been attaching another volume to one of the instances to ensure this happens:

$ cinder create --allow-multiattach 1
[..]
| id | 14ce03c0-e2fa-4ab3-8702-cddeb654ef73 |
[..]
$ cinder create 1
[..]
| id | e6bc2ea0-1b37-4442-a00e-886ebcc700ca |
[..]
$ nova boot --flavor 1 --image cirros-0.4.0-x86_64-disk --nic net-id=ac3eacd9-9d97-4b0f-9a6c-31575247d6fb test-1
$ nova boot --flavor 1 --image cirros-0.4.0-x86_64-disk --nic net-id=ac3eacd9-9d97-4b0f-9a6c-31575247d6fb test-2
$ nova volume-attach test-1 e6bc2ea0-1b37-4442-a00e-886ebcc700ca
$ nova volume-attach test-1 14ce03c0-e2fa-4ab3-8702-cddeb654ef73
$ nova volume-attach test-2 14ce03c0-e2fa-4ab3-8702-cddeb654ef73

$ sudo targetcli ls
o- / ................................................................ [...]
[..]
o- iscsi .................................................... [Targets: 2]
| o- iqn.2010-10.org.openstack:volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73 ... [TPGs: 1]
| | o- tpg1 ....................................... [no-gen-acls, auth per-acl]
| |   o- acls ............................................... [ACLs: 1]
| |   | o- iqn.1994-05.com.redhat:381c8a2dcf5f ... [1-way auth, Mapped LUNs: 1]
| |   |   o- mapped_lun0 ... [lun0 block/iqn.2010-10.org.openstack:volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73 (rw)]
| |   o- luns ............................................... [LUNs: 1]
| |   | o- lun0 [block/iqn.2010-10.org.openstack:volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73 (/dev/stack-volumes-lvmdriver-1/volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73) (default_tg_pt_gp)]
| |   o- portals ......................................... [Portals: 1]
| |     o- 192.168.122.199:3260 ................................. [OK]
[..]

After detaching the volume from test-2, the initiator ACL is removed (ACLs: 1 -> 0) while the target and LUN remain, since test-1 is still attached:

$ nova volume-detach test-2 14ce03c0-e2fa-4ab3-8702-cddeb654ef73

$ sudo targetcli ls
o- / ................................................................ [...]
[..]
o- iscsi .................................................... [Targets: 2]
| o- iqn.2010-10.org.openstack:volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73 ... [TPGs: 1]
| | o- tpg1 ....................................... [no-gen-acls, auth per-acl]
| |   o- acls ............................................... [ACLs: 0]
| |   o- luns ............................................... [LUNs: 1]
| |   | o- lun0 [block/iqn.2010-10.org.openstack:volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73 (/dev/stack-volumes-lvmdriver-1/volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73) (default_tg_pt_gp)]
| |   o- portals ......................................... [Portals: 1]
| |     o- 192.168.122.199:3260 ................................. [OK]
[..]
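The listings above show the initiator ACL count dropping from 1 to 0 after the first detach. A minimal sketch of a helper for pulling that count out of `targetcli ls` output could look like this; the function name and parsing approach are illustrative assumptions, not part of Cinder or targetcli:

```shell
#!/usr/bin/env bash
# Hypothetical helper: extract the first "ACLs: N" count from targetcli
# output piped on stdin, e.g. `sudo targetcli ls | acl_count`.
acl_count() {
  grep -o 'ACLs: [0-9]*' | head -n1 | awk '{print $2}'
}
```

Piping the listing for a single target through this lets a regression script assert that detaching the last holder of an ACL actually drops the count to 0.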
$ nova volume-detach test-1 14ce03c0-e2fa-4ab3-8702-cddeb654ef73

$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 14ce03c0-e2fa-4ab3-8702-cddeb654ef73 | in-use | -    | 1    | lvmdriver-1 | false    | e4a94972-f2b9-4edd-b1bd-f955120b285e |
| e6bc2ea0-1b37-4442-a00e-886ebcc700ca | in-use | -    | 1    | lvmdriver-1 | false    | e4a94972-f2b9-4edd-b1bd-f955120b285e |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+

The multiattach volume 14ce03c0 still shows as in-use after its last attachment was detached.

---

Set the Depends On flag: failing to attach LVM volumes to an instance.

---

13 -p 2019-06-20.1 includes a pre-fixed-in of openstack-cinder-12.0.6-3.el7ost.noarch. Waiting for the "latest" deployment to complete, in order to check whether the fixed-in version landed.
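The bug hinges on the volume's Cinder status after the final detach, and those transitions are asynchronous, so a scripted check has to poll rather than read the status once. This is a sketch under assumptions: the helper name and one-second interval are invented, and the caller supplies the status-producing command (e.g. `cinder show <id>` piped through awk):

```shell
#!/usr/bin/env bash
# Hypothetical polling helper: run a status-producing command up to
# $tries times, one second apart, until it prints the wanted status.
wait_for_status() {
  local want=$1 tries=$2; shift 2
  local status=""
  for ((i = 0; i < tries; i++)); do
    status=$("$@")                 # caller supplies the status command
    [ "$status" = "$want" ] && return 0
    sleep 1
  done
  echo "timed out; last status: ${status}" >&2
  return 1
}
```

Example use while verifying: `wait_for_status available 30 sh -c "cinder show $VOL | awk '/ status /{print \$4}'"` succeeds only if the volume leaves "detaching" within the window, which is exactly what this bug broke.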
Verified on: openstack-cinder-12.0.7-2.el7ost.noarch

Create an LVM-backed multi-attach volume type:

(overcloud) [stack@undercloud-0 ~]$ cinder type-create lvm-ma
+--------------------------------------+--------+-------------+-----------+
| ID                                   | Name   | Description | Is_Public |
+--------------------------------------+--------+-------------+-----------+
| b41a5dc4-09e0-4b04-8741-338e0778454f | lvm-ma | -           | True      |
+--------------------------------------+--------+-------------+-----------+

(overcloud) [stack@undercloud-0 ~]$ cinder type-key lvm-ma set multiattach="<is> True"
(overcloud) [stack@undercloud-0 ~]$ cinder type-key lvm-ma set volume_backend_name=tripleo_iscsi
(overcloud) [stack@undercloud-0 ~]$ cinder extra-specs-list
+--------------------------------------+--------+----------------------------------------------------------------------+
| ID                                   | Name   | extra_specs                                                          |
+--------------------------------------+--------+----------------------------------------------------------------------+
| b41a5dc4-09e0-4b04-8741-338e0778454f | lvm-ma | {'volume_backend_name': 'tripleo_iscsi', 'multiattach': '<is> True'} |
+--------------------------------------+--------+----------------------------------------------------------------------+

Create a multi-attach volume:

(overcloud) [stack@undercloud-0 ~]$ cinder create 2 --volume-type lvm-ma --name lvm-ma-vol
+--------------------------------+---------------------------------------+
| Property                       | Value                                 |
+--------------------------------+---------------------------------------+
| attachments                    | []                                    |
| availability_zone              | nova                                  |
| bootable                       | false                                 |
| consistencygroup_id            | None                                  |
| created_at                     | 2019-06-24T12:07:54.000000            |
| description                    | None                                  |
| encrypted                      | False                                 |
| id                             | ee31add8-fca4-464d-b660-32e7c58410fc  |
| metadata                       | {}                                    |
| migration_status               | None                                  |
| multiattach                    | True                                  |
| name                           | lvm-ma-vol                            |
| os-vol-host-attr:host          | hostgroup@tripleo_iscsi#tripleo_iscsi |
| os-vol-mig-status-attr:migstat | None                                  |
| os-vol-mig-status-attr:name_id | None                                  |
| os-vol-tenant-attr:tenant_id   | 9a6b53a2a6834d2daa58810b25819610      |
| replication_status             | None                                  |
| size                           | 2                                     |
| snapshot_id                    | None                                  |
| source_volid                   | None                                  |
| status                         | creating                              |
| updated_at                     | 2019-06-24T12:07:54.000000            |
| user_id                        | ef188d6713974ec79d848afb0f33adb0      |
| volume_type                    | lvm-ma                                |
+--------------------------------+---------------------------------------+

Boot three instances; two of them will land on the same compute node.

(overcloud) [stack@undercloud-0 ~]$ nova show vm1
| OS-DCF:diskConfig           | MANUAL                |
| OS-EXT-AZ:availability_zone | nova                  |
| OS-EXT-SRV-ATTR:host        | compute-1.localdomain |
| OS-EXT-SRV-ATTR:hostname    | vm1                   |
| OS-EXT-STS:vm_state         | active                |

(overcloud) [stack@undercloud-0 ~]$ nova show vm2
| OS-DCF:diskConfig           | MANUAL                |
| OS-EXT-AZ:availability_zone | nova                  |
| OS-EXT-SRV-ATTR:host        | compute-1.localdomain |
| OS-EXT-SRV-ATTR:hostname    | vm2                   |
| OS-EXT-STS:vm_state         | active                |

(overcloud) [stack@undercloud-0 ~]$ nova show vm3
| OS-DCF:diskConfig           | MANUAL                |
| OS-EXT-AZ:availability_zone | nova                  |
| OS-EXT-SRV-ATTR:host        | compute-1.localdomain |
| OS-EXT-SRV-ATTR:hostname    | vm3                   |
| OS-EXT-STS:vm_state         | active                |

Odd: I have two computes, yet all three instances landed on the same compute-1. I'll check this later; it may be a resource issue.
Anyway, they are all on the same compute (needed for this verification anyway), so let's attach the multi-attach volume to all three. First attempt: attach it to two VMs, each with a different mount point.

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm1 ee31add8-fca4-464d-b660-32e7c58410fc auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | ee31add8-fca4-464d-b660-32e7c58410fc |
| serverId | 86801778-5256-4171-94f0-e7e7af6aa92c |
| volumeId | ee31add8-fca4-464d-b660-32e7c58410fc |
+----------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm2 ee31add8-fca4-464d-b660-32e7c58410fc /dev/vdc
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | ee31add8-fca4-464d-b660-32e7c58410fc |
| serverId | 0e394775-3cb5-40e8-b95a-9839bc63ad21 |
| volumeId | ee31add8-fca4-464d-b660-32e7c58410fc |
+----------+--------------------------------------+

Notice that my request for /dev/vdc was ignored (probably a CirrOS limitation): both instances got the volume attached at the same mount point, vdb.
Here is cinder showing both vm1 and vm2 attached to the same volume:

(overcloud) [stack@undercloud-0 ~]$ cinder list
| ID                                   | Status | Name       | Size | Volume Type | Bootable | Attached to                                                               |
| ee31add8-fca4-464d-b660-32e7c58410fc | in-use | lvm-ma-vol | 2    | lvm-ma      | false    | 86801778-5256-4171-94f0-e7e7af6aa92c,0e394775-3cb5-40e8-b95a-9839bc63ad21 |

Detach the volume from the first instance:

(overcloud) [stack@undercloud-0 ~]$ nova volume-detach 86801778-5256-4171-94f0-e7e7af6aa92c ee31add8-fca4-464d-b660-32e7c58410fc

It detached fine:

(overcloud) [stack@undercloud-0 ~]$ cinder list
| ee31add8-fca4-464d-b660-32e7c58410fc | in-use | lvm-ma-vol | 2 | lvm-ma | false | 0e394775-3cb5-40e8-b95a-9839bc63ad21 |

Detach the second VM from the volume:

(overcloud) [stack@undercloud-0 ~]$ nova volume-detach 0e394775-3cb5-40e8-b95a-9839bc63ad21 ee31add8-fca4-464d-b660-32e7c58410fc
(overcloud) [stack@undercloud-0 ~]$ cinder list
| ee31add8-fca4-464d-b660-32e7c58410fc | detaching | lvm-ma-vol | 2 | lvm-ma | false | 0e394775-3cb5-40e8-b95a-9839bc63ad21 |

I was briefly worried about the "detaching" status, but after waiting a few more seconds the volume is no longer attached to any instance:

(overcloud) [stack@undercloud-0 ~]$ cinder list
| ee31add8-fca4-464d-b660-32e7c58410fc | available | lvm-ma-vol | 2 | lvm-ma | false | |

Great. Now let's test three instances and make sure each has a unique mount point.

vm1:
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm1 ee31add8-fca4-464d-b660-32e7c58410fc auto
| device   | /dev/vdb                             |
| id       | ee31add8-fca4-464d-b660-32e7c58410fc |
| serverId | 86801778-5256-4171-94f0-e7e7af6aa92c |
| volumeId | ee31add8-fca4-464d-b660-32e7c58410fc |

Create three non-multi-attach volumes, just so the mount points won't be the same.
(overcloud) [stack@undercloud-0 ~]$ cinder list
| ID                                   | Status    | Name       | Size | Volume Type | Bootable | Attached to                          |
| 161b7f4c-2499-4308-82d5-e307343316e0 | available | lvm-vol3   | 1    | -           | false    |                                      |
| 27f6a68a-9ebc-43f9-bf13-3ac74d1eb620 | available | lvm-vol1   | 1    | -           | false    |                                      |
| c04a404b-33fd-4c02-8a09-1c54a029e66e | available | lvm-vol2   | 1    | -           | false    |                                      |
| ee31add8-fca4-464d-b660-32e7c58410fc | in-use    | lvm-ma-vol | 2    | lvm-ma      | false    | 86801778-5256-4171-94f0-e7e7af6aa92c |

Attach one of these new volumes to vm2:

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm2 27f6a68a-9ebc-43f9-bf13-3ac74d1eb620 auto
| device   | /dev/vdb                             |
| id       | 27f6a68a-9ebc-43f9-bf13-3ac74d1eb620 |
| serverId | 0e394775-3cb5-40e8-b95a-9839bc63ad21 |
| volumeId | 27f6a68a-9ebc-43f9-bf13-3ac74d1eb620 |

Now attach the multi-attach volume to vm2; it should get mounted at vdc:

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm2 ee31add8-fca4-464d-b660-32e7c58410fc auto
| device   | /dev/vdc                             |
| id       | ee31add8-fca4-464d-b660-32e7c58410fc |
| serverId | 0e394775-3cb5-40e8-b95a-9839bc63ad21 |
| volumeId | ee31add8-fca4-464d-b660-32e7c58410fc |

Now attach the remaining non-multi-attach volumes to vm3, then attach the multi-attach volume to vm3:

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm3 c04a404b-33fd-4c02-8a09-1c54a029e66e auto
| device   | /dev/vdb                             |
| id       | c04a404b-33fd-4c02-8a09-1c54a029e66e |
| serverId | 876c24d3-21e4-4ea3-8c9e-4626fceb1287 |
| volumeId | c04a404b-33fd-4c02-8a09-1c54a029e66e |

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm3 161b7f4c-2499-4308-82d5-e307343316e0 auto
| device   | /dev/vdc                             |
| id       | 161b7f4c-2499-4308-82d5-e307343316e0 |
| serverId | 876c24d3-21e4-4ea3-8c9e-4626fceb1287 |
| volumeId | 161b7f4c-2499-4308-82d5-e307343316e0 |

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm3 ee31add8-fca4-464d-b660-32e7c58410fc auto
| device   | /dev/vdd                             |
| id       | ee31add8-fca4-464d-b660-32e7c58410fc |
| serverId | 876c24d3-21e4-4ea3-8c9e-4626fceb1287 |
| volumeId | ee31add8-fca4-464d-b660-32e7c58410fc |

Let's review the cinder attachments:

(overcloud) [stack@undercloud-0 ~]$ cinder list
| ID                                   | Status | Name       | Size | Volume Type | Bootable | Attached to |
| 161b7f4c-2499-4308-82d5-e307343316e0 | in-use | lvm-vol3   | 1    | -           | false    | 876c24d3-21e4-4ea3-8c9e-4626fceb1287 |
| 27f6a68a-9ebc-43f9-bf13-3ac74d1eb620 | in-use | lvm-vol1   | 1    | -           | false    | 0e394775-3cb5-40e8-b95a-9839bc63ad21 |
| c04a404b-33fd-4c02-8a09-1c54a029e66e | in-use | lvm-vol2   | 1    | -           | false    | 876c24d3-21e4-4ea3-8c9e-4626fceb1287 |
| ee31add8-fca4-464d-b660-32e7c58410fc | in-use | lvm-ma-vol | 2    | lvm-ma      | false    | 86801778-5256-4171-94f0-e7e7af6aa92c,876c24d3-21e4-4ea3-8c9e-4626fceb1287,0e394775-3cb5-40e8-b95a-9839bc63ad21 |

Looks good: the multi-attach volume is attached to three VMs, each at a different mount point.

Detach vm2 from the MA volume:

(overcloud) [stack@undercloud-0 ~]$ nova volume-detach vm2 ee31add8-fca4-464d-b660-32e7c58410fc

The volume detached from vm2 and remains attached to vm1/vm3:

| ee31add8-fca4-464d-b660-32e7c58410fc | in-use | lvm-ma-vol | 2 | lvm-ma | false | 86801778-5256-4171-94f0-e7e7af6aa92c,876c24d3-21e4-4ea3-8c9e-4626fceb1287 |

Detach the volume from vm1:

(overcloud) [stack@undercloud-0 ~]$ nova volume-detach vm1 ee31add8-fca4-464d-b660-32e7c58410fc

| ee31add8-fca4-464d-b660-32e7c58410fc | in-use | lvm-ma-vol | 2 | lvm-ma | false | 876c24d3-21e4-4ea3-8c9e-4626fceb1287 |

Now detach vm3 from the MA volume:

(overcloud) [stack@undercloud-0 ~]$ nova volume-detach vm3 ee31add8-fca4-464d-b660-32e7c58410fc

| ee31add8-fca4-464d-b660-32e7c58410fc | available | lvm-ma-vol | 2 | lvm-ma | false | |

Looking good: we detached a multi-attach volume from three instances over two cycles. In the first cycle the same mount point was used for each attachment; in the second cycle each VM had a unique mount point. All detaches succeeded in both cycles.

Since the problem described in this bug report should be resolved by a recent advisory, it has been closed with a resolution of ERRATA.
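The "Attached to" column in the listings above is a comma-separated list of server UUIDs, so counting attachments when scripting this verification reduces to splitting the eighth pipe-delimited field of a `cinder list` data row. A small sketch, where the helper name and the field position (derived from the table layout shown) are assumptions:

```shell
#!/usr/bin/env bash
# Hypothetical helper: count server IDs in the "Attached to" column
# (8th pipe-delimited field) of a `cinder list` data row on stdin.
attachment_count() {
  awk -F'|' '{
    f = $8
    gsub(/[[:space:]]/, "", f)        # strip cell padding
    if (f == "") print 0
    else print split(f, ids, ",")     # split() returns the element count
  }'
}
```

For example, grepping the multi-attach volume's row out of `cinder list` and piping it through this helper would print 3 right before the detach cycle and 0 once the volume is back to available.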
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1732