Bug 1701172 - Detaching second instance from a multiattached LVM volume leaves volume in detaching state
Summary: Detaching second instance from a multiattached LVM volume leaves volume in detaching state
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z7
Target Release: 13.0 (Queens)
Assignee: Eric Harney
QA Contact: Tzach Shefi
Docs Contact: Tana
URL:
Whiteboard:
Depends On: 1721361
Blocks: 1624971 1692542
 
Reported: 2019-04-18 09:51 UTC by Tzach Shefi
Modified: 2019-09-09 14:48 UTC
CC List: 13 users

Fixed In Version: openstack-cinder-12.0.7-2.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-07-10 13:00:41 UTC
Target Upstream Version:
Embargoed:


Attachments
Controller and compute Cinder and Nova logs (1.18 MB, application/gzip)
2019-04-18 09:51 UTC, Tzach Shefi


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1825957 0 None None None 2019-04-24 13:59:45 UTC
OpenStack gerrit 653837 0 'None' MERGED lvm: Only use initiators when comparing connector dicts 2020-11-26 23:49:21 UTC
Red Hat Product Errata RHBA-2019:1732 0 None None None 2019-07-10 13:01:06 UTC

Description Tzach Shefi 2019-04-18 09:51:28 UTC
Created attachment 1556119 [details]
Controller and compute Cinder and Nova logs

Description of problem: While verifying the backport of Cinder multi-attach support to OSP13 (bz1692542), detaching inst1 from a multiattach volume succeeded; however, detaching the second instance (inst2) from the same volume fails and the volume remains in "detaching" status.

| 82abc6b9-6fbc-484b-9859-46bc52c019d7 | detaching | lvmMultiAttach | 1    | lvm         | false    | f7a6ae96-dfd7-4051-9acd-ff85922530df |


Version-Release number of selected component (if applicable):
python-cinder-12.0.6-2.el7ost.noarch
openstack-cinder-12.0.6-2.el7ost.noarch
puppet-cinder-12.4.1-4.el7ost.noarch
python2-cinderclient-3.5.0-1.el7ost.noarch

python2-os-brick-2.3.5-1.el7ost.noarch

openstack-nova-console-17.0.9-9.el7ost.noarch
openstack-nova-compute-17.0.9-9.el7ost.noarch
python2-novaclient-10.1.0-1.el7ost.noarch
puppet-nova-12.4.0-17.el7ost.noarch
python-nova-17.0.9-9.el7ost.noarch
openstack-nova-scheduler-17.0.9-9.el7ost.noarch
openstack-nova-conductor-17.0.9-9.el7ost.noarch
openstack-nova-api-17.0.9-9.el7ost.noarch
openstack-nova-common-17.0.9-9.el7ost.noarch
openstack-nova-placement-api-17.0.9-9.el7ost.noarch


How reproducible:
Unsure 

Steps to Reproduce:
1. Create a multiattach volume type and volume

cinder type-create lvm
openstack volume type set lvm --property volume_backend_name=tripleo_iscsi
cinder type-key lvm set multiattach="<is> True"
cinder create 1 --volume-type lvm --name lvmMultiAttach
Volume ID: 82abc6b9-6fbc-484b-9859-46bc52c019d7

2. Boot two instances
(overcloud) [stack@undercloud-0 ~]$ nova list

| 177f0636-331d-4559-a0d8-ddea2852ff83 | inst1 | ACTIVE | -          | Running     | internal=192.168.0.18, 10.0.0.221 |

| f7a6ae96-dfd7-4051-9acd-ff85922530df | inst2 | ACTIVE | -          | Running     | internal=192.168.0.16             |


3. Attach volume to both instances




| 82abc6b9-6fbc-484b-9859-46bc52c019d7 | in-use | lvmMultiAttach | 1    | lvm         | false    | f7a6ae96-dfd7-4051-9acd-ff85922530df,177f0636-331d-4559-a0d8-ddea2852ff83 |




4. Detach the first instance (unclear whether the detach order matters)

(overcloud) [stack@undercloud-0 ~]$ nova volume-detach 177f0636-331d-4559-a0d8-ddea2852ff83  82abc6b9-6fbc-484b-9859-46bc52c019d7
(overcloud) [stack@undercloud-0 ~]$ cinder list

| 82abc6b9-6fbc-484b-9859-46bc52c019d7 | in-use | lvmMultiAttach | 1    | lvm         | false    | f7a6ae96-dfd7-4051-9acd-ff85922530df |

The volume detached successfully from inst1; as shown above, it now remains attached only to inst2 (status in-use).


5. Detach second instance 
(overcloud) [stack@undercloud-0 ~]$ nova volume-detach f7a6ae96-dfd7-4051-9acd-ff85922530df  82abc6b9-6fbc-484b-9859-46bc52c019d7

| 82abc6b9-6fbc-484b-9859-46bc52c019d7 | detaching | lvmMultiAttach | 1    | lvm         | false    | f7a6ae96-dfd7-4051-9acd-ff85922530df |

Even after a few minutes the volume status remains in the detaching state.
This could be a Cinder issue, but I suspect Nova first, as it handles attaching/detaching of volumes.

Some time later (after lunch) I noticed the volume had returned to in-use status; I figured I'd try detaching again, and it reached "detaching" status once more.

I had been watching compute.log when I noticed the following:

2019-04-18 09:35:10.932 1 DEBUG oslo_concurrency.lockutils [req-518f12cb-8f56-426f-b5bf-a8ce3328f9e8 e71c92c3bd2d4fc5b90d90d346db0b75 b08bd7143c3847bf9a741c3c18a55c87 - default default] Lock "f7a6ae96-dfd7-4051-9acd-ff85922530df" released by "nova.compute.manager.do_detach_volume" :: held 61.096s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server [req-518f12cb-8f56-426f-b5bf-a8ce3328f9e8 e71c92c3bd2d4fc5b90d90d346db0b75 b08bd7143c3847bf9a741c3c18a55c87 - default default] Exception during message handling: ProcessExecutionError: Unexpected error while running command.
Command: blockdev --flushbufs /dev/sdb
Exit code: 1
Stdout: u''
Stderr: u'blockdev: cannot open /dev/sdb: No such device or address\n'
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 76, in wrapped
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     function_name, call_dict, binary)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/exception_wrapper.py", line 67, in wrapped
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/utils.py", line 977, in decorated_function
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 214, in decorated_function
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     kwargs['instance'], e, sys.exc_info())
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 202, in decorated_function
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5476, in detach_volume
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     do_detach_volume(context, volume_id, instance, attachment_id)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5474, in do_detach_volume
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     attachment_id=attachment_id)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5425, in _detach_volume
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     attachment_id=attachment_id, destroy_bdm=destroy_bdm)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 420, in detach
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     attachment_id, destroy_bdm)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 342, in _do_detach
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     self.driver_detach(context, instance, volume_api, virt_driver)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 318, in driver_detach
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     volume_api.roll_detaching(context, volume_id)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     self.force_reraise()
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 300, in driver_detach
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     encryption=encryption)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1654, in detach_volume
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     encryption=encryption)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 1297, in _disconnect_volume
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     vol_driver.disconnect_volume(connection_info, instance)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/iscsi.py", line 74, in disconnect_volume
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     self.connector.disconnect_volume(connection_info['data'], None)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/os_brick/utils.py", line 150, in trace_logging_wrapper
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     result = f(*args, **kwargs)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 274, in inner
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py", line 859, in disconnect_volume
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     device_info=device_info)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py", line 903, in _cleanup_connection
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     path_used, was_multipath)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/os_brick/initiator/linuxscsi.py", line 277, in remove_connection
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     self.remove_scsi_device(dev_path, force, exc, flush)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/os_brick/initiator/linuxscsi.py", line 73, in remove_scsi_device
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     self.flush_device_io(device)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/os_brick/initiator/linuxscsi.py", line 316, in flush_device_io
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     interval=10, root_helper=self._root_helper)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/os_brick/executor.py", line 52, in _execute
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     result = self.__execute(*args, **kwargs)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line 169, in execute
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     return execute_root(*cmd, **kwargs)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 207, in _wrap
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     return self.channel.remote_call(name, args, kwargs)
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server   File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 202, in remote_call
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server     raise exc_type(*result[2])
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server ProcessExecutionError: Unexpected error while running command.
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server Command: blockdev --flushbufs /dev/sdb
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server Exit code: 1
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server Stdout: u''
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server Stderr: u'blockdev: cannot open /dev/sdb: No such device or address\n'
2019-04-18 09:35:10.970 1 ERROR oslo_messaging.rpc.server 
2019-04-18 09:35:12.371 1 DEBUG oslo_service.periodic_task [req-05c9f467-9bc0-4955-999d-168a375ce54e - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:215



Actual results:

Nova volume-detach fails to complete (or to update Cinder).
The volume remains in the detaching state.

Expected results:
The volume should detach successfully from the second instance, as it did from the first.



Additional info:
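
The failing step in the traceback is os-brick flushing the SCSI device before removing it. Below is a minimal illustration of that call (a hedged sketch with a hypothetical helper name, not the actual os-brick code); it fails exactly as above once the shared iSCSI target has already been torn down by the earlier detach and /dev/sdb no longer exists on the compute host:

# Hedged illustration: roughly the command that LinuxSCSI.flush_device_io()
# runs via rootwrap/privsep (see the traceback above). flush_device() is a
# hypothetical helper used here only for illustration.
import subprocess

def flush_device(dev='/dev/sdb'):
    # Raises CalledProcessError with "No such device or address" when the
    # block device node has already been removed from the compute host.
    return subprocess.run(['blockdev', '--flushbufs', dev],
                          capture_output=True, text=True, check=True)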

Comment 3 Tzach Shefi 2019-04-21 06:47:42 UTC
FYI, 
Limited to OSP13 only, 
Retested on OSP14/15 LVM, detaching worked flawlessly.

Comment 4 Lee Yarwood 2019-04-23 11:30:17 UTC
(In reply to Tzach Shefi from comment #3)
> FYI, 
> Limited to OSP13 only, 
> Retested on OSP14/15 LVM, detaching worked flawlessly.

I can actually reproduce this upstream against master. Did you ensure that the mountpoint was different for each instance?

I've been attaching another volume to one of the instances to ensure this happens:

$ cinder create --allow-multiattach 1
[..]
| id                             | 14ce03c0-e2fa-4ab3-8702-cddeb654ef73          |
[..]
$ cinder create 1
[..]
| id                             | e6bc2ea0-1b37-4442-a00e-886ebcc700ca |
[..]
$ nova boot --flavor 1 --image cirros-0.4.0-x86_64-disk --nic net-id=ac3eacd9-9d97-4b0f-9a6c-31575247d6fb test-1
$ nova boot --flavor 1 --image cirros-0.4.0-x86_64-disk --nic net-id=ac3eacd9-9d97-4b0f-9a6c-31575247d6fb test-2

$ nova volume-attach test-1 e6bc2ea0-1b37-4442-a00e-886ebcc700ca
$ nova volume-attach test-1 14ce03c0-e2fa-4ab3-8702-cddeb654ef73
$ nova volume-attach test-2 14ce03c0-e2fa-4ab3-8702-cddeb654ef73

$ sudo targetcli ls
o- / ......................................................................................................................... [...]
[..]
  o- iscsi ............................................................................................................ [Targets: 2]
  | o- iqn.2010-10.org.openstack:volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73 ............................................. [TPGs: 1]
  | | o- tpg1 .......................................................................................... [no-gen-acls, auth per-acl]
  | |   o- acls .......................................................................................................... [ACLs: 1]
  | |   | o- iqn.1994-05.com.redhat:381c8a2dcf5f ...................................................... [1-way auth, Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ................. [lun0 block/iqn.2010-10.org.openstack:volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73 (rw)]
  | |   o- luns .......................................................................................................... [LUNs: 1]
  | |   | o- lun0  [block/iqn.2010-10.org.openstack:volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73 (/dev/stack-volumes-lvmdriver-1/volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73) (default_tg_pt_gp)]
  | |   o- portals .................................................................................................... [Portals: 1]
  | |     o- 192.168.122.199:3260 ............................................................................................. [OK]
[..]

$ nova volume-detach test-2 14ce03c0-e2fa-4ab3-8702-cddeb654ef73

$ sudo targetcli ls
o- / ......................................................................................................................... [...]
[..]
  o- iscsi ............................................................................................................ [Targets: 2]
  | o- iqn.2010-10.org.openstack:volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73 ............................................. [TPGs: 1]
  | | o- tpg1 .......................................................................................... [no-gen-acls, auth per-acl]
  | |   o- acls .......................................................................................................... [ACLs: 0]
  | |   o- luns .......................................................................................................... [LUNs: 1]
  | |   | o- lun0  [block/iqn.2010-10.org.openstack:volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73 (/dev/stack-volumes-lvmdriver-1/volume-14ce03c0-e2fa-4ab3-8702-cddeb654ef73) (default_tg_pt_gp)]
  | |   o- portals .................................................................................................... [Portals: 1]
  | |     o- 192.168.122.199:3260 ............................................................................................. [OK]
[..]

$ nova volume-detach test-1 14ce03c0-e2fa-4ab3-8702-cddeb654ef73
$ cinder list
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
| 14ce03c0-e2fa-4ab3-8702-cddeb654ef73 | in-use | -    | 1    | lvmdriver-1 | false    | e4a94972-f2b9-4edd-b1bd-f955120b285e |
| e6bc2ea0-1b37-4442-a00e-886ebcc700ca | in-use | -    | 1    | lvmdriver-1 | false    | e4a94972-f2b9-4edd-b1bd-f955120b285e |
+--------------------------------------+--------+------+------+-------------+----------+--------------------------------------+
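
The targetcli output above shows the ACL for the shared target dropping from 1 to 0 on the first detach, even though test-1 still uses the volume. The linked gerrit change ("lvm: Only use initiators when comparing connector dicts") addresses this on the Cinder side; below is a hedged sketch of the idea, with illustrative names rather than the actual LVM driver code:

# Hedged sketch of the comparison change described in the linked gerrit
# review. Function and variable names are illustrative only.
def shares_initiator(attachment_connectors, connector):
    """Return True if another attachment's connector has the same initiator.

    Comparing full connector dicts can fail for multiattach volumes because
    per-attachment fields differ even when the host/initiator is the same,
    so the driver may wrongly conclude that no other attachment needs the
    export and remove the target/ACL too early.
    """
    initiator = connector.get('initiator')
    return any(c.get('initiator') == initiator for c in attachment_connectors)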

Comment 17 Tzach Shefi 2019-06-21 08:43:12 UTC
Set the Depends On flag (bug 1721361): failing to attach LVM volumes to an instance.

Comment 20 Tzach Shefi 2019-06-23 06:30:20 UTC
13 -p 2019-06-20.1 includes a pre-fixed-in build of:
openstack-cinder-12.0.6-3.el7ost.noarch

Waiting for the "latest" deployment to complete, in order to check whether the fixed-in build landed.

Comment 21 Tzach Shefi 2019-06-24 12:46:41 UTC
Verified on:
openstack-cinder-12.0.7-2.el7ost.noarch


Create an LVM-backed multi-attach type:
(overcloud) [stack@undercloud-0 ~]$ cinder type-create lvm-ma
+--------------------------------------+--------+-------------+-----------+
| ID                                   | Name   | Description | Is_Public |
+--------------------------------------+--------+-------------+-----------+
| b41a5dc4-09e0-4b04-8741-338e0778454f | lvm-ma | -           | True      |
+--------------------------------------+--------+-------------+-----------+

(overcloud) [stack@undercloud-0 ~]$ cinder type-key lvm-ma set multiattach="<is> True"
(overcloud) [stack@undercloud-0 ~]$ cinder type-key lvm-ma set volume_backend_name=tripleo_iscsi
(overcloud) [stack@undercloud-0 ~]$ cinder extra-specs-list
+--------------------------------------+--------+----------------------------------------------------------------------+
| ID                                   | Name   | extra_specs                                                          |
+--------------------------------------+--------+----------------------------------------------------------------------+
| b41a5dc4-09e0-4b04-8741-338e0778454f | lvm-ma | {'volume_backend_name': 'tripleo_iscsi', 'multiattach': '<is> True'} |
+--------------------------------------+--------+----------------------------------------------------------------------+
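
For reference, the same type and volume setup can be scripted with python-cinderclient; this is a hedged sketch assuming an authenticated v3 client named cinder (client construction and credentials are not shown in this report):

# Hedged sketch: create the multiattach volume type and a volume with
# python-cinderclient, mirroring the CLI commands above.
vtype = cinder.volume_types.create('lvm-ma')
vtype.set_keys({'multiattach': '<is> True',
                'volume_backend_name': 'tripleo_iscsi'})
vol = cinder.volumes.create(2, name='lvm-ma-vol', volume_type='lvm-ma')
print(vol.id, vol.multiattach)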


Create a multi-attach volume:
(overcloud) [stack@undercloud-0 ~]$ cinder create 2 --volume-type lvm-ma --name lvm-ma-vol
+--------------------------------+---------------------------------------+
| Property                       | Value                                 |
+--------------------------------+---------------------------------------+
| attachments                    | []                                    |
| availability_zone              | nova                                  |
| bootable                       | false                                 |
| consistencygroup_id            | None                                  |
| created_at                     | 2019-06-24T12:07:54.000000            |
| description                    | None                                  |
| encrypted                      | False                                 |
| id                             | ee31add8-fca4-464d-b660-32e7c58410fc  |
| metadata                       | {}                                    |
| migration_status               | None                                  |
| multiattach                    | True                                  |
| name                           | lvm-ma-vol                            |
| os-vol-host-attr:host          | hostgroup@tripleo_iscsi#tripleo_iscsi |
| os-vol-mig-status-attr:migstat | None                                  |
| os-vol-mig-status-attr:name_id | None                                  |
| os-vol-tenant-attr:tenant_id   | 9a6b53a2a6834d2daa58810b25819610      |
| replication_status             | None                                  |
| size                           | 2                                     |
| snapshot_id                    | None                                  |
| source_volid                   | None                                  |
| status                         | creating                              |
| updated_at                     | 2019-06-24T12:07:54.000000            |
| user_id                        | ef188d6713974ec79d848afb0f33adb0      |
| volume_type                    | lvm-ma                                |
+--------------------------------+---------------------------------------+


Boot three instances; at least two of them will land on the same compute node.

(overcloud) [stack@undercloud-0 ~]$ nova show vm1
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | compute-1.localdomain                                    |
| OS-EXT-SRV-ATTR:hostname             | vm1                                                      |
| OS-EXT-STS:vm_state                  | active     

(overcloud) [stack@undercloud-0 ~]$ nova show vm2
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | compute-1.localdomain                                    |
| OS-EXT-SRV-ATTR:hostname             | vm2                                                      |
| OS-EXT-STS:vm_state                  | active 


(overcloud) [stack@undercloud-0 ~]$ nova show vm3
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | compute-1.localdomain                                    |
| OS-EXT-SRV-ATTR:hostname             | vm3                                                      |
| OS-EXT-STS:vm_state                  | active                                                   |


Oddly, I have two computes yet all three instances landed on compute-1; I'll check this later, possibly a resource issue.

Anyway, they are all on the same compute (which this verification needs anyway), so let's attach the multi-attach volume to all three.

First attempt: attach to two VMs, each with a different mount point.

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm1 ee31add8-fca4-464d-b660-32e7c58410fc auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | ee31add8-fca4-464d-b660-32e7c58410fc |
| serverId | 86801778-5256-4171-94f0-e7e7af6aa92c |
| volumeId | ee31add8-fca4-464d-b660-32e7c58410fc |
+----------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm2 ee31add8-fca4-464d-b660-32e7c58410fc /dev/vdc
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | ee31add8-fca4-464d-b660-32e7c58410fc |
| serverId | 0e394775-3cb5-40e8-b95a-9839bc63ad21 |
| volumeId | ee31add8-fca4-464d-b660-32e7c58410fc |
+----------+--------------------------------------+


Okay, notice that my request for /dev/vdc was ignored (probably a CirrOS limitation); both instances got the volume attached at the same mount point, vdb.
Here is Cinder showing both vm1/vm2 attached to the same volume:
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+------------+------+-------------+----------+---------------------------------------------------------------------------+
| ID                                   | Status | Name       | Size | Volume Type | Bootable | Attached to                                                               |
+--------------------------------------+--------+------------+------+-------------+----------+---------------------------------------------------------------------------+
| ee31add8-fca4-464d-b660-32e7c58410fc | in-use | lvm-ma-vol | 2    | lvm-ma      | false    | 86801778-5256-4171-94f0-e7e7af6aa92c,0e394775-3cb5-40e8-b95a-9839bc63ad21 |
+--------------------------------------+--------+------------+------+-------------+----------+---------------------------------------------------------------------------+
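
The same attach/detach operations can also be driven from python-novaclient; a hedged sketch assuming an authenticated v2 client named nova (instance and volume IDs are taken from the listing above):

# Hedged sketch: attach the multiattach volume to vm1 and vm2, then detach,
# using python-novaclient. `nova` is assumed to be an authenticated client.
MA_VOL = 'ee31add8-fca4-464d-b660-32e7c58410fc'
VM1 = '86801778-5256-4171-94f0-e7e7af6aa92c'
VM2 = '0e394775-3cb5-40e8-b95a-9839bc63ad21'

for server in (VM1, VM2):
    nova.volumes.create_server_volume(server, MA_VOL)   # nova volume-attach

# Detach one instance at a time, as in the steps below:
# nova.volumes.delete_server_volume(VM1, MA_VOL)        # nova volume-detach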

Detach the volume from the first instance:
(overcloud) [stack@undercloud-0 ~]$ nova volume-detach 86801778-5256-4171-94f0-e7e7af6aa92c ee31add8-fca4-464d-b660-32e7c58410fc

The volume detached fine:
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status | Name       | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+--------+------------+------+-------------+----------+--------------------------------------+
| ee31add8-fca4-464d-b660-32e7c58410fc | in-use | lvm-ma-vol | 2    | lvm-ma      | false    | 0e394775-3cb5-40e8-b95a-9839bc63ad21 |
+--------------------------------------+--------+------------+------+-------------+----------+--------------------------------------+

Detach the second VM from the volume:
(overcloud) [stack@undercloud-0 ~]$ nova volume-detach 0e394775-3cb5-40e8-b95a-9839bc63ad21 ee31add8-fca4-464d-b660-32e7c58410fc

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name       | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+------------+------+-------------+----------+--------------------------------------+
| ee31add8-fca4-464d-b660-32e7c58410fc | detaching | lvm-ma-vol | 2    | lvm-ma      | false    | 0e394775-3cb5-40e8-b95a-9839bc63ad21 |
+--------------------------------------+-----------+------------+------+-------------+----------+--------------------------------------+

I was a bit worried about the detaching status, but after waiting a few more seconds we see the volume is no longer attached to any instance (a status-polling sketch follows the listing below):
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name       | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------------+------+-------------+----------+-------------+
| ee31add8-fca4-464d-b660-32e7c58410fc | available | lvm-ma-vol | 2    | lvm-ma      | false    |             |
+--------------------------------------+-----------+------------+------+-------------+----------+-------------+
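
When scripting this check, a simple status poll avoids re-running cinder list by hand while the volume passes through the transient detaching state; a hedged sketch, again assuming an authenticated python-cinderclient v3 client named cinder:

# Hedged sketch: wait for a volume to leave the transient 'detaching' state.
import time

def wait_for_status(cinder, volume_id, expected='available', timeout=120):
    deadline = time.time() + timeout
    while time.time() < deadline:
        vol = cinder.volumes.get(volume_id)
        if vol.status == expected:
            return vol
        if 'error' in vol.status:
            raise RuntimeError('volume %s entered status %s' % (volume_id, vol.status))
        time.sleep(5)
    raise RuntimeError('timed out waiting for %s to become %s' % (volume_id, expected))

# wait_for_status(cinder, 'ee31add8-fca4-464d-b660-32e7c58410fc')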

 
Great, now let's test three instances and make sure each has a unique mount point:

vm1
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm1 ee31add8-fca4-464d-b660-32e7c58410fc auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | ee31add8-fca4-464d-b660-32e7c58410fc |
| serverId | 86801778-5256-4171-94f0-e7e7af6aa92c |
| volumeId | ee31add8-fca4-464d-b660-32e7c58410fc |
+----------+--------------------------------------+


Create three non-multi-attach volumes, just so the mount points won't be the same.
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+------------+------+-------------+----------+--------------------------------------+
| ID                                   | Status    | Name       | Size | Volume Type | Bootable | Attached to                          |
+--------------------------------------+-----------+------------+------+-------------+----------+--------------------------------------+
| 161b7f4c-2499-4308-82d5-e307343316e0 | available | lvm-vol3   | 1    | -           | false    |                                      |
| 27f6a68a-9ebc-43f9-bf13-3ac74d1eb620 | available | lvm-vol1   | 1    | -           | false    |                                      |
| c04a404b-33fd-4c02-8a09-1c54a029e66e | available | lvm-vol2   | 1    | -           | false    |                                      |
| ee31add8-fca4-464d-b660-32e7c58410fc | in-use    | lvm-ma-vol | 2    | lvm-ma      | false    | 86801778-5256-4171-94f0-e7e7af6aa92c |
+--------------------------------------+-----------+------------+------+-------------+----------+--------------------------------------+

Attach one of these new volumes to vm2:
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm2 27f6a68a-9ebc-43f9-bf13-3ac74d1eb620 auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 27f6a68a-9ebc-43f9-bf13-3ac74d1eb620 |
| serverId | 0e394775-3cb5-40e8-b95a-9839bc63ad21 |
| volumeId | 27f6a68a-9ebc-43f9-bf13-3ac74d1eb620 |
+----------+--------------------------------------+

Now attach the multi-attach volume to vm2; it should get mounted under vdc:
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm2 ee31add8-fca4-464d-b660-32e7c58410fc auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | ee31add8-fca4-464d-b660-32e7c58410fc |
| serverId | 0e394775-3cb5-40e8-b95a-9839bc63ad21 |
| volumeId | ee31add8-fca4-464d-b660-32e7c58410fc |
+----------+--------------------------------------+

Now attach the remaining non-multi-attach volume to vm3, then attach the multi-attach volume to vm3:
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm3 c04a404b-33fd-4c02-8a09-1c54a029e66e auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | c04a404b-33fd-4c02-8a09-1c54a029e66e |
| serverId | 876c24d3-21e4-4ea3-8c9e-4626fceb1287 |
| volumeId | c04a404b-33fd-4c02-8a09-1c54a029e66e |
+----------+--------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm3 161b7f4c-2499-4308-82d5-e307343316e0 auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | 161b7f4c-2499-4308-82d5-e307343316e0 |
| serverId | 876c24d3-21e4-4ea3-8c9e-4626fceb1287 |
| volumeId | 161b7f4c-2499-4308-82d5-e307343316e0 |
+----------+--------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach vm3 ee31add8-fca4-464d-b660-32e7c58410fc auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdd                             |
| id       | ee31add8-fca4-464d-b660-32e7c58410fc |
| serverId | 876c24d3-21e4-4ea3-8c9e-4626fceb1287 |
| volumeId | ee31add8-fca4-464d-b660-32e7c58410fc |
+----------+--------------------------------------+


Let's review the Cinder attachments:
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+--------+------------+------+-------------+----------+----------------------------------------------------------------------------------------------------------------+
| ID                                   | Status | Name       | Size | Volume Type | Bootable | Attached to                                                                                                    |
+--------------------------------------+--------+------------+------+-------------+----------+----------------------------------------------------------------------------------------------------------------+
| 161b7f4c-2499-4308-82d5-e307343316e0 | in-use | lvm-vol3   | 1    | -           | false    | 876c24d3-21e4-4ea3-8c9e-4626fceb1287                                                                           |
| 27f6a68a-9ebc-43f9-bf13-3ac74d1eb620 | in-use | lvm-vol1   | 1    | -           | false    | 0e394775-3cb5-40e8-b95a-9839bc63ad21                                                                           |
| c04a404b-33fd-4c02-8a09-1c54a029e66e | in-use | lvm-vol2   | 1    | -           | false    | 876c24d3-21e4-4ea3-8c9e-4626fceb1287                                                                           |
| ee31add8-fca4-464d-b660-32e7c58410fc | in-use | lvm-ma-vol | 2    | lvm-ma      | false    | 86801778-5256-4171-94f0-e7e7af6aa92c,876c24d3-21e4-4ea3-8c9e-4626fceb1287,0e394775-3cb5-40e8-b95a-9839bc63ad21 |
+--------------------------------------+--------+------------+------+-------------+----------+----------------------------------------------------------------------------------------------------------------+


Looks good: the multi-attach volume is attached to three VMs, each at a different mount point.

Detach vm2 from the MA volume:
(overcloud) [stack@undercloud-0 ~]$ nova volume-detach vm2 ee31add8-fca4-464d-b660-32e7c58410fc 

The volume detached from vm2 and remains attached to vm1/vm3:
| ee31add8-fca4-464d-b660-32e7c58410fc | in-use | lvm-ma-vol | 2    | lvm-ma      | false    | 86801778-5256-4171-94f0-e7e7af6aa92c,876c24d3-21e4-4ea3-8c9e-4626fceb1287 |


Detach the volume from vm1:
(overcloud) [stack@undercloud-0 ~]$ nova volume-detach vm1 ee31add8-fca4-464d-b660-32e7c58410fc 

| ee31add8-fca4-464d-b660-32e7c58410fc | in-use | lvm-ma-vol | 2    | lvm-ma      | false    | 876c24d3-21e4-4ea3-8c9e-4626fceb1287 |


Now detach vm3 from the MA volume:
(overcloud) [stack@undercloud-0 ~]$ nova volume-detach vm3 ee31add8-fca4-464d-b660-32e7c58410fc 
| ee31add8-fca4-464d-b660-32e7c58410fc | available | lvm-ma-vol | 2    | lvm-ma      | false    |                                      |


Looking good: we managed to detach the multi-attach volume over two cycles.
In the first cycle the same mount point was used on both attached VMs; in the second cycle each of the three VMs had a unique mount point, and again all of them detached successfully.

Comment 25 errata-xmlrpc 2019-07-10 13:00:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1732

