Bug 1943181 - iSCSI: Not flushing and leaving leftover devices when multipath configuration has changed [NEEDINFO]
Summary: iSCSI: Not flushing and leaving leftover devices when multipath configuration...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-os-brick
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z16
Target Release: 13.0 (Queens)
Assignee: Gorka Eguileor
QA Contact: Tzach Shefi
URL:
Whiteboard:
Duplicates: 1956273 (view as bug list)
Depends On: 1942974 1943146
Blocks:
 
Reported: 2021-03-25 14:34 UTC by Gorka Eguileor
Modified: 2022-09-05 13:16 UTC (History)
CC List: 10 users

Fixed In Version: python-os-brick-2.3.9-11.el7ost
Doc Type: Bug Fix
Doc Text:
Before this update, when the Compute service (nova) made a `terminate-connection` call to the Block Storage service (cinder), single and multipath devices were not being flushed and there was a risk of data loss because these devices were in a `leftover` state.

The cause of this problem was that the os-brick `disconnect_volume` code assumed that the `use_multipath` parameter had the same value as the connector that was used in the original `connect_volume` call.

With this update, the Block Storage service changes how it performs disconnects. The os-brick code now properly flushes and detaches volumes when the multipath configuration in the Compute service changes for volumes attached to instances.
Clone Of: 1943146
Environment:
Last Closed: 2021-06-16 10:59:46 UTC
Target Upstream Version:
Embargoed:
gregraka: needinfo?


Attachments
Nova compute log from verification (1.67 MB, text/plain)
2021-06-07 14:41 UTC, Tzach Shefi
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1921381 0 None None None 2021-03-25 14:42:32 UTC
OpenStack gerrit 634675 0 None MERGED iSCSI: log exception if portals not found 2021-03-30 09:48:42 UTC
OpenStack gerrit 782992 0 None MERGED iSCSI: Fix flushing after multipath cfg change 2021-03-30 09:48:39 UTC
Red Hat Issue Tracker OSP-791 0 None None None 2022-09-05 13:16:39 UTC
Red Hat Product Errata RHBA-2021:2385 0 None None None 2021-06-16 11:00:22 UTC

Description Gorka Eguileor 2021-03-25 14:34:52 UTC
+++ This bug was initially created as a clone of Bug #1943146 +++
+++ This bug was initially created as a clone of Bug #1942974 +++

Description of problem:

The OS-Brick disconnect_volume code assumes that the use_multipath parameter used to instantiate the connector has the same value as the one used in the original connect_volume call.

Unfortunately, this is not necessarily true: Nova can attach a volume, its multipath configuration can then be enabled or disabled, and only afterwards is the detach issued.

This leads to a series of serious issues, such as:

- Not flushing the single-path device on disconnect_volume (possible data loss) and leaving it as a leftover device on the host when Nova calls terminate-connection on Cinder.

- Not flushing the multipath device (possible data loss) and leaving it as a leftover device, similarly to the previous case.
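
To make the mismatch concrete, here is a minimal, hypothetical sketch of the pattern (not Nova's actual code path; 'sudo' stands in for Nova's rootwrap helper, and the property values are just examples in the shape this kind of backend returns):

# Illustrative only: the connector is rebuilt at detach time from the
# *current* configuration, so use_multipath can differ from attach time.
from os_brick.initiator import connector

connection_properties = {   # as returned by Cinder's initialize_connection
    'target_iqn': 'iqn.2000-05.com.3pardata:21220002ac021f6b',
    'target_portal': '10.35.146.4:3260',
    'target_lun': 0,
}

# Attach time: [libvirt]/volume_use_multipath = False
attach_conn = connector.InitiatorConnector.factory(
    'ISCSI', 'sudo', use_multipath=False)
device_info = attach_conn.connect_volume(connection_properties)

# ... operator flips volume_use_multipath to True and restarts nova-compute ...

# Detach time: a new connector is built from the current config, so
# use_multipath no longer matches the value used for connect_volume.
detach_conn = connector.InitiatorConnector.factory(
    'ISCSI', 'sudo', use_multipath=True)
detach_conn.disconnect_volume(connection_properties, device_info)
# Before the fix, this call could skip flushing/removing the single-path
# device (or, in the reverse case, the multipath device).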


How reproducible:

There are many combinations, depending on whether we are enabling or disabling multipathing and what kind of information the Cinder driver returns for single-path connections, but in general it is easiest to reproduce on systems whose driver does not return target_portals and target_iqns for single-path connections.
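
For reference, a sketch of the two shapes of connection_properties involved; the key names are the os-brick iSCSI ones, the values are made up:

# Single-path style, typical of backends such as 3PAR iSCSI: only the
# singular keys are present.
single_path_props = {
    'target_iqn': 'iqn.2000-05.com.example:target0',
    'target_portal': '192.0.2.10:3260',
    'target_lun': 0,
}

# Multipath-friendly style: the driver also returns the plural keys,
# one entry per path.
multipath_props = {
    'target_iqns': ['iqn.2000-05.com.example:target0',
                    'iqn.2000-05.com.example:target1'],
    'target_portals': ['192.0.2.10:3260', '192.0.2.11:3260'],
    'target_luns': [0, 0],
}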

Steps to Reproduce:

- Enabling issue:

1. Configure Nova not to use multipathing (volume_use_multipath = False under [libvirt])
2. Attach a volume to an instance
3. Enable multipathing in Nova (volume_use_multipath = True under [libvirt]), set DEBUG log levels, and restart the service.
4. Write a lot of data to the volume inside the instance
5. Detach the volume from the instance

We'll see in the logs that there is no flushing or removal of the device, the device will still be present on the system (even though Cinder has already unmapped it), and if we check the volume contents it is very likely that not all data has been saved.

- Disabling issue:

1. Configure Nova to use multipathing (volume_use_multipath = True under [libvirt])
2. Attach a volume to an instance
3. Disable multipathing in Nova (volume_use_multipath = False under [libvirt]), set DEBUG log levels, and restart the service.
4. Write a lot of data to the volume inside the instance
5. Detach the volume from the instance

We'll see in the logs that there is no flushing or removal of the multipathed device, while all the individual devices are removed. Depending on the speed of the system and the storage, the multipath device may disappear from the system (if all the data had already been written), but most likely it will remain, without any device under it, and data will have been lost on the volume.
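
Purely as an illustration of inspecting actual device state at disconnect time instead of trusting the use_multipath flag (this is not the os-brick patch itself, which is linked in the gerrit entries above), a check like the following tells whether a SCSI device is currently held by a device-mapper multipath map:

import os

def dm_holders(dev_name):
    """Return the device-mapper holders (e.g. ['dm-0']) of a SCSI device
    such as 'sda', or an empty list if it is not part of a multipath map.
    Illustrative only; not the actual os-brick implementation.
    """
    holders_dir = '/sys/block/%s/holders' % dev_name
    if not os.path.isdir(holders_dir):
        return []
    return [h for h in os.listdir(holders_dir) if h.startswith('dm-')]

# A disconnect keyed off actual state would flush the dm device when
# dm_holders('sda') is non-empty, and flush/remove the single path
# otherwise, regardless of what use_multipath says.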

Comment 14 Luigi Toscano 2021-05-03 11:35:07 UTC
*** Bug 1956273 has been marked as a duplicate of this bug. ***

Comment 15 Tzach Shefi 2021-06-07 14:41:40 UTC
Created attachment 1789231 [details]
Nova compute log from verification

Verified on:
python2-os-brick-2.3.9-11.el7ost.noarch

Using 3PAR iSCSI as the Cinder backend, as it doesn't return target_portals and target_iqns for single-path connections, but rather a single 'target_iqn'.
Enabled privsep logging on the compute nodes.

We're going to start with the common flow that would most often be used:
switching from a non-multipath deployment to a multipath one.
Start out with Nova's default volume_use_multipath = False.

Boot an instance:
(overcloud) [stack@undercloud-0 ~]$ nova show inst1
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | compute13-0.redhat.local                                 |
| OS-EXT-SRV-ATTR:hostname             | inst1                                                    |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute13-0.redhat.local                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002                                        |
| OS-EXT-SRV-ATTR:kernel_id            |                                                          |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                        |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                          |
| OS-EXT-SRV-ATTR:reservation_id       | r-g5p16q81                                               |
| OS-EXT-SRV-ATTR:root_device_name     | /dev/vda                                                 |
| OS-EXT-SRV-ATTR:user_data            | -                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2021-06-07T07:35:21.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2021-06-07T07:35:02Z                                     |
| description                          | inst1                                                    |
| flavor:disk                          | 1                                                        |
| flavor:ephemeral                     | 0                                                        |
| flavor:extra_specs                   | {}                                                       |
| flavor:original_name                 | tiny                                                     |
| flavor:ram                           | 512                                                      |
| flavor:swap                          | 0                                                        |
| flavor:vcpus                         | 1                                                        |
| hostId                               | 417e9304ad081218d6b4268818d0367e6d1135e97e5ff0904a8a9a88 |
| host_status                          | UP                                                       |
| id                                   | 7ea91146-ecee-47ce-a83f-f547c1a3cc05                     |
| image                                | cirros (1e2c066a-5f65-4273-9fac-9a124a73e2b0)            |
| internal network                     | 192.168.0.21, 10.0.0.239                                 |
| key_name                             | key1                                                     |
| locked                               | False                                                    |
| metadata                             | {}                                                       |
| name                                 | inst1                                                    |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | inst1-sg                                                 |
| status                               | ACTIVE                                                   |
| tags                                 | []                                                       |
| tenant_id                            | 95c315374acd4d2b956a4354281190bc                         |
| updated                              | 2021-06-07T07:35:21Z                                     |
| user_id                              | 3c7a05dfa6314a18874863612dd19003                         |
+--------------------------------------+----------------------------------------------------------+

Create an empty volume:
(overcloud) [stack@undercloud-0 ~]$ cinder create 5 --name 3parVol
+--------------------------------+--------------------------------------+                                                                                                                                                                    
| Property                       | Value                                |                                                                                                                                                                    
+--------------------------------+--------------------------------------+                                                                                                                                                                    
| attachments                    | []                                   |                                                                                                                                                                    
| availability_zone              | nova                                 |                                                                                                                                                                    
| bootable                       | false                                |                                                                                                                                                                    
| consistencygroup_id            | None                                 |                                                                                                                                                                    
| created_at                     | 2021-06-07T13:21:00.000000           |                                                                                                                                                                    
| description                    | None                                 |                                                                                                                                                                    
| encrypted                      | False                                |                                                                                                                                                                    
| id                             | 97271836-7d0b-426e-b22c-06ec3d64557d |                                                                                                                                                                    
| metadata                       | {}                                   |                                                                                                                                                                    
| migration_status               | None                                 |                                                                                                                                                                    
| multiattach                    | False                                |                                                                                                                                                                    
| name                           | 3parVol                              |
| os-vol-host-attr:host          | controller13-0@3par#SSD_r5           |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 95c315374acd4d2b956a4354281190bc     |
| replication_status             | None                                 |
| size                           | 5                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2021-06-07T13:21:01.000000           |
| user_id                        | 3c7a05dfa6314a18874863612dd19003     |
| volume_type                    | tripleo                              |
+--------------------------------+--------------------------------------+


Attach the volume to the instance:
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach inst1 97271836-7d0b-426e-b22c-06ec3d64557d
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 97271836-7d0b-426e-b22c-06ec3d64557d |
| serverId | 7ea91146-ecee-47ce-a83f-f547c1a3cc05 |
| volumeId | 97271836-7d0b-426e-b22c-06ec3d64557d |
+----------+--------------------------------------+


Inside the instance, create a filesystem, mount the volume, and write some data:
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk 
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part 
vdb     253:16   0    5G  0 disk /root/tmp

Now let's switch nova.conf to volume_use_multipath = True.
Don't forget to restart the Nova docker container.

And now we detach the volume:
(overcloud) [stack@undercloud-0 ~]$ nova volume-detach inst1 97271836-7d0b-426e-b22c-06ec3d64557d
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name         | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 97271836-7d0b-426e-b22c-06ec3d64557d | available | 3parVol      | 5    | tripleo     | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

The instance is still up:
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                          |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| 7ea91146-ecee-47ce-a83f-f547c1a3cc05 | inst1 | ACTIVE | -          | Running     | internal=192.168.0.21, 10.0.0.239 |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+


And if we look at the Nova log we see:

2021-06-07 13:30:38.077 6 DEBUG nova.virt.libvirt.guest [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Attempting initial detach for device vdb detach_device_with_retry /usr/lib/python2.7/site-packages/nova/virt/libvirt/guest.py:440
2021-06-07 13:30:38.077 6 DEBUG nova.virt.libvirt.guest [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] detach device xml: <disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <source dev="/dev/sda"/>                                             <-- host device /dev/sda is the volume seen as vdb inside the instance
  <target bus="virtio" dev="vdb"/>
  <serial>97271836-7d0b-426e-b22c-06ec3d64557d</serial>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x06" function="0x0"/>
</disk>


The whole section of the log:

2021-06-07 13:30:38.201 6 DEBUG nova.virt.libvirt.volume.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] [instance: 7ea91146-ecee-47ce-a83f-f547c1a3cc05] calling os-brick to detach iSCSI Volume disconnect_volume /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/iscsi.py:72
2021-06-07 13:30:38.201 6 DEBUG os_brick.initiator.connectors.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] ==> disconnect_volume: call u"{'args': (<os_brick.initiator.connectors.iscsi.ISCSIConnector object at 0x7fedbac970d0>, {u'device_path': u'/dev/sda', u'target_discovered': True, u'encrypted': False, u'qos_specs': None, u'target_iqn': u'iqn.2000-05.com.3pardata:21220002ac021f6b', u'target_portal': u'10.35.146.4:3260', u'target_lun': 0, u'access_mode': u'rw'}, None), 'kwargs': {}}" trace_logging_wrapper /usr/lib/python2.7/site-packages/os_brick/utils.py:146
2021-06-07 13:30:38.202 6 DEBUG oslo_concurrency.lockutils [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Lock "connect_volume" acquired by "os_brick.initiator.connectors.iscsi.disconnect_volume" :: waited 0.000s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:273
2021-06-07 13:30:38.204 6 INFO oslo.privsep.daemon [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/usr/share/nova/nova-dist.conf', '--config-file', '/etc/nova/nova.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpkXHqz7/privsep.sock']
2021-06-07 13:30:38.967 6 INFO oslo.privsep.daemon [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Spawned new privsep daemon via rootwrap
2021-06-07 13:30:38.968 6 DEBUG oslo.privsep.daemon [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Accepted privsep connection to /tmp/tmpkXHqz7/privsep.sock __init__ /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:331
2021-06-07 13:30:38.902 127 INFO oslo.privsep.daemon [-] privsep daemon starting
2021-06-07 13:30:38.907 127 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2021-06-07 13:30:38.910 127 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
2021-06-07 13:30:38.910 127 INFO oslo.privsep.daemon [-] privsep daemon running as pid 127
2021-06-07 13:30:38.970 127 DEBUG oslo.privsep.daemon [-] privsep: request[140658973530224]: (1,) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2021-06-07 13:30:38.970 127 DEBUG oslo.privsep.daemon [-] privsep: reply[140658973530224]: (2,) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2021-06-07 13:30:38.971 127 DEBUG oslo.privsep.daemon [-] privsep: request[140658973530224]: (3, 'os_brick.privileged.rootwrap.execute_root', ('iscsiadm', '-m', 'discoverydb', '-o', 'show', '-P', 1), {'check_exit_code': 0}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2021-06-07 13:30:39.131 127 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): iscsiadm -m discoverydb -o show -P 1 execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2021-06-07 13:30:39.221 127 DEBUG oslo_concurrency.processutils [-] CMD "iscsiadm -m discoverydb -o show -P 1" returned: 0 in 0.090s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2021-06-07 13:30:39.222 127 DEBUG oslo.privsep.daemon [-] privsep: reply[140658973530224]: (4, ('SENDTARGETS:\nNo targets found.\niSNS:\nNo targets found.\nSTATIC:\nTarget: iqn.2000-05.com.3pardata:21220002ac021f6b\n\tPortal: 10.35.146.4:3260,-1\n\t\tIface Name: default\nFIRMWARE:\nNo targets found.\n', '')) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2021-06-07 13:30:39.223 6 DEBUG os_brick.initiator.connectors.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] iscsiadm ['-m', 'discoverydb', '-o', 'show', '-P', 1]: stdout=SENDTARGETS:
No targets found.
iSNS:
No targets found.
STATIC:
Target: iqn.2000-05.com.3pardata:21220002ac021f6b
	Portal: 10.35.146.4:3260,-1
		Iface Name: default
FIRMWARE:
No targets found.
 stderr= _run_iscsiadm_bare /usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py:1121
2021-06-07 13:30:39.223 6 DEBUG os_brick.initiator.connectors.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Regex to get portals from discoverydb: ^SENDTARGETS:
.*?^DiscoveryAddress: 10.35.146.4,3260.*?
(.*?)^(?:DiscoveryAddress|iSNS):.* _get_discoverydb_portals /usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py:363
2021-06-07 13:30:39.225 6 DEBUG os_brick.initiator.connectors.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Getting connected devices for (ips,iqns,luns)=[(u'10.35.146.4:3260', u'iqn.2000-05.com.3pardata:21220002ac021f6b', 0)] _get_connection_devices /usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py:811
2021-06-07 13:30:39.225 127 DEBUG oslo.privsep.daemon [-] privsep: request[140658973530224]: (3, 'os_brick.privileged.rootwrap.execute_root', ('iscsiadm', '-m', 'node'), {'check_exit_code': False}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2021-06-07 13:30:39.226 127 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): iscsiadm -m node execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2021-06-07 13:30:39.315 127 DEBUG oslo_concurrency.processutils [-] CMD "iscsiadm -m node" returned: 0 in 0.090s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2021-06-07 13:30:39.316 127 DEBUG oslo.privsep.daemon [-] privsep: reply[140658973530224]: (4, ('10.35.146.4:3260,-1 iqn.2000-05.com.3pardata:21220002ac021f6b\n', '')) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2021-06-07 13:30:39.317 127 DEBUG oslo.privsep.daemon [-] privsep: request[140658973530224]: (3, 'os_brick.privileged.rootwrap.execute_root', ('iscsiadm', '-m', 'session'), {'check_exit_code': (0, 21, 255)}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2021-06-07 13:30:39.318 127 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): iscsiadm -m session execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2021-06-07 13:30:39.363 6 DEBUG nova.network.base_api [req-018dd69f-57d9-41cc-b991-5b9d40e7be52 - - - - -] [instance: 7ea91146-ecee-47ce-a83f-f547c1a3cc05] Updating instance_info_cache with network_info: [{"profile": {}, "ovs_interfaceid": "ba302204-9511-4800-b20d-c21953f60538", "preserve_on_delete": false, "network": {"bridge": "br-int", "subnets": [{"ips": [{"meta": {}, "version": 4, "type": "fixed", "floating_ips": [{"meta": {}, "version": 4, "type": "floating", "address": "10.0.0.239"}], "address": "192.168.0.21"}], "version": 4, "meta": {"dhcp_server": "192.168.0.11"}, "dns": [{"meta": {}, "version": 4, "type": "dns", "address": "8.8.8.8"}], "routes": [], "cidr": "192.168.0.0/24", "gateway": {"meta": {}, "version": 4, "type": "gateway", "address": "192.168.0.1"}}], "meta": {"injected": false, "tenant_id": "95c315374acd4d2b956a4354281190bc", "mtu": 1450}, "id": "a51aee7c-5381-43fc-b038-a81dc11cab30", "label": "internal"}, "devname": "tapba302204-95", "vnic_type": "normal", "qbh_params": null, "meta": {}, "details": {"port_filter": true, "bridge_name": "br-int", "datapath_type": "system", "ovs_hybrid_plug": true}, "address": "fa:16:3e:06:a9:ab", "active": true, "type": "ovs", "id": "ba302204-9511-4800-b20d-c21953f60538", "qbg_params": null}] update_instance_cache_with_nw_info /usr/lib/python2.7/site-packages/nova/network/base_api.py:48
2021-06-07 13:30:39.384 6 DEBUG oslo_concurrency.lockutils [req-018dd69f-57d9-41cc-b991-5b9d40e7be52 - - - - -] Releasing semaphore "refresh_cache-7ea91146-ecee-47ce-a83f-f547c1a3cc05" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:228
2021-06-07 13:30:39.384 6 DEBUG nova.compute.manager [req-018dd69f-57d9-41cc-b991-5b9d40e7be52 - - - - -] [instance: 7ea91146-ecee-47ce-a83f-f547c1a3cc05] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python2.7/site-packages/nova/compute/manager.py:7175
2021-06-07 13:30:39.411 127 DEBUG oslo_concurrency.processutils [-] CMD "iscsiadm -m session" returned: 0 in 0.093s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2021-06-07 13:30:39.411 127 DEBUG oslo.privsep.daemon [-] privsep: reply[140658973530224]: (4, ('tcp: [2] 10.35.146.4:3260,122 iqn.2000-05.com.3pardata:21220002ac021f6b (non-flash)\n', '')) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2021-06-07 13:30:39.412 6 DEBUG os_brick.initiator.connectors.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] iscsiadm ('-m', 'session'): stdout=tcp: [2] 10.35.146.4:3260,122 iqn.2000-05.com.3pardata:21220002ac021f6b (non-flash)
 stderr= _run_iscsiadm_bare /usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py:1121
2021-06-07 13:30:39.413 6 DEBUG os_brick.initiator.connectors.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] iscsi session list stdout=tcp: [2] 10.35.146.4:3260,122 iqn.2000-05.com.3pardata:21220002ac021f6b (non-flash)
 stderr= _run_iscsi_session /usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py:1110
2021-06-07 13:30:39.415 6 DEBUG os_brick.initiator.connectors.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Resulting device map defaultdict(<function <lambda> at 0x7fedb8139ed8>, {(u'10.35.146.4:3260', u'iqn.2000-05.com.3pardata:21220002ac021f6b'): (set([u'sda']), set([]))}) _get_connection_devices /usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py:843
2021-06-07 13:30:39.416 6 DEBUG os_brick.initiator.linuxscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Removing single pathed devices sda remove_connection /usr/lib/python2.7/site-packages/os_brick/initiator/linuxscsi.py:271
2021-06-07 13:30:39.416 6 DEBUG os_brick.initiator.linuxscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Flushing IO for device /dev/sda flush_device_io /usr/lib/python2.7/site-packages/os_brick/initiator/linuxscsi.py:317
2021-06-07 13:30:39.417 127 DEBUG oslo.privsep.daemon [-] privsep: request[140658973530224]: (3, 'os_brick.privileged.rootwrap.execute_root', ('blockdev', '--flushbufs', u'/dev/sda'), {'attempts': 3, 'interval': 10, 'timeout': 300}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2021-06-07 13:30:39.417 127 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): blockdev --flushbufs /dev/sda execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2021-06-07 13:30:39.509 127 DEBUG oslo_concurrency.processutils [-] CMD "blockdev --flushbufs /dev/sda" returned: 0 in 0.092s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2021-06-07 13:30:39.509 127 DEBUG oslo.privsep.daemon [-] privsep: reply[140658973530224]: (4, ('', '')) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2021-06-07 13:30:39.510 6 DEBUG os_brick.initiator.linuxscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Remove SCSI device /dev/sda with /sys/block/sda/device/delete remove_scsi_device /usr/lib/python2.7/site-packages/os_brick/initiator/linuxscsi.py:76
2021-06-07 13:30:39.511 127 DEBUG oslo.privsep.daemon [-] privsep: request[140658973530224]: (3, 'os_brick.privileged.rootwrap.execute_root', ('tee', '-a', u'/sys/block/sda/device/delete'), {'process_input': '1'}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2021-06-07 13:30:39.511 127 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): tee -a /sys/block/sda/device/delete execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2021-06-07 13:30:39.614 127 DEBUG oslo_concurrency.processutils [-] CMD "tee -a /sys/block/sda/device/delete" returned: 0 in 0.103s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2021-06-07 13:30:39.615 127 DEBUG oslo.privsep.daemon [-] privsep: reply[140658973530224]: (4, ('1', '')) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2021-06-07 13:30:39.616 6 DEBUG os_brick.initiator.linuxscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Checking to see if SCSI volumes sda have been removed. wait_for_volumes_removal /usr/lib/python2.7/site-packages/os_brick/initiator/linuxscsi.py:84
2021-06-07 13:30:39.616 6 DEBUG os_brick.initiator.linuxscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] SCSI volumes sda have been removed. wait_for_volumes_removal /usr/lib/python2.7/site-packages/os_brick/initiator/linuxscsi.py:93
2021-06-07 13:30:39.616 6 DEBUG os_brick.initiator.connectors.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Disconnecting from: [(u'10.35.146.4:3260', u'iqn.2000-05.com.3pardata:21220002ac021f6b')] _disconnect_connection /usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py:1098
2021-06-07 13:30:39.617 127 DEBUG oslo.privsep.daemon [-] privsep: request[140658973530224]: (3, 'os_brick.privileged.rootwrap.execute_root', ('iscsiadm', '-m', 'node', '-T', u'iqn.2000-05.com.3pardata:21220002ac021f6b', '-p', u'10.35.146.4:3260', '--op', 'update', '-n', 'node.startup', '-v', 'manual'), {'attempts': 1, 'check_exit_code': (0, 21, 255), 'delay_on_retry': True}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2021-06-07 13:30:39.618 127 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): iscsiadm -m node -T iqn.2000-05.com.3pardata:21220002ac021f6b -p 10.35.146.4:3260 --op update -n node.startup -v manual execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2021-06-07 13:30:39.708 127 DEBUG oslo_concurrency.processutils [-] CMD "iscsiadm -m node -T iqn.2000-05.com.3pardata:21220002ac021f6b -p 10.35.146.4:3260 --op update -n node.startup -v manual" returned: 0 in 0.090s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2021-06-07 13:30:39.709 127 DEBUG oslo.privsep.daemon [-] privsep: reply[140658973530224]: (4, ('', '')) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2021-06-07 13:30:39.710 6 DEBUG os_brick.initiator.connectors.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] iscsiadm ('--op', 'update', '-n', 'node.startup', '-v', 'manual'): stdout= stderr= _run_iscsiadm /usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py:999
2021-06-07 13:30:39.711 127 DEBUG oslo.privsep.daemon [-] privsep: request[140658973530224]: (3, 'os_brick.privileged.rootwrap.execute_root', ('iscsiadm', '-m', 'node', '-T', u'iqn.2000-05.com.3pardata:21220002ac021f6b', '-p', u'10.35.146.4:3260', '--logout'), {'attempts': 1, 'check_exit_code': (0, 21, 255), 'delay_on_retry': True}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2021-06-07 13:30:39.711 127 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): iscsiadm -m node -T iqn.2000-05.com.3pardata:21220002ac021f6b -p 10.35.146.4:3260 --logout execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2021-06-07 13:30:39.822 127 DEBUG oslo_concurrency.processutils [-] CMD "iscsiadm -m node -T iqn.2000-05.com.3pardata:21220002ac021f6b -p 10.35.146.4:3260 --logout" returned: 0 in 0.111s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2021-06-07 13:30:39.823 127 DEBUG oslo.privsep.daemon [-] privsep: reply[140658973530224]: (4, ('Logging out of session [sid: 2, target: iqn.2000-05.com.3pardata:21220002ac021f6b, portal: 10.35.146.4,3260]\nLogout of [sid: 2, target: iqn.2000-05.com.3pardata:21220002ac021f6b, portal: 10.35.146.4,3260] successful.\n', '')) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2021-06-07 13:30:39.824 6 DEBUG os_brick.initiator.connectors.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] iscsiadm ('--logout',): stdout=Logging out of session [sid: 2, target: iqn.2000-05.com.3pardata:21220002ac021f6b, portal: 10.35.146.4,3260]
Logout of [sid: 2, target: iqn.2000-05.com.3pardata:21220002ac021f6b, portal: 10.35.146.4,3260] successful.
 stderr= _run_iscsiadm /usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py:999
2021-06-07 13:30:39.825 127 DEBUG oslo.privsep.daemon [-] privsep: request[140658973530224]: (3, 'os_brick.privileged.rootwrap.execute_root', ('iscsiadm', '-m', 'node', '-T', u'iqn.2000-05.com.3pardata:21220002ac021f6b', '-p', u'10.35.146.4:3260', '--op', 'delete'), {'attempts': 5, 'check_exit_code': (0, 21, 255), 'delay_on_retry': True}) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:443
2021-06-07 13:30:39.825 127 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): iscsiadm -m node -T iqn.2000-05.com.3pardata:21220002ac021f6b -p 10.35.146.4:3260 --op delete execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372
2021-06-07 13:30:39.922 127 DEBUG oslo_concurrency.processutils [-] CMD "iscsiadm -m node -T iqn.2000-05.com.3pardata:21220002ac021f6b -p 10.35.146.4:3260 --op delete" returned: 0 in 0.096s execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:409
2021-06-07 13:30:39.923 127 DEBUG oslo.privsep.daemon [-] privsep: reply[140658973530224]: (4, ('', '')) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:456
2021-06-07 13:30:39.925 6 DEBUG os_brick.initiator.connectors.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] iscsiadm ('--op', 'delete'): stdout= stderr= _run_iscsiadm /usr/lib/python2.7/site-packages/os_brick/initiator/connectors/iscsi.py:999
2021-06-07 13:30:39.926 6 DEBUG oslo_concurrency.lockutils [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] Lock "connect_volume" released by "os_brick.initiator.connectors.iscsi.disconnect_volume" :: held 1.724s inner /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:285
2021-06-07 13:30:39.926 6 DEBUG os_brick.initiator.connectors.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] <== disconnect_volume: return (1724ms) None trace_logging_wrapper /usr/lib/python2.7/site-packages/os_brick/utils.py:170
2021-06-07 13:30:39.927 6 DEBUG nova.virt.libvirt.volume.iscsi [req-79b80767-6a80-4f99-b813-903f2f3540cf 3c7a05dfa6314a18874863612dd19003 95c315374acd4d2b956a4354281190bc - default default] [instance: 7ea91146-ecee-47ce-a83f-f547c1a3cc05] Disconnected iSCSI Volume disconnect_volume /usr/lib/python2.7/site-packages/nova/virt/libvirt/volume/iscsi.py:78

A few lines before the end we see the host device /dev/sda being flushed before removal:
2021-06-07 13:30:39.417 127 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): blockdev --flushbufs /dev/sda execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:372


And so, despite having switched from non-multipath to multipath,
we see the volume's host device flushed and disconnected.

This should be sufficient to verify the BZ.
However, going the extra mile, I'll also test the opposite path:
moving from a multipath-enabled deployment to a non-multipath one.

I deleted inst1 and the volume from above, and created new ones instead:
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach inst1 52932af3-dcd2-45b0-b6dc-8ce9ef5ac15e
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 52932af3-dcd2-45b0-b6dc-8ce9ef5ac15e |
| serverId | 463f4427-3c6a-4865-84b8-374a2fd3db0d |
| volumeId | 52932af3-dcd2-45b0-b6dc-8ce9ef5ac15e |
+----------+--------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ nova show inst1
+--------------------------------------+----------------------------------------------------------------------------------+
| Property                             | Value                                                                            |                              
+--------------------------------------+----------------------------------------------------------------------------------+                              
| OS-DCF:diskConfig                    | MANUAL                                                                           |                              
| OS-EXT-AZ:availability_zone          | nova                                                                             |                              
| OS-EXT-SRV-ATTR:host                 | compute13-0.redhat.local                                                         |                              
| OS-EXT-SRV-ATTR:hostname             | inst1                                                                            |                              
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute13-0.redhat.local                                                         |                              
| OS-EXT-SRV-ATTR:instance_name        | instance-00000008                                                                |
| OS-EXT-SRV-ATTR:kernel_id            |                                                                                  |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                                                |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                                                  |
| OS-EXT-SRV-ATTR:reservation_id       | r-3zohg4hh                                                                       |
| OS-EXT-SRV-ATTR:root_device_name     | /dev/vda                                                                         |
| OS-EXT-SRV-ATTR:user_data            | -                                                                                |
| OS-EXT-STS:power_state               | 1                                                                                |
| OS-EXT-STS:task_state                | -                                                                                |
| OS-EXT-STS:vm_state                  | active                                                                           |
| OS-SRV-USG:launched_at               | 2021-06-07T14:13:31.000000                                                       |
| OS-SRV-USG:terminated_at             | -                                                                                |
| accessIPv4                           |                                                                                  |
| accessIPv6                           |                                                                                  |
| config_drive                         |                                                                                  |
| created                              | 2021-06-07T14:13:15Z                                                             |
| description                          | inst1                                                                            |
| flavor:disk                          | 1                                                                                |
| flavor:ephemeral                     | 0                                                                                |
| flavor:extra_specs                   | {}                                                                               |
| flavor:original_name                 | tiny                                                                             |
| flavor:ram                           | 512                                                                              |
| flavor:swap                          | 0                                                                                |
| flavor:vcpus                         | 1                                                                                |
| hostId                               | 417e9304ad081218d6b4268818d0367e6d1135e97e5ff0904a8a9a88                         |
| host_status                          | UP                                                                               |
| id                                   | 463f4427-3c6a-4865-84b8-374a2fd3db0d                                             |
| image                                | cirros (1e2c066a-5f65-4273-9fac-9a124a73e2b0)                                    |
| internal network                     | 192.168.0.27, 10.0.0.239                                                         |
| key_name                             | key1                                                                             |
| locked                               | False                                                                            |
| metadata                             | {}                                                                               |
| name                                 | inst1                                                                            |
| os-extended-volumes:volumes_attached | [{"id": "52932af3-dcd2-45b0-b6dc-8ce9ef5ac15e", "delete_on_termination": false}] |
| progress                             | 0                                                                                |
| security_groups                      | inst1-sg                                                                         |
| status                               | ACTIVE                                                                           |
| tags                                 | []                                                                               |
| tenant_id                            | 95c315374acd4d2b956a4354281190bc                                                 |
| updated                              | 2021-06-07T14:13:31Z                                                             |
| user_id                              | 3c7a05dfa6314a18874863612dd19003                                                 |
+--------------------------------------+----------------------------------------------------------------------------------+

We left compute13-0.redhat.local with multipath enabled.
Confirming that inst1's volume is attached via multipath, we see:

[root@compute13-0 ~]# multipath -ll
360002ac0000000000000161c00021f6b dm-0 3PARdata,VV              
size=5.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 8:0:0:0  sda 8:0  active ready running
  |- 9:0:0:0  sdb 8:16 active ready running
  |- 10:0:0:0 sdc 8:32 active ready running
  `- 11:0:0:0 sdd 8:48 active ready running


And now let's switch Nova from multipath to non-multipath and restart Nova.
As expected, the already-attached volume remains attached via multipath:

[root@compute13-0 ~]# multipath -ll
360002ac0000000000000161c00021f6b dm-0 3PARdata,VV              
size=5.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 8:0:0:0  sda 8:0  active ready running
  |- 9:0:0:0  sdb 8:16 active ready running
  |- 10:0:0:0 sdc 8:32 active ready running
  `- 11:0:0:0 sdd 8:48 active ready running


Not that it matters much, but let's create a filesystem, mount it, and write some data on this volume before we detach it.
Detach the volume:
(overcloud) [stack@undercloud-0 ~]$ nova volume-detach inst1 52932af3-dcd2-45b0-b6dc-8ce9ef5ac15e

I can't paste all the log excerpts here, it would be too much text for the BZ,
but the full log is attached and it covers both cases.

If you look inside you'll see that, as expected, despite switching from multipath to non-multipath,
the volume that was multipathed before the change is flushed and removed correctly.

End result: the volume is detached:

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID                                   | Status    | Name         | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 52932af3-dcd2-45b0-b6dc-8ce9ef5ac15e | available | 3par_vol2    | 5    | tripleo     | false    |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

We're good to verify: tested volume detach from an instance in both scenarios,
a non-multipath system switched to multipath and vice versa; both worked as expected.

Comment 16 Tzach Shefi 2021-06-07 14:42:33 UTC
See above, forgot to set the flag.

Comment 21 errata-xmlrpc 2021-06-16 10:59:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 13.0 bug fix and enhancement advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:2385

