Bug 1943146 - iSCSI: Not flushing and leaving leftover devices when multipath configuration has changed
Summary: iSCSI: Not flushing and leaving leftover devices when multipath configuration...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-os-brick
Version: 16.1 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z7
Target Release: 16.1 (Train on RHEL 8.2)
Assignee: Gorka Eguileor
QA Contact: Tzach Shefi
URL:
Whiteboard:
Depends On: 1942974
Blocks: 1943181
 
Reported: 2021-03-25 13:16 UTC by Gorka Eguileor
Modified: 2021-12-09 20:18 UTC (History)
CC List: 8 users

Fixed In Version: python-os-brick-2.10.5-1.20201114041632.el8ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1942974
Cloned to: 1943181
Environment:
Last Closed: 2021-12-09 20:18:25 UTC
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1921381 0 None None None 2021-03-25 14:42:03 UTC
OpenStack gerrit 782992 0 None MERGED iSCSI: Fix flushing after multipath cfg change 2021-03-30 09:49:24 UTC
Red Hat Issue Tracker OSP-776 0 None None None 2021-11-18 11:31:16 UTC
Red Hat Product Errata RHBA-2021:3762 0 None None None 2021-12-09 20:18:54 UTC

Description Gorka Eguileor 2021-03-25 13:16:58 UTC
+++ This bug was initially created as a clone of Bug #1942974 +++

Description of problem:

OS-Brick disconnect_volume code assumes that the use_multipath parameter that is used to instantiate the connector has the same value as the one that was used on the original connect_volume call.

Unfortunately, this is not necessarily true: Nova can attach a volume, its multipath configuration can then be enabled or disabled, and only afterwards is a detach issued.

This leads to a series of serious issues such as:

- Not flushing the single path on disconnect_volume (possible data loss) and leaving it as a leftover device on the host when Nova calls terminate-connection on Cinder.

- Not flushing the multipath device (possible data loss) and leaving it as a leftover device, similarly to the previous case.
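
A minimal sketch of the mismatch (illustrative only; the real calls happen inside Nova's libvirt volume drivers, and the connection_properties values below are taken from the verification logs later in this report):

from os_brick.initiator import connector

connection_properties = {
    'target_portal': '10.35.146.3:3260',
    'target_iqn': 'iqn.2000-05.com.3pardata:20220002ac021f6b',
    'target_lun': 0,
}

# Attach time: volume_use_multipath = False, so a single-pathed connector is built.
attach_conn = connector.InitiatorConnector.factory(
    'ISCSI', root_helper='sudo', use_multipath=False)
device_info = attach_conn.connect_volume(connection_properties)  # e.g. /dev/sda

# ... the operator flips volume_use_multipath and restarts nova-compute ...

# Detach time: a *new* connector is built, now with use_multipath=True, so
# disconnect_volume follows the multipath logic even though the attach was
# single-pathed, and the single path may never be flushed or removed.
detach_conn = connector.InitiatorConnector.factory(
    'ISCSI', root_helper='sudo', use_multipath=True)
detach_conn.disconnect_volume(connection_properties, device_info)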


How reproducible:

There are many combinations depending on whether we are enabling or disabling multipathing and what kind of information the Cinder driver returns for single-pathed connections, but in general it's easy to reproduce on systems that don't return target_portals and target_iqns on single-pathed connections.
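
For illustration (these dicts are made up, not taken from a real driver response), the difference that matters is whether the driver's connection info includes the plural multipath keys:

# Single-pathed response, as returned by e.g. the 3PAR iSCSI driver used in the verification below.
single_path_props = {
    'target_portal': '10.35.146.3:3260',
    'target_iqn': 'iqn.2000-05.com.3pardata:20220002ac021f6b',
    'target_lun': 0,
}

# Multipath-aware response; the second portal here is hypothetical.
multipath_props = {
    'target_portals': ['10.35.146.3:3260', '10.35.146.4:3260'],
    'target_iqns': ['iqn.2000-05.com.3pardata:20220002ac021f6b'] * 2,
    'target_luns': [0, 0],
}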

Steps to Reproduce:

- Enabling issue:

1. Configure Nova not to use multipathing  (volume_use_multipath = False under [libvirt])
2. Attach a volume to an instance
3. Enable multipathing in Nova (volume_use_multipath = True under [libvirt]), set DEBUG log levels, and restart the service.
4. Write a lot of data to the volume inside the instance
5. Detach the volume from the instance

We'll see in the logs that there is no flushing or removal of the device, the device will still be present on the system (even though Cinder has already unmapped it), and if we check the volume contents it's very likely that not all data has been saved.

- Disabling issue:

1. Configure Nova to use multipathing  (volume_use_multipath = True under [libvirt])
2. Attach a volume to an instance
3. Disable multipathing in Nova (volume_use_multipath = False under [libvirt]), set DEBUG log levels, and restart the service.
4. Write a lot of data to the volume inside the instance
5. Detach the volume from the instance

We'll see in the logs that there is no flushing or removal of the multipathed device, while all the individual devices are removed. Depending on the speed of the system and storage, the multipath device may disappear from the system (if all the data had already been written), but most likely the multipathed device will remain with no devices under it, and data will have been lost on the volume.
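
A quick way to spot the leftovers described in either scenario is to look at the compute host after the detach. The helper below is only an illustrative sketch; the WWID prefix is an assumption based on the 3PAR volume shown in the verification later in this report:

import glob
import subprocess

def leftover_multipath_maps(wwid_prefix='360002ac'):
    # Device-mapper multipath maps keep a node under /dev/mapper named after the volume WWID.
    return [node for node in glob.glob('/dev/mapper/*')
            if node.split('/')[-1].startswith(wwid_prefix)]

maps = leftover_multipath_maps()
if maps:
    print('Leftover multipath devices:', maps)
    for m in maps:
        # 'multipath -ll' shows whether any paths remain under the leftover map.
        subprocess.run(['multipath', '-ll', m], check=False)
else:
    print('No leftover multipath devices found.')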

Comment 11 Tzach Shefi 2021-08-12 10:38:15 UTC
Verified on:
python3-os-brick-2.10.5-1.20210706143310.634fb4a.el8ost

Using 3PAR iSCSI as the Cinder backend, as it doesn't return target_portals and target_iqns on single-pathed connections, but rather a single 'target_iqn'.
Enabled privsep logging on the compute nodes.

Starting with the common flow of switching from a non-multipath deployment to a multipathed one.
Nova's default is volume_use_multipath = False.

Booted an instance:
(overcloud) [stack@undercloud-0 ~]$ nova show inst1                                                                                                                                                                                          
+--------------------------------------+----------------------------------------------------------+                                                                                                                                          
| Property                             | Value                                                    |                                                                                                                                          
+--------------------------------------+----------------------------------------------------------+                                                                                                                                          
| OS-DCF:diskConfig                    | MANUAL                                                   |                                                                                                                                          
| OS-EXT-AZ:availability_zone          | nova                                                     |                                                                                                                                          
| OS-EXT-SRV-ATTR:host                 | compute-1.redhat.local                                   |                                                                                                                                          
| OS-EXT-SRV-ATTR:hostname             | inst1                                                    |                                                                                                                                          
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute-1.redhat.local                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002                                        |
| OS-EXT-SRV-ATTR:kernel_id            |                                                          |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                        |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                          |
| OS-EXT-SRV-ATTR:reservation_id       | r-6rgowiyz                                               |
| OS-EXT-SRV-ATTR:root_device_name     | /dev/vda                                                 |
| OS-EXT-SRV-ATTR:user_data            | -                                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2021-08-12T09:22:33.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2021-08-12T09:22:21Z                                     |
| description                          | inst1                                                    |
| flavor:disk                          | 1                                                        |
| flavor:ephemeral                     | 0                                                        |
| flavor:extra_specs                   | {}                                                       |
| flavor:original_name                 | tiny                                                     |
| flavor:ram                           | 512                                                      |
| flavor:swap                          | 0                                                        |
| flavor:vcpus                         | 1                                                        |
| hostId                               | 02c567078911269e4d364ab8a8afd1cbc33e840f27efc1e173cf3f61 |
| host_status                          | UP                                                       |
| id                                   | 762c72b2-8d28-4537-82f6-37d6b0e72331                     |
| image                                | cirros (c0d92435-9e28-465b-a77b-36defb162699)            |
| internal network                     | 192.168.0.18, 10.0.0.224                                 |
| key_name                             | key1                                                     |
| locked                               | False                                                    |
| locked_reason                        | -                                                        |
| metadata                             | {}                                                       |
| name                                 | inst1                                                    |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | inst1-sg                                                 |
| server_groups                        | []                                                       |
| status                               | ACTIVE                                                   |
| tags                                 | []                                                       |
| tenant_id                            | ca57bc259e7c431ba1d2acefe760cbc8                         |
| trusted_image_certificates           | -                                                        |
| updated                              | 2021-08-12T09:22:33Z                                     |
| user_id                              | f69591d38a374ea1a8f9032ff08c637a                         |
+--------------------------------------+----------------------------------------------------------+

Create an empty 3PAR-backed Cinder volume:
(overcloud) [stack@undercloud-0 ~]$ cinder create 4 --name 3parVol --volume-type 3par
+--------------------------------+--------------------------------------+
| Property                       | Value                                |                                                                                                                                                                    
+--------------------------------+--------------------------------------+                                                                                                                                                                    
| attachments                    | []                                   |                                                                                                                                                                    
| availability_zone              | nova                                 |                                                                                                                                                                    
| bootable                       | false                                |                                                                                                                                                                    
| consistencygroup_id            | None                                 |                                                                                                                                                                    
| created_at                     | 2021-08-12T09:41:06.000000           |                                                                                                                                                                    
| description                    | None                                 |                                                                                                                                                                    
| encrypted                      | False                                |                                                                                                                                                                    
| id                             | 0ddd4505-98da-4028-b8c8-aeb86c8eae75 |                                                                                                                                                                    
| metadata                       | {}                                   |                                                                                                                                                                    
| migration_status               | None                                 |                                                                                                                                                                    
| multiattach                    | False                                |                                                                                                                                                                    
| name                           | 3parVol                              |                                                                                                                                                                    
| os-vol-host-attr:host          | None                                 |                                                                                                                                                                    
| os-vol-mig-status-attr:migstat | None                                 |                                                                                                                                                                    
| os-vol-mig-status-attr:name_id | None                                 |                                                                                                                                                                    
| os-vol-tenant-attr:tenant_id   | ca57bc259e7c431ba1d2acefe760cbc8     |                                                                                                                                                                    
| replication_status             | None                                 |                                                                                                                                                                    
| size                           | 4                                    |                                                                                                                                                                    
| snapshot_id                    | None                                 |                                                                                                                                                                    
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | f69591d38a374ea1a8f9032ff08c637a     |
| volume_type                    | 3par                                 |
+--------------------------------+--------------------------------------+

(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| ID                                   | Status    | Name    | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| 0ddd4505-98da-4028-b8c8-aeb86c8eae75 | available | 3parVol | 4    | 3par        | false    |             |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+

Attach volume to instance:
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach inst1 0ddd4505-98da-4028-b8c8-aeb86c8eae75
+-----------------------+--------------------------------------+
| Property              | Value                                |
+-----------------------+--------------------------------------+
| delete_on_termination | False                                |
| device                | /dev/vdb                             |
| id                    | 0ddd4505-98da-4028-b8c8-aeb86c8eae75 |
| serverId              | 762c72b2-8d28-4537-82f6-37d6b0e72331 |
| tag                   | -                                    |
| volumeId              | 0ddd4505-98da-4028-b8c8-aeb86c8eae75 |
+-----------------------+--------------------------------------+

Inside the instance, mount the volume:
# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda     253:0    0    1G  0 disk 
|-vda1  253:1    0 1015M  0 part /
`-vda15 253:15   0    8M  0 part 
vdb     253:16   0    4G  0 disk /root/tmp


Now let's switch nova.conf to volume_use_multipath = True:

[root@compute-1 ~]# grep volume_use /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf                                                                                                                                    
#volume_use_multipath=false
volume_use_multipath=true

Don't forget to restart the Nova container.

Now try to detach the volume from the instance:
(overcloud) [stack@undercloud-0 ~]$ nova volume-detach inst1 0ddd4505-98da-4028-b8c8-aeb86c8eae75
(overcloud) [stack@undercloud-0 ~]$ cinder list
nova list
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| ID                                   | Status    | Name    | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| 0ddd4505-98da-4028-b8c8-aeb86c8eae75 | available | 3parVol | 4    | 3par        | false    |             |
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+

Instance remains up:
(overcloud) [stack@undercloud-0 ~]$ nova list
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                          |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| 762c72b2-8d28-4537-82f6-37d6b0e72331 | inst1 | ACTIVE | -          | Running     | internal=192.168.0.18, 10.0.0.224 |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+

Looking good; let's review the Nova log output:


2021-08-12 10:08:22.721 6 DEBUG oslo_concurrency.lockutils [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Lock "5fef3674-8f0c-445d-8ff3-4c8ddf8125b3" acquired by "nova.virt.block_device.DriverVolumeBlockDevice.detach.<locals>._do_locked_detach" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:327
2021-08-12 10:08:22.722 6 INFO nova.virt.block_device [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] [instance: 762c72b2-8d28-4537-82f6-37d6b0e72331] Attempting to driver detach volume 0ddd4505-98da-4028-b8c8-aeb86c8eae75 from mountpoint /dev/vdb
2021-08-12 10:08:22.729 6 DEBUG nova.virt.libvirt.guest [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Attempting initial detach for device vdb detach_device_with_retry /usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py:453
2021-08-12 10:08:22.729 6 DEBUG nova.virt.libvirt.guest [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] detach device xml: <disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <source dev="/dev/sda"/>
  <target bus="virtio" dev="vdb"/>
  <serial>0ddd4505-98da-4028-b8c8-aeb86c8eae75</serial>
  <address type="pci" domain="0x0000" bus="0x00" slot="0x06" function="0x0"/>
</disk>
 detach_device /usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py:504
2021-08-12 10:08:22.845 6 DEBUG nova.virt.libvirt.guest [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Successfully detached device vdb from guest. Persistent? 1. Live? True _try_detach_device /usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py:412
2021-08-12 10:08:22.846 6 DEBUG nova.virt.libvirt.guest [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Start retrying detach until device vdb is gone. detach_device_with_retry /usr/lib/python3.6/site-packages/nova/virt/libvirt/guest.py:469
2021-08-12 10:08:22.846 6 DEBUG oslo.service.loopingcall [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Waiting for function nova.virt.libvirt.guest.Guest.detach_device_with_retry.<locals>._do_wait_and_retry_detach to return. func /usr/lib/python3.6/site-packages/oslo_service/loopingcall.py:435
2021-08-12 10:08:22.849 6 DEBUG nova.virt.libvirt.volume.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] [instance: 762c72b2-8d28-4537-82f6-37d6b0e72331] calling os-brick to detach iSCSI Volume disconnect_volume /usr/lib/python3.6/site-packages/nova/virt/libvirt/volume/iscsi.py:72
2021-08-12 10:08:22.849 6 DEBUG os_brick.initiator.connectors.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] ==> disconnect_volume: call "{'args': (<os_brick.initiator.connectors.iscsi.ISCSIConnector object at 0x7fa68bf7b940>, {'target_portal': '10.35.146.3:3260', 'target_iqn': 'iqn.2000-05.com.3pardata:20220002ac021f6b', 'target_lun': 0, 'target_discovered': True, 'encrypted': False, 'qos_specs': None, 'access_mode': 'rw', 'device_path': '/dev/sda'}, None), 'kwargs': {}}" trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:146
2021-08-12 10:08:22.850 6 DEBUG oslo_concurrency.lockutils [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Lock "connect_volume" acquired by "os_brick.initiator.connectors.iscsi.ISCSIConnector.disconnect_volume" :: waited 0.000s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:327
2021-08-12 10:08:22.851 6 INFO oslo.privsep.daemon [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/usr/share/nova/nova-dist.conf', '--config-file', '/etc/nova/nova.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpprv_p424/privsep.sock']
2021-08-12 10:08:23.882 6 INFO oslo.privsep.daemon [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Spawned new privsep daemon via rootwrap
2021-08-12 10:08:23.883 6 DEBUG oslo.privsep.daemon [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Accepted privsep connection to /tmp/tmpprv_p424/privsep.sock __init__ /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:371
2021-08-12 10:08:23.752 75 INFO oslo.privsep.daemon [-] privsep daemon starting
2021-08-12 10:08:23.757 75 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2021-08-12 10:08:23.761 75 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
2021-08-12 10:08:23.761 75 INFO oslo.privsep.daemon [-] privsep daemon running as pid 75
2021-08-12 10:08:23.886 75 DEBUG oslo.privsep.daemon [-] privsep: reply[140353196285424]: (2,) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:511
2021-08-12 10:08:24.107 75 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): iscsiadm -m discoverydb -o show -P 1 execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
2021-08-12 10:08:24.116 75 DEBUG oslo_concurrency.processutils [-] CMD "iscsiadm -m discoverydb -o show -P 1" returned: 0 in 0.009s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
2021-08-12 10:08:24.117 75 DEBUG oslo.privsep.daemon [-] privsep: reply[140353196285424]: (4, ('SENDTARGETS:\nNo targets found.\niSNS:\nNo targets found.\nSTATIC:\nTarget: iqn.2000-05.com.3pardata:20220002ac021f6b\n\tPortal: 10.35.146.3:3260,-1\n\t\tIface Name: default\nFIRMWARE:\nNo targets found.\n', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:511
2021-08-12 10:08:24.118 6 DEBUG os_brick.initiator.connectors.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] iscsiadm ['-m', 'discoverydb', '-o', 'show', '-P', 1]: stdout=SENDTARGETS:
No targets found.
iSNS:
No targets found.
STATIC:
Target: iqn.2000-05.com.3pardata:20220002ac021f6b
        Portal: 10.35.146.3:3260,-1
                Iface Name: default
FIRMWARE:
No targets found.
 stderr= _run_iscsiadm_bare /usr/lib/python3.6/site-packages/os_brick/initiator/connectors/iscsi.py:1144
2021-08-12 10:08:24.118 6 DEBUG os_brick.initiator.connectors.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Regex to get portals from discoverydb: ^SENDTARGETS:
.*?^DiscoveryAddress: 10.35.146.3,3260.*?
(.*?)^(?:DiscoveryAddress|iSNS):.* _get_discoverydb_portals /usr/lib/python3.6/site-packages/os_brick/initiator/connectors/iscsi.py:365
2021-08-12 10:08:24.119 6 DEBUG os_brick.initiator.connectors.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Getting connected devices for (ips,iqns,luns)=[('10.35.146.3:3260', 'iqn.2000-05.com.3pardata:20220002ac021f6b', 0)] _get_connection_devices /usr/lib/python3.6/site-packages/os_brick/initiator/connectors/iscsi.py:829
2021-08-12 10:08:24.120 75 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): iscsiadm -m node execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
2021-08-12 10:08:24.128 75 DEBUG oslo_concurrency.processutils [-] CMD "iscsiadm -m node" returned: 0 in 0.007s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
2021-08-12 10:08:24.128 75 DEBUG oslo.privsep.daemon [-] privsep: reply[140353196285424]: (4, ('10.35.146.3:3260,4294967295 iqn.2000-05.com.3pardata:20220002ac021f6b\n', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:511
2021-08-12 10:08:24.129 75 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): iscsiadm -m session execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
2021-08-12 10:08:24.137 75 DEBUG oslo_concurrency.processutils [-] CMD "iscsiadm -m session" returned: 0 in 0.008s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
2021-08-12 10:08:24.138 75 DEBUG oslo.privsep.daemon [-] privsep: reply[140353196285424]: (4, ('tcp: [2] 10.35.146.3:3260,22 iqn.2000-05.com.3pardata:20220002ac021f6b (non-flash)\n', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:511
2021-08-12 10:08:24.138 6 DEBUG os_brick.initiator.connectors.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] iscsiadm ('-m', 'session'): stdout=tcp: [2] 10.35.146.3:3260,22 iqn.2000-05.com.3pardata:20220002ac021f6b (non-flash)
 stderr= _run_iscsiadm_bare /usr/lib/python3.6/site-packages/os_brick/initiator/connectors/iscsi.py:1144
2021-08-12 10:08:24.139 6 DEBUG os_brick.initiator.connectors.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] iscsi session list stdout=tcp: [2] 10.35.146.3:3260,22 iqn.2000-05.com.3pardata:20220002ac021f6b (non-flash)
 stderr= _run_iscsi_session /usr/lib/python3.6/site-packages/os_brick/initiator/connectors/iscsi.py:1133
2021-08-12 10:08:24.140 6 DEBUG os_brick.initiator.connectors.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Resulting device map defaultdict(<function ISCSIConnector._get_connection_devices.<locals>.<lambda> at 0x7fa68446cea0>, {('10.35.146.3:3260', 'iqn.2000-05.com.3pardata:20220002ac021f6b'): ({'sda'}, set())}) _get_connection_devices /usr/lib/python3.6/site-packages/os_brick/initiator/connectors/iscsi.py:861
2021-08-12 10:08:24.141 6 DEBUG os_brick.initiator.linuxscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Removing single pathed devices sda remove_connection /usr/lib/python3.6/site-packages/os_brick/initiator/linuxscsi.py:309
2021-08-12 10:08:24.142 75 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
2021-08-12 10:08:24.154 75 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
2021-08-12 10:08:24.154 75 DEBUG oslo.privsep.daemon [-] privsep: reply[140353196285424]: (4, ('path checker states:\nup                  1\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:511
2021-08-12 10:08:24.156 75 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd del path /dev/sda execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
2021-08-12 10:08:24.165 75 DEBUG oslo_concurrency.processutils [-] CMD "multipathd del path /dev/sda" returned: 0 in 0.009s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
2021-08-12 10:08:24.165 75 DEBUG oslo.privsep.daemon [-] privsep: reply[140353196285424]: (4, ('ok\n', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:511


2021-08-12 10:08:24.166 6 DEBUG os_brick.initiator.linuxscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Flushing IO for device /dev/sda flush_device_io /usr/lib/python3.6/site-packages/os_brick/initiator/linuxscsi.py:367
2021-08-12 10:08:24.167 75 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): blockdev --flushbufs /dev/sda execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
2021-08-12 10:08:24.172 75 DEBUG oslo_concurrency.processutils [-] CMD "blockdev --flushbufs /dev/sda" returned: 0 in 0.005s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
2021-08-12 10:08:24.173 75 DEBUG oslo.privsep.daemon [-] privsep: reply[140353196285424]: (4, ('', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:511


Flushing of device /dev/sda is confirmed above.


2021-08-12 10:08:24.173 6 DEBUG os_brick.initiator.linuxscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Remove SCSI device /dev/sda with /sys/block/sda/device/delete remove_scsi_device /usr/lib/python3.6/site-packages/os_brick/initiator/linuxscsi.py:76
2021-08-12 10:08:24.174 75 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): tee -a /sys/block/sda/device/delete execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
2021-08-12 10:08:24.232 75 DEBUG oslo_concurrency.processutils [-] CMD "tee -a /sys/block/sda/device/delete" returned: 0 in 0.057s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
2021-08-12 10:08:24.232 75 DEBUG oslo.privsep.daemon [-] privsep: reply[140353196285424]: (4, ('1', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:511
2021-08-12 10:08:24.234 6 DEBUG os_brick.initiator.linuxscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Checking to see if SCSI volumes sda have been removed. wait_for_volumes_removal /usr/lib/python3.6/site-packages/os_brick/initiator/linuxscsi.py:84
2021-08-12 10:08:24.235 6 DEBUG os_brick.initiator.linuxscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] SCSI volumes sda have been removed. wait_for_volumes_removal /usr/lib/python3.6/site-packages/os_brick/initiator/linuxscsi.py:93
2021-08-12 10:08:24.236 6 DEBUG os_brick.initiator.connectors.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Disconnecting from: [('10.35.146.3:3260', 'iqn.2000-05.com.3pardata:20220002ac021f6b')] _disconnect_connection /usr/lib/python3.6/site-packages/os_brick/initiator/connectors/iscsi.py:1121
2021-08-12 10:08:24.237 75 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): iscsiadm -m node -T iqn.2000-05.com.3pardata:20220002ac021f6b -p 10.35.146.3:3260 --op update -n node.startup -v manual execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
2021-08-12 10:08:24.250 75 DEBUG oslo_concurrency.processutils [-] CMD "iscsiadm -m node -T iqn.2000-05.com.3pardata:20220002ac021f6b -p 10.35.146.3:3260 --op update -n node.startup -v manual" returned: 0 in 0.013s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
2021-08-12 10:08:24.250 75 DEBUG oslo.privsep.daemon [-] privsep: reply[140353196285424]: (4, ('', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:511
2021-08-12 10:08:24.251 6 DEBUG os_brick.initiator.connectors.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] iscsiadm ('--op', 'update', '-n', 'node.startup', '-v', 'manual'): stdout= stderr= _run_iscsiadm /usr/lib/python3.6/site-packages/os_brick/initiator/connectors/iscsi.py:1017
2021-08-12 10:08:24.253 75 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): iscsiadm -m node -T iqn.2000-05.com.3pardata:20220002ac021f6b -p 10.35.146.3:3260 --logout execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
2021-08-12 10:08:24.298 75 DEBUG oslo_concurrency.processutils [-] CMD "iscsiadm -m node -T iqn.2000-05.com.3pardata:20220002ac021f6b -p 10.35.146.3:3260 --logout" returned: 0 in 0.046s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
2021-08-12 10:08:24.299 75 DEBUG oslo.privsep.daemon [-] privsep: reply[140353196285424]: (4, ('Logging out of session [sid: 2, target: iqn.2000-05.com.3pardata:20220002ac021f6b, portal: 10.35.146.3,3260]\nLogout of [sid: 2, target: iqn.2000-05.com.3pardata:20220002ac021f6b, portal: 10.35.146.3,3260] successful.\n', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:511
2021-08-12 10:08:24.300 6 DEBUG os_brick.initiator.connectors.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] iscsiadm ('--logout',): stdout=Logging out of session [sid: 2, target: iqn.2000-05.com.3pardata:20220002ac021f6b, portal: 10.35.146.3,3260]
Logout of [sid: 2, target: iqn.2000-05.com.3pardata:20220002ac021f6b, portal: 10.35.146.3,3260] successful.
 stderr= _run_iscsiadm /usr/lib/python3.6/site-packages/os_brick/initiator/connectors/iscsi.py:1017
2021-08-12 10:08:24.300 75 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): iscsiadm -m node -T iqn.2000-05.com.3pardata:20220002ac021f6b -p 10.35.146.3:3260 --op delete execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372
2021-08-12 10:08:24.309 75 DEBUG oslo_concurrency.processutils [-] CMD "iscsiadm -m node -T iqn.2000-05.com.3pardata:20220002ac021f6b -p 10.35.146.3:3260 --op delete" returned: 0 in 0.008s execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:409
2021-08-12 10:08:24.309 75 DEBUG oslo.privsep.daemon [-] privsep: reply[140353196285424]: (4, ('', '')) _call_back /usr/lib/python3.6/site-packages/oslo_privsep/daemon.py:511
2021-08-12 10:08:24.310 6 DEBUG os_brick.initiator.connectors.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] iscsiadm ('--op', 'delete'): stdout= stderr= _run_iscsiadm /usr/lib/python3.6/site-packages/os_brick/initiator/connectors/iscsi.py:1017
2021-08-12 10:08:24.310 6 DEBUG oslo_concurrency.lockutils [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Lock "connect_volume" released by "os_brick.initiator.connectors.iscsi.ISCSIConnector.disconnect_volume" :: held 1.460s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:339
2021-08-12 10:08:24.311 6 DEBUG os_brick.initiator.connectors.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] <== disconnect_volume: return (1460ms) None trace_logging_wrapper /usr/lib/python3.6/site-packages/os_brick/utils.py:170
2021-08-12 10:08:24.311 6 DEBUG nova.virt.libvirt.volume.iscsi [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] [instance: 762c72b2-8d28-4537-82f6-37d6b0e72331] Disconnected iSCSI Volume disconnect_volume /usr/lib/python3.6/site-packages/nova/virt/libvirt/volume/iscsi.py:78
2021-08-12 10:08:25.535 6 DEBUG oslo_concurrency.lockutils [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Lock "5fef3674-8f0c-445d-8ff3-4c8ddf8125b3" released by "nova.virt.block_device.DriverVolumeBlockDevice.detach.<locals>._do_locked_detach" :: held 2.814s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:339
2021-08-12 10:08:25.536 6 DEBUG nova.objects.instance [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Lazy-loading 'flavor' on Instance uuid 762c72b2-8d28-4537-82f6-37d6b0e72331 obj_load_attr /usr/lib/python3.6/site-packages/nova/objects/instance.py:1091
2021-08-12 10:08:25.614 6 DEBUG oslo_concurrency.lockutils [req-b559d292-4b23-4ca1-9102-62acbd13dde0 f69591d38a374ea1a8f9032ff08c637a ca57bc259e7c431ba1d2acefe760cbc8 - default default] Lock "762c72b2-8d28-4537-82f6-37d6b0e72331" released by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 4.073s inner /usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py:339
2021-08-12 10:08:57.822 6 DEBUG oslo_service.periodic_task [req-a12f933e-5a47-4faa-880e-50434ac6e9ae - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-08-12 10:08:57.849 6 DEBUG oslo_service.periodic_task [req-a12f933e-5a47-4faa-880e-50434ac6e9ae - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.6/site-packages/oslo_service/periodic_task.py:217
2021-08-12 10:08:57.870 6 DEBUG nova.compute.resource_tracker [req-a12f933e-5a47-4faa-880e-50434ac6e9ae - - - - -] Auditing locally available compute resources for compute-1.redhat.local (node: compute-1.redhat.local) update_available_resource /usr/lib/python3.6/site-packages/nova/compute/resource_tracker.py:910
2021-08-12 10:08:57.933 6 DEBUG oslo_concurrency.processutils [req-a12f933e-5a47-4faa-880e-50434ac6e9ae - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/762c72b2-8d28-4537-82f6-37d6b0e72331/disk --force-share execute /usr/lib/python3.6/site-packages/oslo_concurrency/processutils.py:372



Thus, despite switching nova.conf from non-multipath to multipath,
the volume was still successfully flushed and disconnected. Good to verify.

While we're here, let's also try the other direction:
change a multipathed deployment back to a non-multipathed one and detach the volume again.

First, let's reattach the volume to the instance:
(overcloud) [stack@undercloud-0 ~]$ nova volume-attach inst1 0ddd4505-98da-4028-b8c8-aeb86c8eae75                                                                                                                                           
+-----------------------+--------------------------------------+                                                                                                                                                                             
| Property              | Value                                |                                                                                                                                                                             
+-----------------------+--------------------------------------+                                                                                                                                                                             
| delete_on_termination | False                                |
| device                | /dev/vdb                             |
| id                    | 0ddd4505-98da-4028-b8c8-aeb86c8eae75 |
| serverId              | 762c72b2-8d28-4537-82f6-37d6b0e72331 |
| tag                   | -                                    |
| volumeId              | 0ddd4505-98da-4028-b8c8-aeb86c8eae75 |
+-----------------------+--------------------------------------+
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| ID                                   | Status    | Name    | Size | Volume Type | Bootable | Attached to |                                                                                                                                 
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+                                                                                                                                 
| 0ddd4505-98da-4028-b8c8-aeb86c8eae75 | attaching | 3parVol | 4    | 3par        | false    |             |                                                                                                                                 
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+                                                                                                                                 
(overcloud) [stack@undercloud-0 ~]$ cinder list                                                                                                                                                                                              
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+                                                                                                           
| ID                                   | Status | Name    | Size | Volume Type | Bootable | Attached to                          |                                                                                                           
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+                                                                                                           
| 0ddd4505-98da-4028-b8c8-aeb86c8eae75 | in-use | 3parVol | 4    | 3par        | false    | 762c72b2-8d28-4537-82f6-37d6b0e72331 |                                                                                                           
+--------------------------------------+--------+---------+------+-------------+----------+--------------------------------------+  

On the Nova compute node we can see the multipath state:
[root@compute-1 ~]# multipath -ll
360002ac000000000000017dc00021f6b dm-0 3PARdata,VV
size=4.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  |- 2:0:0:0 sda 8:0  active ready running
  |- 3:0:0:0 sdb 8:16 active ready running
  |- 4:0:0:0 sdc 8:32 active ready running
  `- 5:0:0:0 sdd 8:48 active ready running


Revert back to non-multipath:
[root@compute-1 ~]# grep volume_use /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf 
volume_use_multipath=false

And detach the volume from the instance:
(overcloud) [stack@undercloud-0 ~]$ nova volume-detach inst1 0ddd4505-98da-4028-b8c8-aeb86c8eae75                                                                                                                                          
(overcloud) [stack@undercloud-0 ~]$ nova list
cinder list
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| ID                                   | Name  | Status | Task State | Power State | Networks                          |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
| 762c72b2-8d28-4537-82f6-37d6b0e72331 | inst1 | ACTIVE | -          | Running     | internal=192.168.0.18, 10.0.0.224 |
+--------------------------------------+-------+--------+------------+-------------+-----------------------------------+
(overcloud) [stack@undercloud-0 ~]$ cinder list
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+
| ID                                   | Status    | Name    | Size | Volume Type | Bootable | Attached to |                                                                                                                                 
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+                                                                                                                                 
| 0ddd4505-98da-4028-b8c8-aeb86c8eae75 | available | 3parVol | 4    | 3par        | false    |             |                                                                                                                                 
+--------------------------------------+-----------+---------+------+-------------+----------+-------------+    

Again the volume was successfully detached,
this time while moving from a multipathed to a non-multipathed deployment.
Good to verify.

Comment 23 errata-xmlrpc 2021-12-09 20:18:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 16.1.7 (Train) bug fix and enhancement advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3762

