Bug 1738830 - [OSP 16] disconnect_volume errors during post_live_migration result in the overall failure of the migration despite the instance running on the destination.
Summary: [OSP 16] disconnect_volume errors during post_live_migration result in the overall failure of the migration despite the instance running on the destination.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 16.0 (Train on RHEL 8.1)
Assignee: Lee Yarwood
QA Contact: OSP DFG:Compute
URL:
Whiteboard:
Depends On:
Blocks: 1767917 1767925 1767928
 
Reported: 2019-08-08 09:01 UTC by Eduard Barrera
Modified: 2023-03-24 15:11 UTC
CC List: 13 users

Fixed In Version: openstack-nova-20.0.1-0.20191031221638.6aa7d00.el8ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1767917
Environment:
Last Closed: 2020-02-06 14:41:56 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1843639 0 None None None 2019-09-12 12:13:54 UTC
OpenStack gerrit 682621 0 'None' ABANDONED Add regression test for bug #1843639 2021-02-02 06:31:34 UTC
OpenStack gerrit 682622 0 'None' MERGED libvirt: Ignore volume exceptions during post_live_migration 2021-02-02 06:31:34 UTC
Red Hat Issue Tracker OSP-23469 0 None None None 2023-03-21 19:23:27 UTC
Red Hat Product Errata RHEA-2020:0283 0 None None None 2020-02-06 14:42:30 UTC

Description Eduard Barrera 2019-08-08 09:01:37 UTC
Description of problem:


In some cases, live migration of a single instance fails with the following error:

Error: Failed to perform requested operation on instance "hdp-24.cl1.intern", the instance has an error status: Please try again later [Error: Unexpected error while running command. Command: multipath -f 3600a098038304769735d4e3735632f4c Exit code: 1 Stdout: u'Aug 06 13:28:40 | /etc/multipath.conf does not exist, blacklisting all devices.\nAug 06 13:28:40 | A default multipath.conf file is loca].

Command for live migration:
(overcloud) [stack@osdirector]$ openstack server migrate --live overcloud-compute-12.cloud. 3c607542-b913-4903-bc8c-ed226c6f4d7c

After this, the instance status is "Error". The instance is actually running on overcloud-compute-12.cloud., but the API still shows the old host:

(overcloud) [stack@osdirector ~]$ openstack server list --all-projects --name hdp-24.cl1.intern --long -c ID -c Name -c Status -c Host
+--------------------------------------+---------------------------------+--------+---------------------------------------+
| ID                                   | Name                            | Status | Host                                  |
+--------------------------------------+---------------------------------+--------+---------------------------------------+
| 3c607542-b913-4903-bc8c-ed226c6f4d7c | hdp-24.cl1.XXXXXX        | ERROR  | overcloud-compute-13.cXXXXXXXXXXXX            |
+--------------------------------------+---------------------------------+--------+---------------------------------------+

The instance is running on overcloud-compute-12.cloud., not on overcloud-compute-13 as registered in the database:

[heat-admin@overcloud-compute-12 ~]$ ps aux |grep 3c607542-b913-4903-bc8c-ed226c6f4d7c

qemu       39359 30.7 15.0 67855460 59452272 ?   Sl   13:27  49:26 /usr/libexec/qemu-kvm -name guest=instance-00000a3f,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-instance-00000a3f/master-key.aes -machine pc-i440fx-rhel7.6.0,accel=kvm,usb=off,dump-guest-core=off -cpu Skylake-Server-IBRS,ss=on,hypervisor=on,tsc_adjust=on,clflushopt=on,pku=on,stibp=on,ssbd=on -m 65536 -realtime mlock=off -smp 8,sockets=8,cores=1,threads=1 -uuid 3c607542-b913-4903-bc8c-ed226c6f4d7c -smbios type=1,manufacturer=Red Hat,product=OpenStack Compute,version=17.0.9-9.el7ost,serial=d883797a-d4d2-e811-1000-00000000003f,uuid=3c607542-b913-4903-bc8c-ed226c6f4d7c,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=80,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/3c607542-b913-4903-bc8c-ed226c6f4d7c/disk,format=qcow2,....


This is a really dangerous situation: if you try to restart the instance on the old host,
then two instances end up running at the same time, both accessing the same block device.




Version-Release number of selected component (if applicable):
OSP13

How reproducible:
Unsure

Steps to Reproduce:
1. Perform a live migration 
2.
3.

Actual results:
Instance is running on the destination, but the DB still points to the origin host. Restarting the instance will cause it to run in two places, with two instances accessing the same block device.

Expected results:
Instance runs on the destination host.

Additional info:

Comment 3 Alan Bishop 2019-08-08 20:05:50 UTC
The timeline looks like this:

1. The instance was successfully migrated from compute-12 to compute-13 around
   2019-08-06 11:52. Here's the compute-12 log entry:

2019-08-06 11:53:34.496 1 INFO nova.compute.manager [req-43db7a44-ddaa-4ef7-b9d9-09549d21a74a ba51bfd923a0480188bf8ea1d22f4643 69875d3dd06b4d9084d0811758e0efcc - default default] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] Migrating instance to overcloud-compute-13.XXX finished successfully.

2. compute-12 was rebooted

2019-08-06 12:59:52.433 1 INFO nova.service [-] Starting compute node (version 17.0.10-1.el7ost)

3. Migration from compute-13 back to compute-12 was initiated around 13:20:24,
   but failed due to a messaging timeout in pre_live_migration:

2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [req-7d55a61f-2eb9-47a1-b379-d1f191def162 ba51bfd923a0480188bf8ea1d22f4643 69875d3dd06b4d9084d0811758e0efcc - default default] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] Pre live migration failed at overcloud-compute-12.XXX: MessagingTimeout: Timed out waiting for a reply to message ID 66a8bb747e8f432ab4405bdee64bd4ca
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] Traceback (most recent call last):
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6152, in _do_live_migration
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]     block_migration, disk, dest, migrate_data)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]   File "/usr/lib/python2.7/site-packages/nova/compute/rpcapi.py", line 798, in pre_live_migration
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]     disk=disk, migrate_data=migrate_data)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 174, in call
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]     retry=self.retry)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]   File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 131, in _send
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]     timeout=timeout, retry=retry)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 559, in send
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]     retry=retry)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 548, in _send
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]     result = self._waiter.wait(msg_id, timeout)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 440, in wait
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]     message = self.waiters.get(msg_id, timeout=timeout)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]   File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 328, in get
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c]     'to message ID %s' % msg_id)
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] MessagingTimeout: Timed out waiting for a reply to message ID 66a8bb747e8f432ab4405bdee64bd4ca
2019-08-06 13:20:24.316 1 ERROR nova.compute.manager [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] 

compute-13 performs a rollback (I don't know the details of what this entails)

4. At 13:27:22 the instance is started on compute-12:

2019-08-06 13:27:22.285 1 INFO nova.compute.manager [req-f8ffa3be-9569-4fae-abaa-139bf75634f6 - - - - -] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] VM Started (Lifecycle Event)
2019-08-06 13:28:14.056 1 WARNING nova.compute.resource_tracker [req-3b9242ab-ad73-454d-b72f-dc67ac80a3f7 - - - - -] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] Instance not resizing, skipping migration.
2019-08-06 13:28:14.125 1 WARNING nova.compute.resource_tracker [req-3b9242ab-ad73-454d-b72f-dc67ac80a3f7 - - - - -] Instance 3c607542-b913-4903-bc8c-ed226c6f4d7c has been moved to another host overcloud-compute-13.XXX(overcloud-compute-13.XXX). There are allocations remaining against the source host that might need to be removed: {u'resources': {u'VCPU': 8, u'MEMORY_MB': 65536, u'DISK_GB': 100}}.

5. And stopped on compute-13

2019-08-06 13:28:19.268 1 INFO nova.compute.manager [req-924f52cc-6753-4c47-baee-b89250569168 - - - - -] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] VM Paused (Lifecycle Event)
2019-08-06 13:28:19.356 1 INFO nova.compute.manager [req-924f52cc-6753-4c47-baee-b89250569168 - - - - -] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] During sync_power_state the instance has a pending task (migrating). Skip.
2019-08-06 13:28:20.120 1 INFO nova.virt.libvirt.driver [req-271dfc66-5ecf-480b-a10c-df616649d691 ba51bfd923a0480188bf8ea1d22f4643 69875d3dd06b4d9084d0811758e0efcc - default default] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] Migration operation has completed

6. compute-13 fails during post_live_migration

2019-08-06 13:28:20.121 1 INFO nova.compute.manager [req-271dfc66-5ecf-480b-a10c-df616649d691 ba51bfd923a0480188bf8ea1d22f4643 69875d3dd06b4d9084d0811758e0efcc - default default] [instance: 3c607542-b913-4903-bc8c-ed226c6f4d7c] _post_live_migration() is started..

This quickly leads to the error shown in comment #1. The post-migration code
attempts to disconnect the volume, but this fails when os-brick's
"multipath -f 3600a098038304769735d4e3735632f4c" command fails:

Aug 06 13:28:40 | /etc/multipath.conf does not exist, blacklisting all devices.
Aug 06 13:28:40 | A default multipath.conf file is located at
Aug 06 13:28:40 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf
Aug 06 13:28:40 | You can run /sbin/mpathconf --enable to create
Aug 06 13:28:40 | /etc/multipath.conf. See man mpathconf(8) for more details
Aug 06 13:28:40 | 3600a098038304769735d4e3735632f4c: map in use
Aug 06 13:28:40 | failed to remove multipath map 3600a098038304769735d4e3735632f4c

The warnings about the missing /etc/multipath.conf file are a side effect of
the services running in a container, but they are not the fatal part of the
problem. The fatal part is this:

Aug 06 13:28:40 | 3600a098038304769735d4e3735632f4c: map in use
Aug 06 13:28:40 | failed to remove multipath map 3600a098038304769735d4e3735632f4c

This is likely due to the cinder-volume service previously terminating the
compute-13 connection to the volume. We'd need to see the cinder logs to trace
that activity but, regardless, the request to terminate the connection
originates from nova.

I don't see any issues with cinder or os-brick, so I'd like DFG:Compute to
take a look.
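
For context, the "map in use" failure corresponds to a non-zero device-mapper open count on the multipath map, i.e. something on the host still holds the device open when os-brick tries to flush it. A minimal illustrative helper (not part of nova or os-brick) that reports that open count, assuming dmsetup is available and run as root:

#!/usr/bin/env python3
# Illustrative helper only: report the device-mapper open count for a
# multipath map. A non-zero count is what makes "multipath -f <map>" fail
# with "map in use" during post_live_migration cleanup.
import subprocess
import sys

def open_count(map_name):
    out = subprocess.run(
        ["dmsetup", "info", map_name],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith("Open count:"):
            return int(line.split(":", 1)[1])
    raise RuntimeError("Open count not reported for %s" % map_name)

if __name__ == "__main__":
    name = sys.argv[1]  # e.g. 3600a098038304769735d4e3735632f4c
    print("%s: open count = %d" % (name, open_count(name)))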

Comment 5 Alan Bishop 2019-08-12 19:59:33 UTC
The cinder logs don't reveal anything immediately noteworthy (no errors). Debug logs were not enabled at the time the migration failed, so it may not be possible to know exactly what happened. The only thing I've seen so far is the nova RPC timeout in comment #3, which is why I asked nova folks to take a look.

Comment 15 Luigi Tamagnone 2019-09-25 13:41:19 UTC
Hi,
There is another case with a similar error. After the migration:

openstack server migrate 166ec0bc-def7-48e7-a05c-7fb9589b09bd --live cmpcXX

Migrated VM id: 166ec0bc-def7-48e7-a05c-7fb9589b09bd, req-id: req-9ef5484b-3b47-4812-9aed-40f0e0aba00e

nova-compute.log on compute node: 

2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [req-35714866-66d7-4acd-9d3e-765f5709aee5 - - - - -] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] An error occurred while refreshing the network cache.: ConnectTimeout: Request to http://10.30.186.9:9696/v2.0/networks?id=89c954fd-8358-4d53-8f24-015610c0f6c8 timed out
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Traceback (most recent call last):
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6798, in _heal_instance_info_cache
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]     self.network_api.get_instance_nw_info(context, instance)
[...]
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]     resp = send(**kwargs)
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]   File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 763, in _send_request
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]     raise exceptions.ConnectTimeout(msg)
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] ConnectTimeout: Request to http://10.30.186.9:9696/v2.0/networks?id=89c954fd-8358-4d53-8f24-015610c0f6c8 timed out
2019-09-23 09:59:02.857 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]
2019-09-23 12:45:12.631 1 WARNING nova.compute.manager [req-a0ecad42-2d88-4cb0-9e3e-4d823a835982 555af911f2714f4c9e551421d3d7bbf4 7cc52c5f2cd5407498b3d90eeb25d968 - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Received unexpected event network-vif-unplugged-8bdf4230-6623-42e9-9ed7-0f4fa2092e78 for instance with vm_state active and task_state migrating.
2019-09-23 12:45:12.896 1 WARNING nova.compute.manager [req-ac2ba14f-48c0-4be7-8ef0-c49a059e325d 555af911f2714f4c9e551421d3d7bbf4 7cc52c5f2cd5407498b3d90eeb25d968 - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Received unexpected event network-vif-plugged-8bdf4230-6623-42e9-9ed7-0f4fa2092e78 for instance with vm_state active and task_state migrating.
2019-09-23 12:45:15.348 1 WARNING nova.compute.manager [req-3929fd35-484a-4b5f-8d57-3cfe6acab806 555af911f2714f4c9e551421d3d7bbf4 7cc52c5f2cd5407498b3d90eeb25d968 - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Received unexpected event network-vif-plugged-8bdf4230-6623-42e9-9ed7-0f4fa2092e78 for instance with vm_state active and task_state migrating.
2019-09-23 12:45:16.739 1 INFO nova.virt.libvirt.migration [req-9ef5484b-3b47-4812-9aed-40f0e0aba00e 9c56624fcfe04036a55eb79bcb699313 21cbcbde105e4d25bbf8b87763def29a - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Increasing downtime to 50 ms after 0 sec elapsed time
2019-09-23 12:45:16.870 1 INFO nova.virt.libvirt.driver [req-9ef5484b-3b47-4812-9aed-40f0e0aba00e 9c56624fcfe04036a55eb79bcb699313 21cbcbde105e4d25bbf8b87763def29a - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Migration running for 0 secs, memory 100% remaining; (bytes processed=0, remaining=0, total=0)
2019-09-23 12:45:17.255 1 WARNING nova.compute.manager [req-882e45dc-6bd8-4466-80d2-ef8d6ee9253c 555af911f2714f4c9e551421d3d7bbf4 7cc52c5f2cd5407498b3d90eeb25d968 - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Received unexpected event network-vif-plugged-8bdf4230-6623-42e9-9ed7-0f4fa2092e78 for instance with vm_state active and task_state migrating.
2019-09-23 12:45:18.776 1 WARNING nova.compute.manager [req-6e806121-3573-4413-859d-40e5f8d5e34d 555af911f2714f4c9e551421d3d7bbf4 7cc52c5f2cd5407498b3d90eeb25d968 - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Received unexpected event network-vif-plugged-8bdf4230-6623-42e9-9ed7-0f4fa2092e78 for instance with vm_state active and task_state migrating.
2019-09-23 12:45:25.951 1 WARNING nova.compute.resource_tracker [req-35714866-66d7-4acd-9d3e-765f5709aee5 - - - - -] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Instance not resizing, skipping migration.
2019-09-23 12:45:35.002 1 INFO nova.compute.manager [req-a9ce38a5-4250-49cc-84cf-a67606c853e7 - - - - -] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] VM Paused (Lifecycle Event)
2019-09-23 12:45:35.083 1 INFO nova.compute.manager [req-a9ce38a5-4250-49cc-84cf-a67606c853e7 - - - - -] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] During sync_power_state the instance has a pending task (migrating). Skip.
2019-09-23 12:45:35.511 1 INFO nova.virt.libvirt.driver [req-9ef5484b-3b47-4812-9aed-40f0e0aba00e 9c56624fcfe04036a55eb79bcb699313 21cbcbde105e4d25bbf8b87763def29a - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Migration operation has completed
2019-09-23 12:45:35.513 1 INFO nova.compute.manager [req-9ef5484b-3b47-4812-9aed-40f0e0aba00e 9c56624fcfe04036a55eb79bcb699313 21cbcbde105e4d25bbf8b87763def29a - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] _post_live_migration() is started..
2019-09-23 12:45:50.511 1 INFO nova.compute.manager [-] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] VM Stopped (Lifecycle Event)
2019-09-23 12:46:28.402 1 WARNING nova.compute.resource_tracker [req-35714866-66d7-4acd-9d3e-765f5709aee5 - - - - -] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Instance not resizing, skipping migration.
2019-09-23 12:46:38.263 1 WARNING nova.virt.libvirt.driver [req-9ef5484b-3b47-4812-9aed-40f0e0aba00e 9c56624fcfe04036a55eb79bcb699313 21cbcbde105e4d25bbf8b87763def29a - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Error monitoring migration: Unexpected error while running command.
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Traceback (most recent call last):
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 7420, in _live_migration
[...]
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] ProcessExecutionError: Unexpected error while running command.
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Command: multipath -f /dev/disk/by-id/dm-uuid-mpath-360060e8012a705005040a70500000366
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Exit code: 1
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Stdout: u'Sep 23 12:45:58 | /etc/multipath.conf does not exist, blacklisting all devices.\nSep 23 12:45:58 | A default multipath.conf file is located at\nSep 23 12:45:58 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf\nSep 23 12:45:58 | You can run /sbin/mpathconf --enable to create\nSep 23 12:45:58 | /etc/multipath.conf. See man mpathconf(8) for more details\nSep 23 12:45:58 | 360060e8012a705005040a70500000366p1: map in use\nSep 23 12:45:58 | failed to remove multipath map /dev/disk/by-id/dm-uuid-mpath-360060e8012a705005040a70500000366\n'
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Stderr: u''
2019-09-23 12:46:38.263 1 ERROR nova.virt.libvirt.driver [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [req-9ef5484b-3b47-4812-9aed-40f0e0aba00e 9c56624fcfe04036a55eb79bcb699313 21cbcbde105e4d25bbf8b87763def29a - default default] [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Live migration failed.: ProcessExecutionError: Unexpected error while running command.
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Traceback (most recent call last):
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6192, in _do_live_migration
[...]
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] ProcessExecutionError: Unexpected error while running command.
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Command: multipath -f /dev/disk/by-id/dm-uuid-mpath-360060e8012a705005040a70500000366
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Exit code: 1
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Stdout: u'Sep 23 12:45:58 | /etc/multipath.conf does not exist, blacklisting all devices.\nSep 23 12:45:58 | A default multipath.conf file is located at\nSep 23 12:45:58 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf\nSep 23 12:45:58 | You can run /sbin/mpathconf --enable to create\nSep 23 12:45:58 | /etc/multipath.conf. See man mpathconf(8) for more details\nSep 23 12:45:58 | 360060e8012a705005040a70500000366p1: map in use\nSep 23 12:45:58 | failed to remove multipath map /dev/disk/by-id/dm-uuid-mpath-360060e8012a705005040a70500000366\n'
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd] Stderr: u''
2019-09-23 12:46:38.264 1 ERROR nova.compute.manager [instance: 166ec0bc-def7-48e7-a05c-7fb9589b09bd]

Comment 26 errata-xmlrpc 2020-02-06 14:41:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0283
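
For reference, the merged upstream change linked above, "libvirt: Ignore volume exceptions during post_live_migration" (gerrit 682622), points at the approach taken: once the guest is running on the destination, a failure while disconnecting a source-side volume is logged rather than allowed to fail the whole migration. A rough sketch of that pattern; names and structure are illustrative, not the exact nova code:

import logging

LOG = logging.getLogger(__name__)

def disconnect_source_volumes(disconnect_volume, connection_infos, instance_uuid):
    # disconnect_volume is whatever callable tears down the host connection
    # (in nova this ends up in os-brick, which runs commands such as
    # "multipath -f <map>").
    for connection_info in connection_infos:
        try:
            disconnect_volume(connection_info)
        except Exception:
            # The guest is already running on the destination, so a cleanup
            # failure on the source (e.g. "map in use") is logged instead of
            # failing the whole migration.
            LOG.exception("Ignoring volume disconnect error during "
                          "post_live_migration for instance %s", instance_uuid)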

