Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1847015

Summary: after live-migration server show reports instance on one compute but it is running on another compute
Product: Red Hat OpenStack
Reporter: Eduard Barrera <ebarrera>
Component: openstack-nova
Assignee: Stephen Finucane <stephenfin>
Status: CLOSED DUPLICATE
QA Contact: OSP DFG:Compute <osp-dfg-compute>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 10.0 (Newton)
CC: dasmith, eglynn, jhakimra, kchamart, sbauza, sgordon, vromanso
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-06-26 10:57:08 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Eduard Barrera 2020-06-15 12:55:47 UTC
Description of problem:

After performing a live migration, "openstack server show" reports the instance on one compute node while it is actually running on another:

 openstack server show 8bd25f74-5a35-49fc-ade7-bac132f8658a
+--------------------------------------+----------------------------------------------------------------------------------------+
| Field                                | Value                                                                                  |
+--------------------------------------+----------------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                                 |
| OS-EXT-AZ:availability_zone          | linux                                                                                  |
| OS-EXT-SRV-ATTR:host                 | XXX-cd01-XX-06                                |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | XXX-cd01-XX-06.      <=======


But the sosreport from that compute node shows no instances running on it:

$ cat hostname 
XX-cd01-XX-06.dccs
$ grep kvm ps
root        1328  0.0  0.0      0     0 ?        S<   May13   0:00 [kvm-irqfd-clean]
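The cross-check above can be sketched as a short shell snippet. This is a hedged sketch, not part of the original report: it reuses the instance UUID from this bug, guards against hosts where the OpenStack CLI is absent, and assumes admin credentials are loaded where the CLI runs.

```shell
# Hedged sketch: compare where Nova thinks the instance runs with where a
# qemu-kvm process for it actually exists. UUID taken from this report.
UUID=8bd25f74-5a35-49fc-ade7-bac132f8658a

# What Nova reports (run where the CLI and admin credentials are available):
if command -v openstack >/dev/null 2>&1; then
  openstack server show "$UUID" -f value -c OS-EXT-SRV-ATTR:host
else
  echo "openstack CLI not available on this host"
fi

# What is actually running (run on each candidate compute node); the
# bracketed first letter keeps grep from matching its own process:
ps -ef | grep "[q]emu-kvm" | grep "$UUID" \
  || echo "no qemu-kvm process for $UUID on $(hostname)"
```

If the two answers differ, as in this bug, Nova's record is stale.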


Currently the instance is located on

XXX-cd01-XXX-08

qemu      122121       1  3 May13 ?        22:49:14 /usr/libexec/qemu-kvm -name guest=instance-00001554,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-72-instance-00001554/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,+rtm,+hle -m 8192 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid 8bd25f74-5a35-49fc-ade7-bac132f8658a -smbios type=1,manufacturer=Red Hat,product=OpenStack Compute,version=14.0.8-5.el7ost,serial=3060238c-0351-481e-b155-ffe86ef383fc,uuid=8bd25f74-5a35-49fc-ade7-bac132f8658a,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-72-instance-00001554/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/dev/disk/by-id/dm-uuid-mpath-36000d31001461c0000000000000009c4,format=raw,if=none,id=drive-virtio-disk0,serial=e2434ca1-df19-478a-ac72-4009193e15b4,cache=none,discard=unmap,aio=native,throttling.iops-total=4000 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=145,id=hostnet0,vhost=on,vhostfd=147 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:32:XXX,bus=pci.0,addr=0x3 -netdev tap,fd=148,id=hostnet1,vhost=on,vhostfd=149 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=fa:16:3e:4f:XXX,bus=pci.0,addr=0x4 -add-fd set=4,fd=151 -chardev file,id=charserial0,path=/dev/fdset/4,append=on -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/org.qemu.guest_agent.0.instance-00001554.sock,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 172.16.3.21:59 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on


The "openstack server show" output also contains the following error:

2018-12-14T22:15:38Z                                                                                                                                                                            |
| fault                                | {u'message': u"Unexpected error while running command.\nCommand: blockdev --flushbufs /dev/sde\nExit code: 1\nStdout: u''\nStderr: u'blockdev: cannot open /dev/sde: No such file or            |
|                                      | directory\\n'", u'code': 500, u'details': u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 204, in decorated_function\n    return function(self, context, *args,       |
|                                      | **kwargs)\n  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5455, in _post_live_migration\n    migrate_data)\n  File "/usr/lib/python2.7/site-                           |
|                                      | packages/nova/virt/libvirt/driver.py", line 6864, in post_live_migration\n    self._disconnect_volume(connection_info, disk_dev, instance)\n  File "/usr/lib/python2.7/site-                    |
|                                      | packages/nova/virt/libvirt/driver.py", line 1116, in _disconnect_volume\n    vol_driver.disconnect_volume(connection_info, disk_dev, instance)\n  File "/usr/lib/python2.7/site-                |
|                                      | packages/nova/virt/libvirt/volume/fibrechannel.py", line 71, in disconnect_volume\n    connection_info[\'data\'])\n  File "/usr/lib/python2.7/site-packages/os_brick/utils.py", line 137, in    |
|                                      | trace_logging_wrapper\n    return f(*args, **kwargs)\n  File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 271, in inner\n    return f(*args, **kwargs)\n  File        |
|                                      | "/usr/lib/python2.7/site-packages/os_brick/initiator/connectors/fibre_channel.py", line 280, in disconnect_volume\n    self._remove_devices(connection_properties, devices)\n  File             |
|                                      | "/usr/lib/python2.7/site-packages/os_brick/initiator/connectors/fibre_channel.py", line 287, in _remove_devices\n    self._linuxscsi.remove_scsi_device(device["device"])\n  File               |
|                                      | "/usr/lib/python2.7/site-packages/os_brick/initiator/linuxscsi.py", line 73, in remove_scsi_device\n    self.flush_device_io(device)\n  File "/usr/lib/python2.7/site-                          |
|                                      | packages/os_brick/initiator/linuxscsi.py", line 265, in flush_device_io\n    interval=10, root_helper=self._root_helper)\n  File "/usr/lib/python2.7/site-packages/os_brick/executor.py", line  |
|                                      | 52, in _execute\n    result = self.__execute(*args, **kwargs)\n  File "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line 169, in execute\n    return execute_root(*cmd,   |
|                                      | **kwargs)\n  File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 204, in _wrap\n    return self.channel.remote_call(name, args, kwargs)\n  File "/usr/lib/python2.7      |
|                                      | /site-packages/oslo_privsep/daemon.py", line 187, in remote_call\n    raise exc_type(*result[2])\n', u'created': u'2020-05-13T23:01:15Z'}                                                       |
| flavor                               | gp.large (f8ece25f-91f2-4848-9f38
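The traceback above shows os-brick running "blockdev --flushbufs /dev/sde" during post-live-migration volume disconnect, on a device node that had already been removed. A minimal sketch of that failing step follows; it is illustrative only, with the device name taken from the fault message and a guard so it is safe on a node where the device is absent.

```shell
# Hedged sketch of the step that fails in the traceback: os-brick flushes a
# SCSI device's buffers before removing it, but here the FC device node was
# already gone, so blockdev exits non-zero.
DEV=/dev/sde   # device name taken from the fault message above

if [ -b "$DEV" ]; then
  blockdev --flushbufs "$DEV"   # the command from the fault (needs root)
else
  # Mirrors the error Nova recorded in the instance fault:
  echo "blockdev: cannot open $DEV: No such file or directory"
fi
```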




Version-Release number of selected component (if applicable):
OSP10

How reproducible:
unsure

Steps to Reproduce:
1. Perform a live migration.
2. Observe the error above in the "openstack server show" output.

Actual results:
Errors starting the instance.

Expected results:
The instance starts successfully.

Comment 2 Stephen Finucane 2020-06-26 10:57:08 UTC
Apologies for the delayed response. This issue has already been resolved in OSP13z11; see bug 1767928. It is not possible to resolve this in OSP10, since it is in ELS; however, that should not be an issue since this customer is migrating.

*** This bug has been marked as a duplicate of bug 1767928 ***