Bug 1850846 - VM on wrong hypervisor after migration
Summary: VM on wrong hypervisor after migration
Keywords:
Status: CLOSED DUPLICATE of bug 1767928
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 13.0 (Queens)
Hardware: All
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Lee Yarwood
QA Contact: OSP DFG:Compute
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-25 03:35 UTC by Brendan Shephard
Modified: 2023-10-06 20:49 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-16 12:26:14 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Issue Tracker OSP-13506 (last updated 2022-03-11 03:29:28 UTC)

Description Brendan Shephard 2020-06-25 03:35:00 UTC
Description of problem:
After migrating a VM, nova reports that the VM is still on the original hypervisor rather than on the new one where it is actually located. As a result, we can't perform any operations on the VM using the openstack / nova CLI.

Version-Release number of selected component (if applicable):
RHOSP13

puppet-nova-12.4.0-10.el7ost.noarch                         Wed Nov  7 13:07:51 2018
python2-novaclient-10.1.0-1.el7ost.noarch                   Wed Nov  7 13:03:24 2018
python-nova-17.0.7-2.el7ost.noarch                          Wed Nov  7 13:04:03 2018



How reproducible:
Difficult

Steps to Reproduce:
1. Migrate a VM; the migration fails the first time for various reasons
2. Retry the migration; it reports an error, but the VM is actually moved
3. Verify with virsh that the VM is on the new hypervisor; nova still reports it on the original (see the sketch below)
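
For reference, this is roughly how we compared the two views (the instance UUID below is a placeholder; the OS-EXT-SRV-ATTR fields require admin credentials):

  # What nova believes (run with admin credentials on a node with the openstack CLI)
  openstack server show <instance-uuid> -c OS-EXT-SRV-ATTR:host -c OS-EXT-SRV-ATTR:hypervisor_hostname

  # What libvirt actually reports (run on each compute node in question)
  virsh list --all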

Actual results:
The Nova DB is not updated to reflect the new hypervisor location, which subsequently breaks any openstack or nova CLI operations against that instance.
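
As a read-only illustration of the mismatch (assuming direct access to the nova cell database; the UUID is a placeholder):

  mysql nova -e "SELECT uuid, host, node, vm_state, task_state FROM instances WHERE uuid = '<instance-uuid>';"

The host and node columns here still point at the original hypervisor, even though the domain is running on the new one.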

Expected results:
The Nova DB reflects the actual location of the VM after a migration.

Additional info:
We have a solution article about this:
https://access.redhat.com/solutions/2070503

Since the article recommends making a DB update, and this VM has two volumes attached to it, we would like engineering to review the situation and advise whether the DB update is the best way to proceed, or whether we should instead try to resolve the issue using the placement API (see the sketch below).
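
If the placement route is worth considering, a sketch of how we could at least inspect the current state first (assumes the osc-placement CLI plugin is installed; the UUID is a placeholder):

  # Which resource provider currently holds this instance's allocations
  openstack resource provider allocation show <instance-uuid>

  # Map provider UUIDs back to hypervisor hostnames
  openstack resource provider list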

sosreports will be provided along with Nova database dumps

Comment 6 Lee Yarwood 2020-07-16 12:26:14 UTC
Closing this out as a duplicate of 1767928.

*** This bug has been marked as a duplicate of bug 1767928 ***

