Bug 1749382

Summary: Leftover Libvirt VMs on Compute nodes
Product: Red Hat OpenStack
Component: openstack-nova
Version: 13.0 (Queens)
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Target Milestone: Upstream M3
Target Release: 16.0 (Train on RHEL 8.1)
Keywords: Triaged
Reporter: kforde
Assignee: melanie witt <mwitt>
QA Contact: OSP DFG:Compute <osp-dfg-compute>
CC: dasmith, jhakimra, kchamart, lyarwood, mbooth, mwitt, sbauza, sclewis, sgordon, stephenfin, vromanso
Fixed In Version: openstack-nova-20.0.1-0.20191025043858.390db63.el8ost
Doc Type: If docs needed, set a value
Type: Bug
Bug Blocks: 1753453, 1753455, 1763329
Last Closed: 2020-02-06 14:42:04 UTC

Description kforde 2019-09-05 13:49:51 UTC
Description of problem:

The following error is showing up constantly on our environment:

2019-06-27 20:37:16.021 1 ERROR nova.compute.manager [req-75dcf312-f782-49e9-8228-a9f0119e9f70 - - - - -] Error updating resources for node compute-001.redhat.com.: DiskNotFound: No disk at /var/lib/nova/instances/fb8e93fe-31e5-485e-8119-180db8235024/disk

$ nova show fb8e93fe-31e5-485e-8119-180db8235024
ERROR (CommandError): No server with a name or ID of 'fb8e93fe-31e5-485e-8119-180db8235024' exists.

SSHing into the compute nodes, we find libvirt VMs (running or shut off) that correspond to instances which no longer exist in Nova.

The only workaround is to manually 'undefine' the leftover libvirt VM, for example as sketched below.
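
For reference, a manual cleanup along these lines can be used on the affected compute node (the <domain-name> below is a placeholder; on Nova compute nodes the libvirt domain name is typically of the form instance-XXXXXXXX, while the domain UUID matches the Nova instance UUID):

# list all libvirt guests on the node, with their UUIDs, and compare against Nova
$ virsh list --all --uuid
# map a leftover UUID to its libvirt domain name
$ virsh domname fb8e93fe-31e5-485e-8119-180db8235024
# stop the guest if it is still running, then remove its definition
$ virsh destroy <domain-name>
$ virsh undefine <domain-name>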



Version-Release number of selected component (if applicable):

# nova compute node
openstack-nova-migration-17.0.9-9.el7ost.noarch
python-nova-17.0.9-9.el7ost.noarch
python2-novaclient-10.1.0-1.el7ost.noarch
openstack-nova-compute-17.0.9-9.el7ost.noarch
openstack-nova-common-17.0.9-9.el7ost.noarch
puppet-nova-12.4.0-17.el7ost.noarch

# nova controller
python-nova-17.0.9-9.el7ost.noarch
openstack-nova-scheduler-17.0.9-9.el7ost.noarch
python2-novaclient-10.1.0-1.el7ost.noarch
openstack-nova-common-17.0.9-9.el7ost.noarch
puppet-nova-12.4.0-17.el7ost.noarch
openstack-nova-conductor-17.0.9-9.el7ost.noarch
openstack-nova-placement-api-17.0.9-9.el7ost.noarch
openstack-nova-api-17.0.9-9.el7ost.noarch


How reproducible:

Random - it seems to appear after outages and may be related to how frequently the Nova DB tables are purged/archived.

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 melanie witt 2019-09-06 14:59:53 UTC
Just wanted to note that the fix for this bug is going to be a workaround via the --before feature: nova-manage archive_deleted_rows --before <date>

This way, when running archive_deleted_rows from a cron job, --before <date> can be used to ensure that very recently deleted instances are *not* archived at an inopportune time, for example while the cluster is down or experiencing an outage. Then, once the cluster is restored, the normal periodic reap task will take care of cleaning up the orphaned deleted instances' libvirt guests.
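
As a rough sketch of what such a cron job could run (assuming the "db" subcommand form of nova-manage and GNU date; the seven-day window and the --until-complete flag are illustrative, not part of this fix):

# archive only rows that were deleted more than 7 days ago, looping until done
# (window is illustrative; pick a buffer longer than any expected outage)
nova-manage db archive_deleted_rows --before "$(date -d '7 days ago' +%F)" --until-complete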

Besides this workaround, there is a patch in-flight upstream to fix the root cause by cleaning up libvirt guests that are "unknown" to nova:

https://review.opendev.org/627765

but it is very complicated and is not close to a mergeable state yet. There are dangers (race conditions) around potentially destroying libvirt guests that are not actually orphans, for example guests belonging to instances that are in the middle of migrating.

Comment 5 melanie witt 2019-10-18 19:53:55 UTC
Change was merged upstream in Train before the M1 milestone.

Comment 6 melanie witt 2019-10-18 19:54:38 UTC
(In reply to melanie witt from comment #5)
> Change was merged upstream in Train before the M1 milestone.

https://review.opendev.org/556751

Comment 12 errata-xmlrpc 2020-02-06 14:42:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:0283