Bug 1375456 - VM Stopped Event is unexpectedly seen on particular compute node
Summary: VM Stopped Event is unexpectedly seen on particular compute node
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
unspecified
urgent
Target Milestone: ---
Target Release: 10.0 (Newton)
Assignee: Sahid Ferdjaoui
QA Contact: Prasanth Anbalagan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-09-13 08:02 UTC by Chen
Modified: 2019-12-16 06:44 UTC (History)
10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-12-26 09:21:59 UTC
Target Upstream Version:


Attachments (Terms of Use)

Description Chen 2016-09-13 08:02:21 UTC
Description of problem:

VM Stopped Event is unexpectedly seen on one particular compute node. VMs are shut down at random; no specific VM is affected.

Version-Release number of selected component (if applicable):

python-nova-2015.1.0-16.el7ost.noarch

How reproducible:

100% on the affected compute node

Steps to Reproduce:
1.
2.
3.

Actual results:

2016-09-10 05:22:34.355 14263 INFO nova.compute.manager [req-16fe9e48-7b07-473d-993d-efacbda4a92f - - - - -] [instance: 615e1a7c-5fac-4c6a-947b-4051b0193334] VM Stopped (Lifecycle Event)
2016-09-10 05:22:34.506 14263 INFO nova.compute.manager [req-16fe9e48-7b07-473d-993d-efacbda4a92f - - - - -] [instance: 615e1a7c-5fac-4c6a-947b-4051b0193334] During _sync_instance_power_state the DB power_state (1) does not match the vm_power_state from the hypervisor (4). Updating power_state in the DB to match the hypervisor.
2016-09-10 05:22:34.637 14263 WARNING nova.compute.manager [req-16fe9e48-7b07-473d-993d-efacbda4a92f - - - - -] [instance: 615e1a7c-5fac-4c6a-947b-4051b0193334] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 1, current VM power_state: 4
2016-09-10 05:22:34.761 14263 INFO nova.compute.manager [req-9d6df7f9-34ee-4d02-b40a-0f4818e452e4 - - - - -] [instance: 615e1a7c-5fac-4c6a-947b-4051b0193334] Instance is already powered off in the hypervisor when stop is called.
2016-09-10 05:22:34.802 14263 INFO nova.virt.libvirt.driver [req-9d6df7f9-34ee-4d02-b40a-0f4818e452e4 - - - - -] [instance: 615e1a7c-5fac-4c6a-947b-4051b0193334] Instance already shutdown.
2016-09-10 05:22:34.807 14263 INFO nova.virt.libvirt.driver [-] [instance: 615e1a7c-5fac-4c6a-947b-4051b0193334] Instance destroyed successfully.
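
The log sequence above matches the behaviour of Nova's periodic power-state sync: the DB records power_state 1 (RUNNING) while the hypervisor reports 4 (SHUTDOWN), so Nova updates the DB and, because the instance's vm_state is still active, calls the stop API. Below is a minimal, hypothetical sketch of that decision (not Nova's actual code; the numeric constants do mirror nova.compute.power_state):

```python
# Simplified illustration of the choice _sync_instance_power_state makes
# in the log above. This is a sketch, NOT the real Nova implementation.
# Constants mirror nova.compute.power_state values.
NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04

def sync_power_state(db_power_state, vm_power_state, vm_state="active"):
    """Return the action a simplified sync loop would take."""
    if db_power_state == vm_power_state:
        return "no-op"
    # As in the log: first the DB power_state is updated to match the
    # hypervisor, then an active instance found SHUTDOWN is treated as
    # "shutdown by itself" and the stop API is called.
    if vm_state == "active" and vm_power_state == SHUTDOWN:
        return "update-db-and-call-stop-api"
    return "update-db"
```

With the values from the log (DB = 1, hypervisor = 4, vm_state = active) this returns "update-db-and-call-stop-api", which is exactly the WARNING line "Instance shutdown by itself. Calling the stop API."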


Expected results:


Additional info:

The customer has 7 compute nodes and only one of them is suffering from this problem.

I checked the compute node's logs and there was no OOM or qemu crash at that time. Also, the logs in the instance itself show nothing before the shutdown.

What could be the trigger of "VM Stopped (Lifecycle Event)" ?

Comment 9 Chen 2016-09-20 08:19:30 UTC
Hi Sahid,

The customer gave us feedback that he was not sure where to put those configuration options.

In my test, after setting log_filters and log_outputs, /var/log/libvirt/libvirtd.log is not created automatically. Is this expected behaviour?
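
For reference, a minimal example of where these settings would go, assuming the usual libvirtd.conf location (the filter values below are illustrative, not the exact ones requested):

```ini
# /etc/libvirt/libvirtd.conf -- example debug logging configuration
# (illustrative filter values; adjust to the actual request)
log_filters="1:libvirt 1:qemu 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```

Note that libvirtd needs to be restarted (e.g. `systemctl restart libvirtd`) before the new log_outputs take effect; the log file is only created once the daemon starts writing to it.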

Best Regards,
Chen

Comment 10 Chen 2016-09-20 09:20:47 UTC
Hi Sahid,

I noticed that the test package for python-nova has been deleted. Is it mandatory for further investigation? The current status is that the customer has not configured their libvirtd.conf yet.

Best Regards,
Chen

Comment 16 awaugama 2017-09-07 19:13:33 UTC
Closed without a fix, therefore QE won't automate.
