Hi, I encountered the same bug. Moving to High since the only workaround is to delete the VM. It happens with nova-network as well.

openstack-nova-common-2012.2.3-4.el6ost.noarch

[root@puma04 ~(keystone_admin)]$ nova reboot c3074bdc-f93e-41d8-b409-a426be744326
ERROR: Cannot 'reboot' while instance is in vm_state error (HTTP 409) (Request-ID: req-9976e673-5d5b-439d-9a06-398deaf59c28)
[root@puma04 ~(keystone_admin)]$ nova reset-state c3074bdc-f93e-41d8-b409-a426be744326
[root@puma04 ~(keystone_admin)]$ nova reboot c3074bdc-f93e-41d8-b409-a426be744326
ERROR: Cannot 'reboot' while instance is in vm_state error (HTTP 409) (Request-ID: req-1e729b58-9f20-44cc-bf7c-7be7ba46cadf)

(Note: a bare `nova reset-state` puts the instance into the error state; resetting it back to active may require the `--active` flag, depending on the novaclient version.)

From the compute log:

2013-03-12 11:27:25 INFO nova.compute.manager [req-c952cab3-134e-49c6-b8f1-ddcf9f6450ee None None] [instance: c3074bdc-f93e-41d8-b409-a426be744326] Rebooting instance after nova-compute restart.
2013-03-12 11:27:30 INFO nova.virt.libvirt.firewall [req-c952cab3-134e-49c6-b8f1-ddcf9f6450ee None None] [instance: c3074bdc-f93e-41d8-b409-a426be744326] Called setup_basic_filtering in nwfilter
2013-03-12 11:27:30 INFO nova.virt.libvirt.firewall [req-c952cab3-134e-49c6-b8f1-ddcf9f6450ee None None] [instance: c3074bdc-f93e-41d8-b409-a426be744326] Ensuring static filters
2013-03-12 11:27:33 WARNING nova.compute.manager [req-c952cab3-134e-49c6-b8f1-ddcf9f6450ee None None] [instance: c3074bdc-f93e-41d8-b409-a426be744326] Failed to resume instance

nova show output attached.
It seems to happen every 1-2 host reboots.

[root@puma04 ~(keystone_admin)]$ nova show c3074bdc-f93e-41d8-b409-a426be744326
+-------------------------------------+----------------------------------------------------------+
| Property                            | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | MANUAL                                                   |
| OS-EXT-SRV-ATTR:host                | puma04.scl.lab.tlv.redhat.com                            |
| OS-EXT-SRV-ATTR:hypervisor_hostname | puma04.scl.lab.tlv.redhat.com                            |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000012                                        |
| OS-EXT-STS:power_state              | 1                                                        |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | error                                                    |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| config_drive                        |                                                          |
| created                             | 2013-03-10T07:14:58Z                                     |
| flavor                              | m1.tiny (1)                                              |
| hostId                              | fd68b52c230bee600e1b4819221ef01fe7b24c6221e37ecb6d770793 |
| id                                  | c3074bdc-f93e-41d8-b409-a426be744326                     |
| image                               | Fedora17-image (3a8b059f-6c4d-482f-883c-84b80bc7b86b)    |
| key_name                            | None                                                     |
| metadata                            | {}                                                       |
| name                                | VM-FED17                                                 |
| net_vlan_190 network                | 10.35.175.21                                             |
| security_groups                     | [{u'name': u'default'}]                                  |
| status                              | ERROR                                                    |
| tenant_id                           | 49202b0e09a4409c97475392341b57db                         |
| updated                             | 2013-03-12T09:27:33Z                                     |
| user_id                             | 1f0db08c839547339a4ede1d1fb99066                         |
+-------------------------------------+----------------------------------------------------------+
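For anyone scripting around this state, the vm_state can be pulled out of the `nova show` table with a small helper. This is just a sketch assuming the standard table layout shown above; the function name `get_vm_state` is ours, not part of novaclient:

```shell
# Hypothetical helper: extract vm_state from a `nova show <uuid>` table
# fed in on stdin. Splits on '|' so field 2 is the property name and
# field 3 is the value; strips padding spaces before printing.
get_vm_state() {
  awk -F'|' '$2 ~ /OS-EXT-STS:vm_state/ {gsub(/ /, "", $3); print $3}'
}
```

Usage would be `nova show c3074bdc-f93e-41d8-b409-a426be744326 | get_vm_state`, which for the output above prints `error`.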
The issue looks like the resume_guests_state_on_host_boot config option we introduced has problems when the whole host is restarted (we are investigating and will hopefully know more soon and propose an actual fix). As a workaround, we will disable this option by default. We have also opened a new bug, #920704, to track progress on the actual fix. The way to test this is to restart the node and verify that nova does not immediately attempt to bring instances back online. Please note that setting resume_guests_state_on_host_boot to True is now something we want our customers to avoid until we have a full fix.
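For reference, the workaround amounts to the following nova.conf fragment (placement in the [DEFAULT] section is assumed from the standard nova.conf layout):

```ini
# Workaround for this bug: keep the option at the upstream default so
# nova-compute does not try to resume guests after a host reboot.
[DEFAULT]
resume_guests_state_on_host_boot = false
```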
Note that this option is off by default upstream; we had changed it for RHOS, and this workaround changes it back to the upstream default. There is actually a good argument for leaving it off by default anyway, and many deployments would likely prefer it that way. Having a node go down with running instances on it is a failure, and applications using the cloud would likely have moved on, treated the failed instances as gone, and spawned new ones. For those applications, automatically restarting their instances may not be what they want. So right now I'm thinking that once we turn this off, we should just leave it that way, but we should still get to the bottom of this and fix it, since some deployments may still want to turn it on.
Verified with the default resume_guests_state_on_host_boot = false: after a host reboot, all VMs are in SHUTOFF state. Tested on openstack-nova-compute-2012.2.3-7.el6ost.noarch.
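With the option disabled, instances stay SHUTOFF after a host reboot and have to be started manually. A sketch of generating the needed `nova start` commands from `nova list` output; the helper name `list_shutoff_starts` is ours, and the column positions assume the standard `| ID | Name | Status | Networks |` table:

```shell
# Hypothetical sketch: read a `nova list` table on stdin and emit a
# `nova start <uuid>` command for every instance in SHUTOFF state.
# With -F'|', field 2 is the ID column and field 4 is the Status column.
list_shutoff_starts() {
  awk -F'|' '$4 ~ /SHUTOFF/ {gsub(/ /, "", $2); print "nova start " $2}'
}
```

Usage would be `nova list | list_shutoff_starts | sh` to start everything that the host reboot left shut off.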
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2013-0709.html