Description of problem:
When the VMs go from the "shut off" state to "running", they first try to boot from PXE. At this stage, after they have already been provisioned, they should boot only from their hard disk. The boot fails, and the only way to see the problem is to open a console, which shows the failed PXE boot attempt and the message "no bootable device". You have to send Ctrl+Alt+Del to the VM to reboot it, and it then boots from the HD.

Version-Release number of selected component (if applicable):
instack-undercloud-1.0.25-1.fc22.noarch

How reproducible:
100%

Steps to Reproduce:
1. Complete an installation with instack-virt-setup and instack-install-undercloud
2. Open virt-manager on your laptop and establish a remote connection to the host of your VMs. You should see the instack VM and the 4 "baremetal" machines (2 of them will be shut off, but the others are running)
3. Shut off all the running VMs and try to boot them up again. Keep a console open to watch the boot messages

Actual results:
No bootable device
(In reply to Udi from comment #0)
> Description of problem:
> When the VMs go from "shut off" state to "running", they first try to boot
> from PXE. At this stage, after they're already provisioned, they should only
> be booting from their hard disk.

This is not correct. The VMs are PXE-booted on every boot; this is how Ironic works. The undercloud must be running to serve PXE/TFTP/DHCP to the overcloud nodes (whether VMs or baremetal).
Please see my earlier comment. Was the undercloud running?
The instack VM (undercloud) was not running yet because the host had rebooted and all the VMs were started manually, not necessarily in the right order. Perhaps, in this case, the controller could fall back to its other bootable devices (i.e. boot from the hard disk) when the PXE boot fails.
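Until such a fallback exists, the workaround after a host reboot is simply to bring the undercloud up before the overcloud nodes. A minimal sketch with virsh, assuming the VM names ("instack", "baremetal_0" through "baremetal_3") match a typical instack-virt-setup environment; check `virsh list --all` for the actual names on your host:

```shell
#!/bin/sh
# Start the undercloud VM first so its PXE/TFTP/DHCP services are
# available when the overcloud nodes try to network-boot.
virsh start instack

# Rough wait for the undercloud services to come up; adjust as needed.
sleep 120

# Now start the overcloud ("baremetal") VMs.
for vm in baremetal_0 baremetal_1 baremetal_2 baremetal_3; do
    virsh start "$vm"
done
```

Marking the instack VM with `virsh autostart instack` would also make libvirt bring it up automatically on host boot.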
It's not possible now; perhaps it will be added as a future feature.
We've added a local boot feature, so you should no longer be able to reproduce the issue.
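For reference, a hedged sketch of how local boot is typically enabled in an Ironic/TripleO deployment of this era: the node advertises a `boot_option:local` capability and the deployment flavor requests it, so the bootloader is installed on the node's disk and it no longer depends on PXE after provisioning. The node UUID and flavor name below are placeholders:

```shell
# Mark a registered node as capable of booting from its local disk.
ironic node-update <node-uuid> add properties/capabilities="boot_option:local"

# Make the deployment flavor (here assumed to be named "baremetal")
# request the matching capability so the scheduler and deploy process
# apply local boot.
nova flavor-key baremetal set capabilities:boot_option="local"
```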
Verified in puddle 2015-06-26
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2015:1549