Bug 1270874 - Ironic should not be enabling PXE Boot on all VM interfaces
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-ironic
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 8.0 (Liberty)
Assigned To: Lucas Alvares Gomes
QA Contact: Toure Dunnon
Keywords: ZStream
Depends On:
Blocks: 1273561
Reported: 2015-10-12 10:59 EDT by Jason Montleon
Modified: 2016-09-06 11:21 EDT (History)
5 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2016-09-06 11:21:49 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Jason Montleon 2015-10-12 10:59:25 EDT
Description of problem:
RHOS only requires that the Compute nodes be bare metal. When creating VMs with multiple NICs, as is outlined as a requirement for any node (one on the provisioning network and one public-facing), Ironic enables PXE boot for every single interface on the VM when trying to detect and provision it. It is entirely plausible that a DHCP server is available on the public network; if so, the installation can PXE boot to a network that was not intended and never reach Ironic.

I had properly configured my VM to PXE boot only from the provisioning network, and Ironic overrode this configuration.

Ironic should either not touch the boot configuration or intelligently configure PXE boot only for the correct interface.

Version-Release number of selected component (if applicable):
RHOS 7.0

How reproducible:

Steps to Reproduce:
1. Install an undercloud/director
2. Create a VM with multiple interfaces, both with PXE boot available; make the provisioning network for RHOS the second interface, and properly configure the system to PXE boot from the provisioning network.
3. Try to detect the VM for installation as an overcloud host.

Actual results:
The VM PXE boots to the wrong network.

Expected results:
The VM PXE boots to the correct network.

Additional info:
Comment 2 Jason Montleon 2015-10-12 11:00:15 EDT

"It is recommended to use bare metal systems for all nodes. At minimum, the Compute nodes require bare metal systems."
Comment 3 Stephen Herr 2015-10-12 11:24:45 EDT

"Set all Overcloud systems to PXE boot off the Provisioning NIC and disable PXE boot on the External NIC and any other NICs on the system. Also ensure PXE boot for Provisioning NIC is at the top of the boot order, ahead of hard disks and CD/DVD drives."
Comment 4 Lucas Alvares Gomes 2015-11-16 13:13:32 EST

Just for clarification: Ironic is deploying a VM, so it's using the pxe_ssh driver, right? This driver is only a testing driver for Ironic and is not really meant for production.

In any case, with pxe_ssh, Ironic changes the virsh XML of that VM to boot it from "network". It doesn't specify a MAC address or anything like that; all it does is create a "<boot dev='network'/>" element in the "<os>" XML node, e.g.:

    <type arch='x86_64' machine='pc-1.0'>hvm</type>
    <boot dev='network'/>
    <bootmenu enable='no'/>
    <bios useserial='yes'/>

When you said you properly configured the VM to PXE boot only in the provisioning network, can you give me an example of that virsh XML please?

Comment 5 Jason Montleon 2015-11-16 13:30:17 EST
When I add a single interface to the boot order (via virt-manager) it does so like this:

    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <bootmenu enable='no'/>

    <interface type='bridge'>
      <mac address='52:54:00:bb:e4:22'/>
      <source bridge='br-osp-d'/>
      <virtualport type='openvswitch'>
        <parameters interfaceid='8cf44b67-f259-4eaf-b5f1-da07da587efb'/>
      </virtualport>
      <model type='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:2f:d1:1d'/>
      <source bridge='br-osp'/>
      <virtualport type='openvswitch'>
        <parameters interfaceid='4d0fe863-e588-4acb-ba7b-2bfe99aa8730'/>
      </virtualport>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>
Comment 6 Dmitry Tantsur 2016-09-06 11:21:49 EDT

The problem here is that the SSH drivers are not meant for production, so they don't cover all possible cases. On top of that, the SSH drivers are going away in the next release, so even if we fix this now, it will regress soon.

We will use the ipmitool drivers with a service called virtualbmc (https://github.com/openstack/virtualbmc) that translates the IPMI protocol into libvirt calls. You may want to verify that that project has the necessary fixes and open an upstream bug against it. The SSH driver is unlikely to receive any further updates.
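For reference, the virtualbmc workflow looks roughly like the following sketch. The domain name, port, and credentials are placeholder examples, not values from this bug:

```shell
# Register an existing libvirt domain with virtualbmc and expose it as an
# IPMI endpoint on localhost (domain name and credentials are examples):
vbmc add overcloud-node0 --port 6230 --username admin --password password
vbmc start overcloud-node0

# Ironic's ipmitool driver can then manage the VM over IPMI, e.g. forcing
# the next boot from the network (PXE) and powering the machine on:
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password chassis bootdev pxe
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password power on
```

Because boot-device selection then goes through the standard IPMI "chassis bootdev" path rather than rewriting the domain's `<os>` node, per-interface boot configuration in the libvirt XML is less likely to be clobbered, but that should be confirmed against virtualbmc itself.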
