Description of problem:
VM nodes fail to deploy when instance_info is not provided.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Provision two VM nodes in Ironic using the fake_pxe driver
2. Deploy as shown in the documentation

Actual results:
The VMs fail to load the deploy_kernel and deploy_ramdisk. Running 'ironic node-validate' fails with missing driver_info and instance_info.

Expected results:
The documentation should provide example commands for provisioning each node in Ironic.

Additional info:
I have used these to successfully provision and validate:
1. ironic node-update <nodeid> add driver_info/deploy_kernel=<bm_kernel> driver_info/deploy_ramdisk=<bm_ramdisk>
2. ironic node-update <nodeid> add instance_info/image_source=<oc_full_image.qcow2> instance_info/kernel=<oc_kernel> instance_info/ramdisk=<oc_ramdisk> instance_info/root_gb=<size_in_GB>
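One way to fill in those placeholders is to look the image IDs up in Glance first. A minimal sketch, assuming the default image names from the director documentation (bm-deploy-kernel, overcloud-full, etc.) and an illustrative root disk size; adjust for your environment:

DEPLOY_KERNEL=$(openstack image show -f value -c id bm-deploy-kernel)
DEPLOY_RAMDISK=$(openstack image show -f value -c id bm-deploy-ramdisk)
OC_IMAGE=$(openstack image show -f value -c id overcloud-full)
OC_KERNEL=$(openstack image show -f value -c id overcloud-full-vmlinuz)
OC_RAMDISK=$(openstack image show -f value -c id overcloud-full-initrd)

ironic node-update <nodeid> add driver_info/deploy_kernel=$DEPLOY_KERNEL driver_info/deploy_ramdisk=$DEPLOY_RAMDISK
ironic node-update <nodeid> add instance_info/image_source=$OC_IMAGE instance_info/kernel=$OC_KERNEL instance_info/ramdisk=$OC_RAMDISK instance_info/root_gb=40
ironic node-validate <nodeid>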
(In reply to Alexandru Dan from comment #0)
> Description of problem:
> VM nodes fail to deploy when instance_info is not provided.

That's expected. Are you using Ironic in standalone mode? Otherwise the Ironic Nova driver should populate the instance_info attributes as part of the deployment process.
No, I was using an OSP7 director node and was trying to deploy an environment. Since then I have moved to OSP8, which no longer exhibits this issue.
(In reply to Alexandru Dan from comment #3)
> No, I was using an OSP7 director node and was trying to deploy an environment.

With fake_pxe? Maybe I'm missing the context, because the fake_pxe driver is a testing driver only and the interface that deploys the node is mocked (fake).
AFAIK the fake_pxe driver can be used when there is no automation for powering a machine on or off. Since there is no VMware driver that can power a VMware machine on and off, I am using the fake_pxe driver in Ironic and powering the VMs on and off by hand.

It's really not a complicated mechanism, and it works. After inserting the correct properties into Ironic, I finally managed to deploy OpenStack Platform 7 on some VMs using fake_pxe and a basic overcloud deployment.

The problem is not which driver was used, but which images do or do not get pushed into the PXE root to be booted by the machines (VMs).

So yes, the driver is "fake", but that should not matter during a deployment: the user already acknowledges that they have to power the machines on and off manually, and that's it.
(In reply to Alexandru Dan from comment #5)
> AFAIK the fake_pxe driver can be used when there is no automation for
> powering a machine on or off. Since there is no VMware driver that can
> power a VMware machine on and off, I am using the fake_pxe driver in
> Ironic and powering the VMs on and off by hand.
>
> It's really not a complicated mechanism, and it works. After inserting the
> correct properties into Ironic, I finally managed to deploy OpenStack
> Platform 7 on some VMs using fake_pxe and a basic overcloud deployment.
>
> The problem is not which driver was used, but which images do or do not
> get pushed into the PXE root to be booted by the machines (VMs).
>
> So yes, the driver is "fake", but that should not matter during a
> deployment: the user already acknowledges that they have to power the
> machines on and off manually, and that's it.

Fair enough, yeah, it should work... I just found it a little strange because upstream we only use the fake_* drivers for unit testing.

As a note, you could automate the VMware power control by using the pxe_ssh driver [0]. Or you can use the pxe_ipmitool driver + VirtualBMC [1] (see the usage demo [2]); VirtualBMC uses libvirt internally to power control the VMs, and libvirt does support VMware [3].

[0] https://review.openstack.org/#/c/64542/
[1] https://github.com/openstack/virtualbmc
[2] https://raw.githubusercontent.com/umago/virtualbmc/master/images/demo.gif
[3] http://libvirt.org/
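For reference, a rough sketch of the VirtualBMC route. The domain name, port, and credentials below are just examples, and this assumes the VM is exposed as a libvirt domain reachable from the host running vbmc:

# register and start a virtual BMC bound to the libvirt domain backing the VM
vbmc add overcloud-vm-1 --port 6230 --username admin --password password
vbmc start overcloud-vm-1

# point the Ironic node (using the pxe_ipmitool driver) at that virtual BMC
ironic node-update <nodeid> add driver_info/ipmi_address=<host_running_vbmc> driver_info/ipmi_port=6230 driver_info/ipmi_username=admin driver_info/ipmi_password=password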
Hello,

You can use 'ironic driver-properties <DRIVER>' and 'ironic node-validate <NODE UUID>' to check the list of required (and missing) properties.

As per comment 3 I'm closing this bug.
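For example, a quick check against the fake_pxe driver used in this report (substitute your own node UUID):

ironic driver-properties fake_pxe
ironic node-validate <NODE UUID>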