Several use cases call for a virtualised OSP control plane, allowing operators to reduce the number of bare-metal machines the control plane requires, including the controllers and, potentially, a number of composable roles within the same control plane.
An OSP overcloud control plane virtualised with RHV/oVirt requires an Ironic driver for OSP-d to manage the VMs that will make it up (e.g. three controllers plus a number of composable roles).
This driver will work with the Ironic instance in the undercloud, integrated with director/TripleO.
We also need:
1. packaging for ironic-staging-drivers (may already exist)
2. adding ironic-staging-drivers to RHOS
3. installing ironic-staging-drivers on the undercloud
4. changing the undercloud to allow enrolling nodes with ovirt
5. downstream CI
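To illustrate item 4, enrolling an RHV-hosted VM might look roughly like the instackenv.json fragment below. This is a sketch only: the "staging-ovirt" pm_type name and the pm_vm_name field are assumptions based on the staging driver, and the address, credentials, and MAC are placeholders.

```json
{
  "nodes": [
    {
      "name": "osp13-controller1",
      "pm_type": "staging-ovirt",
      "pm_addr": "rhvm.example.com",
      "pm_user": "admin@internal",
      "pm_password": "secret",
      "pm_vm_name": "osp13-controller1",
      "mac": ["00:1a:4a:16:01:51"],
      "capabilities": "profile:control,boot_option:local"
    }
  ]
}
```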
Changing the component, as this RFE won't be implemented in ironic itself; the remaining work is in the undercloud, so moving the bug there.
Also status -> ASSIGNED, as the undercloud patches are not up yet.
*** Bug 1500146 has been marked as a duplicate of this bug. ***
All patches merged, ready for testing.
Feedback from Rhys on the initial testing:
1) The current OSP13 puddle has a number of issues with overcloud deployment. I had to use a slightly older puddle (2018-02-14.1) which has its own set of bugs, but these will be ironed out over the next few weeks.
2) We ran into some problems with the current oVirt driver as part of ironic_staging_drivers: we couldn't power on nodes through a 'nova boot' process, but 'ironic node-set-power-state' worked just fine. Karim was really helpful here and identified a bug due to differences in the oVirt SDK. This has been patched: https://review.openstack.org/#/c/548943/1 - please can we get some reviews on this, as it's absolutely essential; without this patch we can't provision.
3) The import of instackenv files to define the nodes is slightly broken for oVirt machines. It forces you to specify fields that you don't need to pull in, and the import hangs until you remove them (see ~/fix-ovirt-hosts.sh); once they are removed, the import continues and you can introspect.
4) We need to properly document the package requirements and dependencies for the oVirt driver: you need to install a number of packages and then the oVirt SDK itself. You can use pip, or we need to ship the RPM in OSP (see ~/before-import.sh).
5) RHV doesn’t support VLAN trunking, so you have to create a new vNIC for each network traffic type and specify these, although there’s a multi-nics template example that ships with openstack-tripleo-heat-templates that can be customised to suit.
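As a sketch of that multi-NIC approach, the deploy can point at customised copies of the templates shipped with openstack-tripleo-heat-templates via an environment file along these lines. The file paths are illustrative placeholders and the NIC configs themselves would need adjusting to the actual vNIC layout:

```yaml
# Map each role to its own customised NIC config, since each RHV vNIC
# carries exactly one traffic type (no VLAN trunking).
resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
```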
For reference, the scripts referenced in comment 10:
$ cat fix-ovirt-hosts.sh
# Remove the oVirt driver_info fields that the instackenv import
# wrongly requires, so the hung import can proceed.
for i in 1 2 3; do
  for field in ovirt_address ovirt_password ovirt_username; do
    ironic node-update osp13-controller$i remove driver_info/$field
  done
done
$ cat before-import.sh
# Build dependencies for the oVirt Python SDK, then the SDK itself via pip.
sudo easy_install pip
sudo yum install -y gcc python-devel libxml2-devel libcurl-devel
sudo pip install ovirt-engine-sdk-python
# Restart ironic-conductor so it picks up the newly installed SDK.
sudo systemctl restart openstack-ironic-conductor
Installed the latest build and successfully deployed an overcloud with the oVirt drivers.
This was successfully deployed with the latest OSP13, using the python-ovirt-engine-sdk4 package, which is now part of the release.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.