Red Hat Bugzilla – Bug 856149
Need to provide a formal API for Nova to use to plug/unplug VIFs
Last modified: 2016-04-26 12:18:01 EDT
Description of problem:
The current interaction model between Nova and Quantum is critically flawed in a number of respects:
- There is a hard configuration dependency between Quantum & Nova, i.e. when changing the Quantum network driver implementation, the admin must also update the Nova libvirt_vif_driver config parameter to match.
- Some Quantum drivers are providing custom plugin implementations of the Nova libvirt VIF driver classes. This exposes Quantum to the internal implementation details of the libvirt driver in Nova. These implementation details ought to be private to Nova, since they can be changed arbitrarily at any time, which will break Quantum plugins (e.g. see bug 1046758).
- Some Quantum drivers are tied to usage with Nova and libvirt. This is more or less the same point as above: since the drivers need to provide custom libvirt VIF drivers to work with Nova, this ties Quantum to Nova + libvirt, preventing its re-use with other non-libvirt drivers or applications like oVirt.
- Some Quantum drivers are doing work which belongs under the hypervisor's control, e.g. the Quantum Linux bridge driver wants to create TAP devices itself & add them to the bridge. This is only achievable by using the libvirt type=ethernet VIF config. Not only is this config designated unsupported by RHEL (due to its inherent security limitations), but it can only ever work with KVM. libvirt's LXC and Xen drivers do not use TAP devices for their networking, and want to be in charge of adding their own interface to the bridge.
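To illustrate the distinction, with a type=ethernet interface the management layer must pre-create the TAP device itself, while with a type=bridge interface libvirt creates the TAP device and attaches it to the named bridge on the hypervisor's behalf. Example libvirt domain XML fragments (device and bridge names are illustrative):

```xml
<!-- type=ethernet: caller pre-creates the TAP device itself;
     designated unsupported on RHEL -->
<interface type='ethernet'>
  <target dev='tap0'/>
</interface>

<!-- type=bridge: libvirt creates the TAP device and enslaves it
     to the named bridge -->
<interface type='bridge'>
  <source bridge='br0'/>
  <target dev='vnet0'/>
</interface>
```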
All these problems could be solved if Quantum exposed a formal API for compute services to call to plug/unplug VIFs, instead of relying on hooking into the libvirt VIF driver internals. The API would do any port configuration work that might be necessary, and then return information about where the VIF should be attached & what parameters it should use. The compute service would then decide the optimal libvirt configuration & let the hypervisor actually create & attach the VIF to the network.
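A minimal sketch of what such a plug/unplug API might look like; the class name, method signatures, and returned fields below are purely illustrative assumptions, not the actual blueprint interface:

```python
# Hypothetical sketch of the proposed Quantum VIF plugging API.
# All names here are illustrative, not the real blueprint interface.

class VIFPluggingAPI:
    """Formal API a compute service would call instead of hooking
    into Nova's libvirt VIF driver internals."""

    def plug(self, port_id, host):
        # Quantum does whatever port setup it needs (e.g. ensuring
        # the bridge exists), then returns attachment info so the
        # compute service can build hypervisor-specific config.
        return {
            "vif_type": "bridge",                 # e.g. bridge / ovs
            "bridge_name": "brq-" + port_id[:11], # illustrative naming
            "mac_address": "fa:16:3e:00:00:01",
        }

    def unplug(self, port_id, host):
        # Tear down any port-specific configuration on the host.
        pass


# The compute service (e.g. Nova's libvirt driver) consumes the
# returned info, keeping libvirt implementation details private to Nova.
api = VIFPluggingAPI()
info = api.plug("3f8c1d2a-1111-2222-3333-444455556666", "compute-1")
```

The key design point is that the hypervisor-facing side only ever sees generic attachment data (VIF type, bridge name, MAC), never Quantum driver internals.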
See also upstream bug
Version-Release number of selected component (if applicable):
This will be addressed in https://blueprints.launchpad.net/quantum/+spec/vif-plugging-improvements. This is targeted for Grizzly.
There are two pending reviews that can be backported to Folsom; these improve the usage of the linuxbridge plugin:
Quantum - https://review.openstack.org/#/c/14961/
Nova - https://review.openstack.org/#/c/14830/
In the nova conf, libvirt_vif_driver should be set as follows: libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver
Nova will receive the relevant information from Quantum. This should work out of the box.
In short, we should check that the linuxbridge and OVS plugins are able to spawn VMs, and that traffic flows from them, when the aforementioned driver is set.
There are a number of notes here to be aware of:
1. When using the linuxbridge plugin the new generic driver can be used (libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver)
2. When using openvswitch as the Quantum plugin, the driver
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
should be used. This is required for Quantum security group support, which is currently being addressed upstream.
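The two notes above translate into the following nova.conf settings (only the relevant line is shown for each case; pick one depending on the Quantum plugin in use):

```ini
# linuxbridge plugin: the new generic driver
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtGenericVIFDriver

# openvswitch plugin: hybrid driver, required for Quantum
# security group support
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
```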
To check the installed Quantum packages: rpm -qa | grep quan
OVS: VMs get an IP and communicate with outside components.