Support has been added for intelligent NUMA node placement for guests that have been assigned a host PCI device. A PCI I/O device, such as a Network Interface Card (NIC), is more closely associated with one processor than another. This matters because memory attached to the local processor can be accessed with lower latency and better performance than memory attached to a remote processor in the same server. With this update, OpenStack guest placement can be optimized by ensuring that a guest bound to a PCI device is scheduled to run on a NUMA node associated with the guest's pCPU and memory allocation. For example, if a guest's resource requirements fit in a single NUMA node, all guest resources will now be associated with that node.
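As a rough sketch of how an operator might exercise this feature, the fragment below combines Nova's PCI passthrough configuration with a single-NUMA-node flavor. The vendor/product IDs, alias name, and flavor name are placeholders, not values from this bug report; consult the Nova configuration reference for the exact syntax in your release.

```shell
# /etc/nova/nova.conf on the compute node (placeholder IDs -- substitute
# the actual SR-IOV device present on the host):
#   pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10ed"}
#   pci_alias = {"vendor_id": "8086", "product_id": "10ed", "name": "nic1"}

# Hypothetical flavor that requests one device via the alias and confines
# the guest to a single NUMA node, so CPU, memory, and the PCI device
# are all allocated from the same node:
nova flavor-key m1.numa set "pci_passthrough:alias"="nic1:1"
nova flavor-key m1.numa set "hw:numa_nodes"=1
```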
Description by RHOS Integration, 2015-02-17 05:01:54 UTC:
Cloned from launchpad blueprint https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling.
Description:
The NUMA locality of I/O devices is another important characteristic to consider when configuring a high performance, low latency system for NFV workloads.
This blueprint aims to combine NUMA-based PCIe device information with the CPU/NUMA topology information included in the Juno release.
This optimizes OpenStack guest placement by ensuring that a guest bound to a PCI device is scheduled to run on a NUMA node associated with the guest's pCPU and memory allocation.
Specification URL (additional information):
http://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/input-output-based-numa-scheduling.html
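The selection logic the blueprint describes can be sketched as follows. This is a simplified illustration, not Nova's actual scheduler code (which lives in the Nova source tree); the node dictionaries and the `pick_numa_node` helper are hypothetical.

```python
def pick_numa_node(host_nodes, device_numa_node, vcpus, mem_mb):
    """Return the id of a host NUMA node that can hold the guest,
    preferring the node local to the assigned PCI device.

    host_nodes: list of dicts with "id", "free_vcpus", "free_mem_mb".
    device_numa_node: NUMA node id the PCI device is attached to.
    """
    def fits(node):
        return node["free_vcpus"] >= vcpus and node["free_mem_mb"] >= mem_mb

    # First choice: the node the PCI device is attached to, so guest
    # CPUs, memory, and I/O device share one NUMA node.
    for node in host_nodes:
        if node["id"] == device_numa_node and fits(node):
            return node["id"]
    # Fallback: any node with capacity (I/O becomes remote, so latency
    # is higher -- the case this feature tries to avoid).
    for node in host_nodes:
        if fits(node):
            return node["id"]
    return None  # host cannot fit the guest in a single node

# Example: two nodes, the device sits on node 1.
host = [{"id": 0, "free_vcpus": 2, "free_mem_mb": 4096},
        {"id": 1, "free_vcpus": 8, "free_mem_mb": 16384}]
print(pick_numa_node(host, device_numa_node=1, vcpus=4, mem_mb=8192))  # 1
```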
Created attachment 1040678: Relevant info, single instance
I still need to do some additional testing, but it looks good so far with a single SR-IOV NIC allocated to an instance.
I also repeatedly booted new instances, verifying that each VM was pinned to the correct NUMA node as it came up, and that once all the VFs were consumed, no further instances could be booted.
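The kind of check described above can be performed from the host; the commands below are a hedged sketch, with the VF's PCI address and the libvirt domain name as placeholders for real values.

```shell
# Placeholder VF address and domain name -- substitute real ones.
VF=0000:05:10.1

# NUMA node the VF is attached to (standard sysfs attribute).
cat /sys/bus/pci/devices/$VF/numa_node

# Which host CPUs belong to which NUMA node.
lscpu | grep "NUMA node"

# Host pCPUs each guest vCPU is pinned to; these should fall within
# the CPU range of the VF's NUMA node.
virsh vcpupin instance-00000001
```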
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHEA-2015:1548