Support has been added for intelligent NUMA node placement for guests that have been assigned a host PCI device. PCI I/O devices, such as Network Interface Cards (NICs), can be more closely associated with one processor than another. This matters because accessing memory attached to the local processor has different performance and latency characteristics than accessing memory attached to another processor in the same server. With this update, OpenStack guest placement can be optimized by ensuring that a guest bound to a PCI device is scheduled on a NUMA node associated with the guest's CPU and memory allocation. For example, if a guest's resource requirements fit within a single NUMA node, all of the guest's resources will now be associated with that NUMA node.
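As an illustration only (the flavor name, alias name, and image below are examples, not values taken from this bug), such a guest is typically requested through flavor extra specs: a PCI alias already defined in nova.conf asks for the device, and hw:numa_nodes=1 asks for a single-node guest topology that the scheduler can then co-locate with the device's NUMA node:

    nova flavor-key m1.numa.nic set "pci_passthrough:alias"="a1:1" hw:numa_nodes=1
    nova boot --flavor m1.numa.nic --image rhel7 nfv-guest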
Description  RHOS Integration
2014-06-05 04:02:12 UTC
Cloned from Launchpad blueprint https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling.
Description:
The NUMA locality of I/O devices is another important characteristic to consider when configuring a high performance, low latency system for NFV workloads.
This blueprint aims to combine NUMA-based PCIe device information with the CPU/NUMA topology work being developed in "Virt driver guest NUMA node placement & topology" (https://blueprints.launchpad.net/nova/+spec/virt-driver-numa-placement).
The proposal is to extend the nova pci_devices database table with a NUMA cell field; this field will be populated by the libvirt driver.
The NUMA filter (in development) will be extended to query the nova pci_devices table and check whether the requested NUMA placement is permitted (a sketch follows this description).
Specification URL (additional information):
None
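A minimal, hypothetical sketch of the two pieces described above (illustrative Python, not the actual Nova patch; the function names, argument names, and data layout are invented for this example). The first part shows how a virt driver can learn the NUMA node of a host PCI device from sysfs, which is the kind of value the proposed pci_devices field would store; the second shows the style of check a NUMA-aware filter could perform against that value:

    # Illustrative only: not the actual Nova implementation.
    from pathlib import Path

    def pci_device_numa_node(pci_addr):
        """Return the NUMA node of a PCI device, or None if unknown.

        Linux exposes the value in sysfs; -1 means the platform did not
        report a node for this device.
        """
        node_file = Path("/sys/bus/pci/devices") / pci_addr / "numa_node"
        try:
            node = int(node_file.read_text().strip())
        except (OSError, ValueError):
            return None
        return node if node >= 0 else None

    def host_passes(requested_devices, host_pci_numa, host_cells):
        """Filter-style check: every requested PCI device must sit on a host
        NUMA cell that can also hold the guest's vCPUs and memory.

        host_pci_numa maps device address -> NUMA node (the proposed
        pci_devices field); host_cells maps NUMA node -> free resources.
        """
        for dev in requested_devices:
            cell = host_cells.get(host_pci_numa.get(dev["address"]))
            if cell is None:
                return False
            if cell["cpus"] < dev["vcpus"] or cell["mb"] < dev["ram_mb"]:
                return False
        return True

    if __name__ == "__main__":
        # Example host: a NIC on node 0; the guest wants 4 vCPUs and 8 GB
        # of RAM alongside it.
        requested = [{"address": "0000:04:00.0", "vcpus": 4, "ram_mb": 8192}]
        pci_numa = {"0000:04:00.0": 0}
        cells = {0: {"cpus": 8, "mb": 16384}, 1: {"cpus": 8, "mb": 16384}}
        print(host_passes(requested, pci_numa, cells))  # -> True

The sysfs numa_node file reports -1 when the firmware does not expose locality for the device, which is why the lookup treats that case as "unknown" rather than as a real node.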
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHSA-2015-0790.html