In RHEL-OSP 7.0, the NUMATopologyFilter used during instance scheduling now has PCI device awareness. The description of this filter in "3.5.1. Configure Scheduling Filters" in the Administration Guide must be updated to include brief details of the new functionality. Blueprint: https://blueprints.launchpad.net/nova/+spec/input-output-based-numa-scheduling
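For context, the filter itself is enabled the same way as before, via the scheduler filter list in nova.conf. A minimal sketch (the filter list below is shortened for illustration; the actual default set is longer):

    [DEFAULT]
    # NUMATopologyFilter must be in the active filter list for
    # NUMA-aware (and now PCI-locality-aware) placement to take effect.
    scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,NUMATopologyFilter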
Assigning to Radek for review. Radek - this is another Nova bug, and the changes can be applied to the content in the Administration Guide for now.
While reading the relevant part of the Administration Guide, I found the following bullet point: * The nova boot command, see the "Command-Line Interface Reference" in https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/. It would be more convenient if the link led directly to https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/6/html/Command-Line_Interface_Reference_Guide/novaclient_commands.html#novaclient_subcommand_boot. (The OSP version in the URL would depend on the particular version described in the Administration Guide.)
Stephen, could you please check whether the following description is technically correct? "With I/O (PCIe) based NUMA scheduling, the filter allows for locality of PCI devices passed to the guest and ensures that the guest is scheduled on the requested host NUMA node, thus improving performance and latency." (I'm thinking of appending that to the current description of the NUMATopologyFilter.)
Close; more accurately: with I/O (PCIe) based NUMA scheduling, the NUMA locality of PCI devices attached to guests is stored, where the hardware chipset supports NUMA locality of PCI devices. The NUMATopologyFilter uses this information to ensure that the guest is scheduled on the host NUMA node associated with the PCI device(s) it has been passed, thus improving performance and latency.
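To make the behaviour concrete, a hedged example of how a guest might come under this filter's new logic. The flavor name m1.numa, the alias gpu1, and the image rhel7 are hypothetical, and gpu1 assumes a matching pci_alias entry already exists in nova.conf:

    # Hypothetical flavor keys: pin the guest to one NUMA node and request
    # one PCI device via the (assumed) "gpu1" alias defined in nova.conf.
    nova flavor-key m1.numa set "hw:numa_nodes=1" "pci_passthrough:alias=gpu1:1"
    # Boot a guest with that flavor; NUMATopologyFilter then prefers a
    # host NUMA node local to the passed-through PCI device.
    nova boot --flavor m1.numa --image rhel7 numa-guest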
Assigning Don as the QA contact. Don, could you take a look at the newly added content?
Edited and merged (although it wasn't that simple due to a strange merge conflict in GitLab).
This content is now live on the Customer Portal. Closing.