Description of problem:

Attaching a PCIe device to an instance will implicitly add a virtual NUMA topology to that instance. Unless otherwise specified (via PCI NUMA policies), the instance will be confined to the host NUMA node that the PCI device is affinitized to. If the guest is additionally configured with multiple virtual NUMA nodes (e.g. the 'hw:numa_nodes=2' extra spec), each virtual NUMA node will be associated with a different host NUMA node. However, the PCI device will always be associated with the first virtual NUMA node. If the PCI device is actually affinitized to the host NUMA node backing one of the other virtual NUMA nodes, this results in cross-NUMA traffic and reduced performance.

Version-Release number of selected component (if applicable):

RHEL OSP 10

How reproducible:

Every time.

Steps to Reproduce (see the CLI sketch after this report):
1. Using a multi-socket host, ensure a PCI device is located in a NUMA node other than node 0.
2. Spawn a two-node instance with the PCI device attached.

Actual results:

NUMA affinity is not accounted for in the instance XML, resulting in cross-NUMA traffic for the PCI device.

Expected results:

The PCI device should be associated with the virtual NUMA node that maps to the same host NUMA node as the PCI device itself.
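A rough reproduction sketch, assuming a PCI alias 'a1' is already defined in nova.conf ([pci]/alias and [pci]/passthrough_whitelist) on the controller and compute; the PCI address, flavor/image/network names, and libvirt domain name below are placeholders:

  # On the compute host: confirm the PCI device sits on a non-zero host NUMA node
  # (0000:81:00.0 is a placeholder address)
  cat /sys/bus/pci/devices/0000:81:00.0/numa_node   # expect e.g. "1"

  # Create a two-node flavor that also requests the PCI device via its alias
  openstack flavor create --ram 4096 --disk 20 --vcpus 4 numa-pci-test
  openstack flavor set numa-pci-test \
      --property hw:numa_nodes=2 \
      --property "pci_passthrough:alias"="a1:1"

  # Boot an instance from that flavor (image/network names are placeholders)
  openstack server create --flavor numa-pci-test --image rhel-guest \
      --network private pci-numa-guest

  # On the compute host: inspect the generated domain XML (domain name is a
  # placeholder). The <hostdev> entry is not tied to the guest NUMA cell that
  # maps to the device's host NUMA node.
  virsh dumpxml instance-0000000a | grep -E -A6 'numa|hostdev'

In the expected result, the generated XML would expose the device under the guest NUMA cell that corresponds to its host node (libvirt can express this, for example, with a pcie-expander-bus controller carrying a NUMA node target), rather than leaving it implicitly on the first virtual node.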
*** Bug 1697906 has been marked as a duplicate of this bug. ***
https://www.berrange.com/posts/2017/02/16/setting-up-a-nested-kvm-guest-for-developing-testing-pci-device-assignment-with-numa/