This RFE focuses on making NUMA affinity for SR-IOV/PCI devices optional. The spec missed the Pike deadline so this has been deferred to Queens.
As (hopefully) noted previously, this is being handled by an engineer at Mirantis. I plan to keep an eye on it this cycle and step in if necessary.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2086
A mistake was made during implementation of this feature. While this RFE specifically called out support for optional NUMA affinity for SR-IOV devices, what was implemented upstream was support for optional NUMA affinity for standard PCI passthrough devices. The two are handled differently. SR-IOV devices are created by neutron and attached as network devices. For example:

  openstack port create ...
  openstack server create --nic port-id=$port_id ...

PCI passthrough devices, by comparison, are requested via PCI aliases in the flavor and attached by nova at boot time:

  openstack flavor set m1.large --property "pci_passthrough:alias"="a1:2"
  openstack server create --flavor m1.large ...

The feature, as currently implemented, allows PCI NUMA policies to be defined in the alias configuration in 'nova.conf' (see the example at the end of this comment) and therefore only supports the latter type of attachment.

Clearly some additional work is required here. However, given that the feature as implemented is useful in its own right (FPGAs come to mind), we should build on what has been done rather than replace it. As a result, I'm going to clone this BZ: the clone will focus on closing the SR-IOV gap, while this BZ will be retitled to cover the PCI passthrough case that has already been addressed.
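For reference, the policy is set via the 'numa_policy' key in the '[pci] alias' option in 'nova.conf'. A minimal sketch; the vendor/product IDs below are placeholders, not from this BZ:

  [pci]
  # Hypothetical alias "a1": request a PF and only *prefer* (rather than
  # require) that it share a NUMA node with the instance's CPUs/memory.
  alias = { "name": "a1", "vendor_id": "8086", "product_id": "154d", "device_type": "type-PF", "numa_policy": "preferred" }

Valid policies are 'required', 'legacy' and 'preferred'; with 'preferred', scheduling no longer fails outright when no device on the instance's NUMA node is available.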
*** Bug 1663653 has been marked as a duplicate of this bug. ***
Can you please review https://access.redhat.com/support/cases/#/case/02255851? This issue was observed in RH OSP13. Has it been fixed now?
(In reply to Vinayak from comment #15)
> Can you please review https://access.redhat.com/support/cases/#/case/02255851?
> This issue was observed in RH OSP13. Has it been fixed now?

I fail to see what hugepage allocation issues have to do with this feature. Could you elaborate (via a new bug), please?