When a VM with NUMA configured is started, the Engine generates CPU pinning to make sure the vCPUs are placed on the respective NUMA nodes on the host (based on the configured NUMA pinning). But because the Engine specifies the CPU policy as "none", this CPU pinning is later discarded by VDSM and CPUs from the shared pool are used instead. The CPU policy for NUMA VMs should be set to "manual" to preserve the CPU pinning.
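For reference, the Engine-generated pinning appears as <vcpupin> elements under <cputune> in the libvirt domain XML, and one way to see whether it survived startup is to compare that XML against the effective vCPU affinity. A minimal sketch, assuming a VM named "numa-vm" (a placeholder) on a host with virsh access:

  # Placeholder VM name; substitute your own.
  VM=numa-vm

  # Per-vCPU pinning as written into the domain XML by the Engine.
  virsh -r dumpxml "$VM" | grep -A 8 '<cputune>'

  # Effective vCPU affinity as libvirt reports it; with the bug present,
  # this falls back to the shared CPU pool instead of the pinned CPUs.
  virsh -r vcpupin "$VM"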
This bug report has Keywords: Regression or TestBlocker. Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release. Please resolve ASAP.
Verified on 4.5.0.2-0.7.el8ev by running the automation tests sla/rhevmtests.compute.sla.numa.numa_test.TestStrictNumaModeOnVM.test_cpu_pinning and numa_test.TestPreferModeOnVm.test_cpu_pinning. The tests get the NUMA CPU pinning on the host and compare it with the VM's CPU pinning as reported by cat /proc/700992/task/*/status | grep Cpus_allowed_list. Also tested for the migration case.
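The same check can be reproduced manually without the automation suite. A sketch, assuming the VM runs as a single qemu-kvm process and is named "numa-vm" (a placeholder):

  # Find the QEMU process for the VM (the guest= name is an assumption
  # about how libvirt names the process on this host).
  PID=$(pgrep -f 'qemu-kvm.*guest=numa-vm')

  # Allowed CPUs for each vCPU thread; with the fix these should match
  # the Engine-generated NUMA pinning rather than the shared pool.
  cat /proc/$PID/task/*/status | grep Cpus_allowed_list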
This bugzilla is included in the oVirt 4.5.0 release, published on April 20th 2022. Since the problem described in this bug report should be resolved in the oVirt 4.5.0 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.