Description of problem:
Setting vNUMA pinning for a VM with two vNUMA nodes and an odd number of vCPUs does not automatically pin all vCPUs to physical CPUs as required; one vCPU is always left un-pinned.

Version-Release number of selected component (if applicable):
4.2 master

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with 3 vCPUs and 2 vNUMA nodes pinned to 2 of the host's NUMA nodes.
2. Leave the 'CPU pinning topology' field in the 'Resource allocation' tab empty, so that the CPU pinning is calculated automatically by the vNUMA pinning algorithm.
3. Run the VM and check the libvirt XML to see the vCPU pinning topology.

Actual results:
vCPU #2 (the third vCPU) is not pinned at all; only vCPUs #0 and #1 are pinned.

Expected results:
All 3 vCPUs should be pinned.
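The expected behavior above can be illustrated with a minimal sketch (hypothetical code, not the actual oVirt implementation): split the vCPUs into contiguous chunks, one chunk per vNUMA node, giving the earlier nodes the extra vCPU when the count is odd, so no vCPU is left unassigned.

```python
def distribute_vcpus(vcpu_count, node_count):
    """Map each vNUMA node index to a contiguous slice of vCPU indices.

    When vcpu_count does not divide evenly, the first
    (vcpu_count % node_count) nodes each receive one extra vCPU,
    so every vCPU ends up assigned to some node.
    """
    assignment = {}
    start = 0
    for node in range(node_count):
        size = vcpu_count // node_count
        if node < vcpu_count % node_count:
            size += 1  # spread the remainder over the first nodes
        assignment[node] = list(range(start, start + size))
        start += size
    return assignment

# 3 vCPUs over 2 vNUMA nodes: vCPUs 0 and 1 on node 0, vCPU 2 on node 1.
print(distribute_vcpus(3, 2))  # {0: [0, 1], 1: [2]}
```

This matches the pinning seen in the verified libvirt XML below, where vCPUs 0 and 1 share the first node's cpuset and vCPU 2 gets the second node's.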
Verification scenario:
Host tab: NUMA node count = 2
NUMA pinning tab: pin vNUMA nodes to NUMA nodes
System tab: Total vCPUs = 3

On the host:
# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 12276 MB
node 0 free: 311 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 12288 MB
node 1 free: 55 MB
node distances:
node   0   1
  0:  10  21
  1:  21  10

From the VM XML dump:
<cputune>
  <vcpupin vcpu='0' cpuset='0-5,12-17'/>
  <vcpupin vcpu='1' cpuset='0-5,12-17'/>
  <vcpupin vcpu='2' cpuset='6-11,18-23'/>
</cputune>
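The cpuset strings in the `<vcpupin>` elements above are compressed range notation for the CPU lists that `numactl -H` reports per node. A small sketch (hypothetical helper, not part of oVirt or libvirt) of that compression:

```python
def cpuset_string(cpus):
    """Compress a sorted list of host CPU ids into libvirt-style
    cpuset notation, e.g. [0,1,2,3,4,5,12,...,17] -> '0-5,12-17'."""
    ranges = []
    start = prev = cpus[0]
    for cpu in cpus[1:]:
        if cpu == prev + 1:
            prev = cpu          # extend the current run
        else:
            ranges.append((start, prev))  # close the run, open a new one
            start = prev = cpu
    ranges.append((start, prev))
    return ','.join(f'{a}-{b}' if a != b else f'{a}' for a, b in ranges)

# node 0 cpus from the numactl output above:
node0 = [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]
print(cpuset_string(node0))  # 0-5,12-17
```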
This bug is included in the oVirt 4.2.2 release, published on March 28th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.2, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.