Bug 1511488 - vNUMA pinning does not automatically pin all vCPUs as required in case of an odd number of vCPUs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Backend.Core
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ovirt-4.2.2
Target Release: 4.2.2.2
Assignee: Andrej Krejcir
QA Contact: Polina
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-09 12:57 UTC by Sharon Gratch
Modified: 2018-04-18 12:25 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: When NUMA nodes were created in the UI, the CPUs were divided equally between the nodes, so every node was assigned the same number of CPUs.
Consequence: If the total number of CPUs was not divisible by the number of NUMA nodes, the remaining CPUs were not pinned to any node.
Fix: The remaining CPUs are now pinned to nodes as well.
Result: Some nodes have one more CPU pinned than the rest.
Clone Of:
Environment:
Last Closed: 2018-04-18 12:25:37 UTC
oVirt Team: SLA
Embargoed:
rule-engine: ovirt-4.2+




Links
oVirt gerrit 86965 (master, MERGED): webadmin: Make VM numa node size divisible by hugepage size (last updated 2018-02-09 10:08:59 UTC)
oVirt gerrit 87381 (ovirt-engine-4.2, MERGED): webadmin: Make VM numa node size divisible by hugepage size (last updated 2018-03-01 10:23:04 UTC)

Description Sharon Gratch 2017-11-09 12:57:21 UTC
Description of problem:
Setting vNUMA pinning for a VM with two vNUMA nodes and an odd number of vCPUs does not automatically pin all vCPUs to physical CPUs as required. One vCPU is always left unpinned.

Version-Release number of selected component (if applicable):
4.2 master

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with 3 vCPUs and 2 vNUMA nodes pinned to 2 of the host's NUMA nodes.
2. Leave the 'CPU pinning topology' field in the 'Resource allocation' tab empty, so that the CPU pinning is calculated automatically by the vNUMA pinning algorithm.
3. Run the VM and check the libvirt XML to see the vCPU pinning topology (see the example command after this list).
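
For example, assuming the VM is named vm1 (a hypothetical name), the resulting pinning can be read on the host with:

virsh -r dumpxml vm1 | grep -A 4 '<cputune>'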


Actual results:
The third vCPU (#2) is not pinned at all. Only vCPUs #0 and #1 are pinned.

Expected results:
All 3 vCPUs should be pinned.
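
The following is a minimal sketch, in Java, of the distribution logic described in the Doc Text above; NumaCpuSplit and splitCpus are hypothetical names, not the actual ovirt-engine implementation:

import java.util.ArrayList;
import java.util.List;

public class NumaCpuSplit {
    // Sketch only, not the actual engine code: distribute cpuCount vCPUs
    // over nodeCount vNUMA nodes. The first (cpuCount % nodeCount) nodes
    // receive one extra vCPU, so no vCPU is left unpinned.
    static List<List<Integer>> splitCpus(int cpuCount, int nodeCount) {
        List<List<Integer>> nodes = new ArrayList<>();
        int base = cpuCount / nodeCount;
        int remainder = cpuCount % nodeCount;
        int cpu = 0;
        for (int n = 0; n < nodeCount; n++) {
            int size = base + (n < remainder ? 1 : 0);
            List<Integer> node = new ArrayList<>();
            for (int i = 0; i < size; i++) {
                node.add(cpu++);
            }
            nodes.add(node);
        }
        return nodes;
    }

    public static void main(String[] args) {
        // 3 vCPUs over 2 nodes: node 0 gets vCPUs [0, 1], node 1 gets [2].
        System.out.println(splitCpus(3, 2)); // prints [[0, 1], [2]]
    }
}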

Comment 1 Polina 2018-04-10 06:27:09 UTC
Verification scenario:

Host tab:
	Numa node count = 2
	Numa pinning tab:  pin vNuma to Numa nodes

System tab:
	Total vCPUs = 3

On Host:
numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 12 13 14 15 16 17
node 0 size: 12276 MB
node 0 free: 311 MB
node 1 cpus: 6 7 8 9 10 11 18 19 20 21 22 23
node 1 size: 12288 MB
node 1 free: 55 MB
node distances:
node   0   1 
  0:  10  21 
  1:  21  10

from VM xmldump:
 <cputune>
    <vcpupin vcpu='0' cpuset='0-5,12-17'/>
    <vcpupin vcpu='1' cpuset='0-5,12-17'/>
    <vcpupin vcpu='2' cpuset='6-11,18-23'/>
  </cputune>
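
This matches the expected distribution: vCPUs #0 and #1 are pinned to the CPU set of host NUMA node 0 (0-5,12-17) and vCPU #2 to the set of node 1 (6-11,18-23), so no vCPU is left unpinned.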

Comment 2 Sandro Bonazzola 2018-04-18 12:25:37 UTC
This bug is included in the oVirt 4.2.2 release, published on March 28th, 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

