Bug 1745247 - NUMA pinning is ignoring free hugepages for VMs using both
Summary: NUMA pinning is ignoring free hugepages for VMs using both
Keywords:
Status: CLOSED DUPLICATE of bug 1720558
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: General
Version: 4.3.5.5
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: bugs@ovirt.org
QA Contact: meital avital
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-08-24 10:44 UTC by Ralf Schenk
Modified: 2019-08-25 03:28 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2019-08-25 03:28:27 UTC
oVirt Team: Virt
Embargoed:



Description Ralf Schenk 2019-08-24 10:44:12 UTC
Description of problem:
The scheduler ignores the free hugepages of NUMA nodes when NUMA pinning and hugepages (1G) are used for a VM. Only free memory is taken into the calculation, which is too low per NUMA node since most of it is reserved for hugepages.
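
For illustration, here is a minimal Python sketch of the suspected accounting difference (the helper names are hypothetical and this is not the actual ovirt-engine filter code, which is Java; the sysfs paths are the real ones queried in the additional info below):

from pathlib import Path

NODE = Path("/sys/devices/system/node")
HP_KB = 1048576  # 1 GiB hugepages, as used in this report

def fits_naive(node: int, need_bytes: int) -> bool:
    # What the scheduler appears to do: look only at MemFree, which is
    # small here because most RAM is reserved for hugepages.
    meminfo = (NODE / f"node{node}" / "meminfo").read_text()
    free_kb = next(int(line.split()[3])
                   for line in meminfo.splitlines() if "MemFree:" in line)
    return free_kb * 1024 >= need_bytes

def fits_hugepage_vm(node: int, need_bytes: int) -> bool:
    # What a hugepage-backed VM actually needs: enough free hugepages
    # on the pinned node, independent of ordinary MemFree.
    free_pages = int((NODE / f"node{node}" / "hugepages" /
                      f"hugepages-{HP_KB}kB" / "free_hugepages").read_text())
    return free_pages * HP_KB * 1024 >= need_bytes

With 24 GB of a node's 32 GB reserved as hugepages, fits_naive() fails for an 8 GB slice while fits_hugepage_vm() succeeds, which matches the observed filter behavior.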

Version-Release number of selected component (if applicable):
ovirt-engine-4.3.4.3-1.el7.noarch
vdsm-4.30.17-1.el7.x86_64
ovirt-release-host-node-4.3.4-1.el7.noarch

How reproducible:
Reserve 3/4 of RAM for hugepages (1G), then try to pin a VM with larger memory requirements (e.g. 32 GB) to NUMA nodes.

Steps to Reproduce:
1. My EPYC 7281 based servers (dual socket) have 8 NUMA nodes, each with 32 GB of memory, for a total of 256 GB of system memory
2. Reserve 192 x 1 GB hugepages on the kernel cmdline: default_hugepagesz=1G hugepagesz=1G hugepages=192. This reserves 24 hugepages on each NUMA node.
3. Pin a VM using 32 GB (custom property hugepages=1048576) to NUMA nodes 0-3 of CPU socket 1
4. Start VM

Actual results:
The VM can't be started. Error message in the UI:
"The host foo did not satisfy internal filter NUMA because cannot accommodate memory of VM's pinned virtual NUMA nodes within host's physical NUMA nodes"

Expected results:
The VM should start and use 8 hugepages = 8 GB per NUMA node on nodes 0-3, for 32 GB of memory
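
For clarity, the expected placement arithmetic as a worked Python check (values taken from this report; assumes the VM's memory is split evenly across the pinned nodes):

vm_mem_gib = 32
pinned_nodes = [0, 1, 2, 3]
pages_per_node = vm_mem_gib // len(pinned_nodes)  # 8 x 1 GiB pages per node
free_per_node = {0: 24, 1: 22, 2: 22, 3: 24}      # from the sysfs output below
assert all(free_per_node[n] >= pages_per_node for n in pinned_nodes)
# 8 <= 22..24 free pages on every pinned node, so placement should succeed.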


Additional info:
The system has enough free hugepages on NUMA nodes 0-3:
grep "" /sys/devices/system/node/*/hugepages/hugepages-1048576kB/free_hugepages
/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages:24
/sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages:22
/sys/devices/system/node/node2/hugepages/hugepages-1048576kB/free_hugepages:22
/sys/devices/system/node/node3/hugepages/hugepages-1048576kB/free_hugepages:24
/sys/devices/system/node/node4/hugepages/hugepages-1048576kB/free_hugepages:22
/sys/devices/system/node/node5/hugepages/hugepages-1048576kB/free_hugepages:14
/sys/devices/system/node/node6/hugepages/hugepages-1048576kB/free_hugepages:17
/sys/devices/system/node/node7/hugepages/hugepages-1048576kB/free_hugepages:19
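
As a convenience, a small Python sketch (a hypothetical helper, not part of vdsm or ovirt-engine) that parses the grep output above into per-node counts, which could feed a check like the one sketched under the expected results:

import re

def parse_free_hugepages(grep_output: str) -> dict[int, int]:
    # Maps NUMA node number -> free 1 GiB hugepages, e.g. {0: 24, 1: 22, ...}
    counts = {}
    pattern = re.compile(r"node(\d+)/hugepages/hugepages-1048576kB/"
                         r"free_hugepages:(\d+)")
    for line in grep_output.splitlines():
        m = pattern.search(line)
        if m:
            counts[int(m.group(1))] = int(m.group(2))
    return counts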

This was already reported in https://bugzilla.redhat.com/show_bug.cgi?id=1720558, but that bug's description changed and no longer shows the root cause of the problem.

Comment 1 Ryan Barry 2019-08-25 03:28:27 UTC
Sure, but the root cause is still the same as that bug. Closing as a duplicate, and we'll track there.

*** This bug has been marked as a duplicate of bug 1720558 ***

