Bug 1294495 - Incorrect behavior when running a VM on a host that has NUMA nodes with indexes 0 and 1
Status: CLOSED NOTABUG
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Libvirt Maintainers
QA Contact: Virtualization Bugs
Depends On:
Blocks:
Reported: 2015-12-28 08:29 EST by Artyom
Modified: 2015-12-30 02:44 EST
CC List: 4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-30 02:44:51 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Artyom 2015-12-28 08:29:35 EST
Description of problem:
I run a VM with the following parameters:
<numatune>
    <memory mode='interleave' nodeset='0-1'/>
</numatune>
and I expect the VM process to have Mems_allowed_list:      0-1,
but instead it has Mems_allowed_list:      0.
I also tried it on a host with NUMA node indexes 0 and 2:
<numatune>
    <memory mode='interleave' nodeset='0,2'/>
</numatune>
and it works as expected:
Mems_allowed_list:      0,2
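
For reference, the effective numatune settings can also be read back through libvirt-python (installed on this host). A minimal sketch, assuming a running domain whose name is the placeholder 'testvm':

# Minimal sketch, assuming libvirt-python is available and a running
# domain named 'testvm' (placeholder; substitute the real domain name).
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('testvm')

# virDomainGetNumaParameters() reports the numatune mode and nodeset
# that libvirt believes are applied to the running domain.
params = dom.numaParameters(0)
print('numa_mode:   ', params.get('numa_mode'))      # 2 == interleave
print('numa_nodeset:', params.get('numa_nodeset'))   # expected '0-1'

conn.close()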

Version-Release number of selected component (if applicable):
# rpm -qa | grep libvirt
libvirt-daemon-driver-network-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-storage-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-1.2.17-13.el7_2.2.ppc64le
libvirt-python-1.2.17-2.el7.ppc64le
libvirt-daemon-driver-secret-1.2.17-13.el7_2.2.ppc64le
libvirt-lock-sanlock-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-kvm-1.2.17-13.el7_2.2.ppc64le
libvirt-client-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-interface-1.2.17-13.el7_2.2.ppc64le

How reproducible:
Always

Steps to Reproduce:
1. Start a VM with the following parameters on a host with NUMA nodes 0 and 1:
<numatune>
    <memory mode='interleave' nodeset='0-1'/>
</numatune>
2. Check the VM process's memory allowed list (see the sketch after these steps):
cat /proc/<vm_pid>/status | grep Mems_allowed_list
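
The check in step 2 can also be scripted. A minimal Python sketch; the domain name 'testvm' is a placeholder, and the pidfile path under /var/run/libvirt/qemu/ is the usual QEMU driver location but may differ on other setups:

# Minimal sketch: print the Mems_allowed_list of a running domain's
# QEMU process. 'testvm' is a placeholder domain name; the pidfile
# location is an assumption (typical for the libvirt QEMU driver).
domain = 'testvm'

with open('/var/run/libvirt/qemu/%s.pid' % domain) as f:
    pid = f.read().strip()

# /proc/<pid>/status exposes the cpuset-derived memory node mask.
with open('/proc/%s/status' % pid) as f:
    for line in f:
        if line.startswith('Mems_allowed_list'):
            print(line.strip())   # e.g. "Mems_allowed_list: 0-1"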

Actual results:
Mems_allowed_list:      0

Expected results:
Mems_allowed_list:      0-1

Additional info:
Comment 2 Artyom 2015-12-30 02:44:51 EST
My mistake. I checked the host where I ran the test again, and it has no memory on the second NUMA node, so I assume the behavior described above is correct and this is not a bug.
# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 8 16 24 32
node 0 size: 32768 MB
node 0 free: 28681 MB
node 1 cpus: 40 48 56 64 72
node 1 size: 0 MB
node 1 free: 0 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10
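
This matches the output above: node 1 reports 0 MB, so only node 0 ends up in the effective memory mask. A quick way to confirm which nodes actually have memory is to read the standard sysfs node entries; a minimal Python sketch (assumes the usual /sys/devices/system/node layout):

# Minimal sketch: list which NUMA nodes actually have memory by reading
# /sys/devices/system/node/node*/meminfo ("Node N MemTotal: X kB" lines).
import glob
import re

for path in sorted(glob.glob('/sys/devices/system/node/node*/meminfo')):
    node = re.search(r'node(\d+)', path).group(1)
    with open(path) as f:
        for line in f:
            if 'MemTotal' in line:
                kb = int(line.split()[-2])
                print('node %s: MemTotal %d kB%s'
                      % (node, kb, '' if kb else '  (no memory)'))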
