Bug 1294495 - Incorrect behavior when running a VM on a host that has NUMA nodes with indexes 0 and 1
Summary: Incorrect behavior when running a VM on a host that has NUMA nodes with indexes 0 and 1
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
: ---
Assignee: Libvirt Maintainers
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-12-28 13:29 UTC by Artyom
Modified: 2015-12-30 07:44 UTC (History)
4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-30 07:44:51 UTC
Target Upstream Version:



Description Artyom 2015-12-28 13:29:35 UTC
Description of problem:
I run a VM with the following parameters:
<numatune>
    <memory mode='interleave' nodeset='0-1'/>
</numatune>
and I expect the VM process to have Mems_allowed_list:      0-1,
but instead it has Mems_allowed_list:      0.
I also tried it on a host with NUMA node indexes 0 and 2:
<numatune>
    <memory mode='interleave' nodeset='0,2'/>
</numatune>
and it works as expected:
Mems_allowed_list:      0,2
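
For reference, the nodeset attribute accepts both range syntax ('0-1') and list syntax ('0,2'). A minimal sketch of how such a string expands into node indexes (illustrative helper only, not libvirt's actual parser):

```python
def parse_nodeset(nodeset):
    """Expand a libvirt-style nodeset string (e.g. '0-1', '0,2',
    '0,2-4') into a sorted list of NUMA node indexes.
    Illustrative helper only, not libvirt code."""
    nodes = set()
    for part in nodeset.split(','):
        if '-' in part:
            lo, hi = part.split('-')
            nodes.update(range(int(lo), int(hi) + 1))
        else:
            nodes.add(int(part))
    return sorted(nodes)

print(parse_nodeset('0-1'))  # [0, 1]
print(parse_nodeset('0,2'))  # [0, 2]
```

So both configurations above name two nodes; the difference in the observed Mems_allowed_list comes from the host, not the syntax.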

Version-Release number of selected component (if applicable):
# rpm -qa | grep libvirt
libvirt-daemon-driver-network-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-storage-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-1.2.17-13.el7_2.2.ppc64le
libvirt-python-1.2.17-2.el7.ppc64le
libvirt-daemon-driver-secret-1.2.17-13.el7_2.2.ppc64le
libvirt-lock-sanlock-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-kvm-1.2.17-13.el7_2.2.ppc64le
libvirt-client-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-interface-1.2.17-13.el7_2.2.ppc64le

How reproducible:
Always

Steps to Reproduce:
1. Start a VM on a host with NUMA nodes 0 and 1 with the following parameters:
<numatune>
    <memory mode='interleave' nodeset='0-1'/>
</numatune>
2. Check the VM's allowed memory list:
grep Mems_allowed_list /proc/<vm_pid>/status

Actual results:
Mems_allowed_list:      0

Expected results:
Mems_allowed_list:      0-1

Additional info:

Comment 2 Artyom 2015-12-30 07:44:51 UTC
My mistake. I checked the host where I ran my test again, and it has no memory on the second NUMA node, so I assume the behavior described above is correct and not a bug.
# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 8 16 24 32
node 0 size: 32768 MB
node 0 free: 28681 MB
node 1 cpus: 40 48 56 64 72
node 1 size: 0 MB
node 1 free: 0 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10
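
The kernel keeps only nodes that actually have memory in a task's Mems_allowed, and node 1 here reports 0 MB, so interleaving over '0-1' collapses to node 0. A small sketch of that filtering, using the sizes from the numactl -H output above (hypothetical helper, not kernel code):

```python
def nodes_with_memory(node_sizes_mb):
    """Given a mapping of NUMA node index -> memory size in MB,
    return the nodes that would remain in Mems_allowed,
    i.e. those with any memory. Illustrative only."""
    return sorted(node for node, size_mb in node_sizes_mb.items() if size_mb > 0)

# Sizes from the numactl -H output: node 0 has 32768 MB, node 1 has 0 MB.
print(nodes_with_memory({0: 32768, 1: 0}))  # [0]
```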

