Bug 1294495

Summary: incorrect behavior when running a VM on a host that has NUMA nodes with indexes 0 and 1
Product: Red Hat Enterprise Linux 7
Reporter: Artyom <alukiano>
Component: libvirt
Assignee: Libvirt Maintainers <libvirt-maint>
Status: CLOSED NOTABUG
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 7.2
CC: alukiano, dyuan, lhuang, rbalakri
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-30 07:44:51 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Artyom 2015-12-28 13:29:35 UTC
Description of problem:
I run a VM with the following parameters:
<numatune>
    <memory mode='interleave' nodeset='0-1'/>
</numatune>
and I expect the VM process to have Mems_allowed_list:      0-1
but instead it has Mems_allowed_list:      0

I also tried it on a host with NUMA node indexes 0 and 2:
<numatune>
    <memory mode='interleave' nodeset='0,2'/>
</numatune>
and it works as expected:
Mems_allowed_list:      0,2
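
For reference, the nodeset libvirt thinks it applied can be read back with virsh (a minimal sketch; the domain name "vm1" is only a placeholder for the guest under test), which for the XML above should print something like:
# virsh numatune vm1
numa_mode      : interleave
numa_nodeset   : 0-1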

Version-Release number of selected component (if applicable):
# rpm -qa | grep libvirt
libvirt-daemon-driver-network-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-nodedev-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-storage-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-1.2.17-13.el7_2.2.ppc64le
libvirt-python-1.2.17-2.el7.ppc64le
libvirt-daemon-driver-secret-1.2.17-13.el7_2.2.ppc64le
libvirt-lock-sanlock-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-kvm-1.2.17-13.el7_2.2.ppc64le
libvirt-client-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-nwfilter-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-qemu-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-config-nwfilter-1.2.17-13.el7_2.2.ppc64le
libvirt-daemon-driver-interface-1.2.17-13.el7_2.2.ppc64le

How reproducible:
Always

Steps to Reproduce:
1. Start a VM on a host with NUMA nodes 0 and 1, with the parameters:
<numatune>
    <memory mode='interleave' nodeset='0-1'/>
</numatune>
2. Check the VM process's memory allowed list (see the sketch below these steps for locating the QEMU PID):
cat /proc/vm_pid/status | grep Mems_allowed_list
3.
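
A minimal sketch for step 2, assuming the QEMU pidfile sits in the default libvirt qemu driver location (adjust the domain name and path to the actual setup):
# vm_pid=$(cat /var/run/libvirt/qemu/<domain>.pid)   # PID of the guest's QEMU process
# grep Mems_allowed_list /proc/${vm_pid}/status      # nodeset the kernel actually allows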

Actual results:
Mems_allowed_list:      0

Expected results:
Mems_allowed_list:      0-1

Additional info:

Comment 2 Artyom 2015-12-30 07:44:51 UTC
My mistake. I checked the host where I ran my test again, and it has no memory on the second NUMA node, so I assume the behavior described above is correct and not a bug.
# numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 8 16 24 32
node 0 size: 32768 MB
node 0 free: 28681 MB
node 1 cpus: 40 48 56 64 72
node 1 size: 0 MB
node 1 free: 0 MB
node distances:
node   0   1 
  0:  10  20 
  1:  20  10
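
For anyone hitting the same confusion: a quick way to spot memoryless nodes before choosing a nodeset is to check the per-node meminfo in sysfs (standard path on NUMA hosts):
# grep MemTotal /sys/devices/system/node/node*/meminfo
A node that reports MemTotal 0 kB (node 1 here, matching the "size: 0 MB" line above) cannot contribute to the interleave set, which would explain why Mems_allowed_list only shows node 0.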