Description of problem:
libvirtd does not validate the host nodes given in a <memnode> nodeset and passes them straight through to the qemu command line.

Version-Release number of selected component:
libvirt-1.2.8-15.el7.x86_64
qemu-kvm-rhev-2.1.2-21.el7.x86_64
kernel-3.10.0-223.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a NUMA host with two nodes:

# numactl --hard
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 65514 MB
node 0 free: 63084 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 65536 MB
node 1 free: 63172 MB
node distances:
node   0   1
  0:  10  11
  1:  11  10

2. Edit the domain XML so that one guest cell is bound to a host node that does not exist (nodeset='2'):

# virsh edit rhel7
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='auto' current='2'>5</vcpu>
<numatune>
  <memory mode='strict' nodeset='0-1'/>
  <memnode cellid='0' mode='strict' nodeset='1'/>
  <memnode cellid='1' mode='preferred' nodeset='2'/>
</numatune>
...
<cpu>
  <numa>
    <cell id='0' cpus='0-1' memory='1048576'/>
    <cell id='1' cpus='2-3' memory='1048576'/>
  </numa>
</cpu>
...

3. Start the guest:

# virsh start rhel7
error: Failed to start domain rhel7
error: internal error: process exited while connecting to monitor: 2015-01-26T07:49:16.560747Z qemu-kvm: -object memory-backend-ram,size=1024M,id=ram-node1,host-nodes=2,policy=preferred: cannot bind memory to host NUMA nodes: Invalid argument

The generated qemu command line is:

/usr/libexec/qemu-kvm -name rhel7 -S -machine pc-i440fx-rhel7.0.0,accel=kvm,usb=off -m 2048 -realtime mlock=off -smp 2,maxcpus=5,sockets=5,cores=1,threads=1 -object memory-backend-ram,size=1024M,id=ram-node0,host-nodes=1,policy=bind -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 -object memory-backend-ram,size=1024M,id=ram-node1,host-nodes=2,policy=preferred -numa node,nodeid=1,cpus=2-3,memdev=ram-node1 -uuid 1edfafc5-a55a-4396-9595-46e590bfc79a -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhel7.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive file=/mnt/jmiao/r71.img,if=none,id=drive-virtio-disk0,format=qcow2 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on

Expected result:
libvirt should report an invalid nodeset for <memnode> instead of passing it through to qemu.
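For reference, the nodeset attributes above use libvirt's range syntax ('1', '0-1', '0,2-3'). A minimal Python sketch of expanding such a string into a set of host node IDs (the function name is invented for illustration; this is not libvirt code):

```python
def parse_nodeset(nodeset):
    """Expand a libvirt-style nodeset string such as '1', '0-1',
    or '0,2-3' into a set of node IDs.  Illustrative sketch only."""
    nodes = set()
    for part in nodeset.split(','):
        if '-' in part:
            lo, hi = part.split('-')
            nodes.update(range(int(lo), int(hi) + 1))
        else:
            nodes.add(int(part))
    return nodes

print(parse_nodeset('0-1'))    # {0, 1}
print(parse_nodeset('0,2-3'))  # {0, 2, 3}
```

Once expanded, each node ID can be compared against the nodes the host actually has; in the configuration above, nodeset='2' expands to node 2, which the two-node host does not have.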
Fixed upstream with v1.2.10-29-gc63ef04:

commit c63ef0452b299899fbe55559b8d9e8818e91566d
Author: Martin Kletzander <mkletzan>
Date:   Thu Nov 6 12:16:54 2014 +0100

    numa: split util/ and conf/ and support non-contiguous nodesets
I can reproduce this bug with libvirt-1.2.8-16.el7.x86_64:

1. # virsh dumpxml test3
...
<numatune>
  <memnode cellid='0' mode='strict' nodeset='1'/>
</numatune>
...

2. # numactl --hard
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 7365 MB
node 0 free: 1867 MB
node distances:
node   0
  0:  10

3. # virsh start test3
error: Failed to start domain test3
error: internal error: process exited while connecting to monitor: 2015-05-19T06:55:19.631827Z qemu-kvm: -object memory-backend-ram,size=500M,id=ram-node0,host-nodes=1,policy=bind: cannot bind memory to host NUMA nodes: Invalid argument

And verify this bug with libvirt-1.2.15-2.el7.x86_64:

1. # virsh dumpxml test3
...
<numatune>
  <memnode cellid='0' mode='strict' nodeset='1'/>
</numatune>
...

2. # numactl --hard
available: 1 nodes (0)
node 0 cpus: 0 1 2 3
node 0 size: 7365 MB
node 0 free: 1867 MB
node distances:
node   0
  0:  10

3. # virsh start test3
error: Failed to start domain test3
error: unsupported configuration: NUMA node 1 is unavailable
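The fixed behavior amounts to checking each requested node against the set of nodes the host actually has before building the qemu command line, and failing early with a clear message. A hedged Python sketch of that kind of check (the function name is invented for illustration and is not libvirt's actual API):

```python
def check_nodeset_available(nodeset, available_nodes):
    """Reject any requested node the host does not have, mirroring
    the 'NUMA node N is unavailable' error seen in the verified
    build.  Illustrative sketch only, not libvirt code."""
    for node in sorted(nodeset):
        if node not in available_nodes:
            raise ValueError("NUMA node %d is unavailable" % node)

# Host from the verification above has a single node, node 0,
# while the <memnode> asks for node 1:
try:
    check_nodeset_available({1}, {0})
except ValueError as e:
    print(e)  # NUMA node 1 is unavailable
```

With this check in place, the error is reported by libvirt at start time rather than surfacing as qemu's opaque "cannot bind memory to host NUMA nodes: Invalid argument".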
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2015-2202.html