Red Hat Bugzilla – Bug 1138545
Guest with NUMA topology cannot start when automatic NUMA placement is used
Last modified: 2015-03-05 02:44:00 EST
Description of problem:
A guest with a NUMA topology cannot start when automatic NUMA placement is used. According to the documentation, only <memnode> conflicts with automatic NUMA placement. If <numa> <cell> is also incompatible with automatic NUMA placement, an error in the parse phase would be better, just like the one for <memnode>.

Version-Release number of selected component (if applicable):
libvirt-1.2.8-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a NUMA host and add this config to a guest:

# virsh edit r7
...
  <memory unit='KiB'>1048576</memory>
  <vcpu placement='auto'>4</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
  <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='524288'/>
      <cell id='1' cpus='2-3' memory='524288'/>
    </numa>
  </cpu>
...

2. Start it:

# virsh start r7
error: Failed to start domain r7
error: internal error: Advice from numad is needed in case of automatic numa placement

Expected result:
If <numa> <cell> is also incompatible with automatic NUMA placement, an error in the parse phase would be better, just like the one for <memnode>.

Actual result:
The guest fails to start.
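For comparison, this is roughly what the <memnode> configuration that the documentation says conflicts with automatic placement looks like; libvirt rejects it already while parsing the XML, which is the behavior being requested for <numa> <cell> as well (a sketch; the memnode values are illustrative, not taken from this report):

  <numatune>
    <memory mode='strict' placement='auto'/>
    <memnode cellid='0' mode='strict' nodeset='0'/>
  </numatune>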
By the way, this happens on a NUMA host.
Well, you're right. If someone wants guest NUMA nodes, they are much better off using static placement, as automatic placement will most likely cause a performance drop. Anyway, it should still be available to those who are already using it.
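For reference, a minimal sketch of what the static-placement equivalent of the configuration above could look like (the cpuset and nodeset values are illustrative assumptions for some host, not taken from this report):

  <vcpu placement='static' cpuset='0-3'>4</vcpu>
  <numatune>
    <memory mode='strict' nodeset='0'/>
  </numatune>
  <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='524288'/>
      <cell id='1' cpus='2-3' memory='524288'/>
    </numa>
  </cpu>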
Fixed upstream by v1.2.10-2-g11a4875:

commit 11a48758a7d6c946062c130b6186ae3eadd58e39
Author:     Martin Kletzander <mkletzan@redhat.com>
AuthorDate: Thu Oct 30 07:34:30 2014 +0100

    qemu: make advice from numad available when building commandline
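For context, libvirt queries numad with the vCPU count and memory size in MiB, as the libvirtd.log excerpt in the verification below shows. The same query can be run by hand; the returned node set is numad's suggestion and varies per host (example output shown):

# numad -w 4:1024
1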
With the latest libvirt-1.2.8-7.el7.x86_64, configuring 'auto' placement for guest vcpus follows numad's suggestion on a NUMA host. The verification steps are:

1. Set auto placement for vcpus:

# virsh edit r71
...
  <memory unit='KiB'>1048576</memory>
  <vcpu placement='auto'>4</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
  <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='524288'/>
      <cell id='1' cpus='2-3' memory='524288'/>
    </numa>
  </cpu>
...

2. Start the guest:

# virsh start r71
Domain r71 started

3. Check libvirtd.log:

2014-11-19 09:57:33.478+0000: 15781: debug : virCommandRunAsync:2398 : About to run /bin/numad -w 4:1024
...
2014-11-19 09:57:35.485+0000: 15781: debug : qemuProcessStart:4297 : Nodeset returned from numad: 1

4. Check guest vcpu pinning:

# numactl --hard
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 65514 MB
node 0 free: 62337 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 65536 MB
node 1 free: 62187 MB
node distances:
node   0   1
  0:  10  11
  1:  11  10

# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2dr71.scope/vcpu0/cpuset.cpus
8-15,24-31

The guest is pinned to NUMA node 1, so I am changing the status to VERIFIED.
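Alternatively, the placement can be confirmed with virsh instead of reading the cgroup filesystem directly; a sketch, with output depending on the host and on numad's advice (the values below assume the nodeset 1 returned above):

# virsh vcpupin r71
VCPU: CPU Affinity
----------------------------------
   0: 8-15,24-31
   1: 8-15,24-31
   2: 8-15,24-31
   3: 8-15,24-31

# virsh numatune r71
numa_mode      : strict
numa_nodeset   : 1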
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-0323.html