Bug 1221047
| Summary: | 'virsh numatune DomName' shows incorrect numatune node set | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Gu Nini <ngu> |
| Component: | libvirt | Assignee: | Martin Kletzander <mkletzan> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.2 | CC: | dyuan, dzheng, gsun, lhuang, michen, mkletzan, rbalakri, xuhan, ypu, zhengtli, zhwang |
| Target Milestone: | rc | Keywords: | Upstream |
| Target Release: | --- | | |
| Hardware: | ppc64le | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-1.2.16-1.el7 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-11-19 06:31:04 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description Gu Nini 2015-05-13 08:31:51 UTC
This is a result of:

commit af2a1f0587d88656f2c14265a63fbc11ecbd924e
Author:     Martin Kletzander <mkletzan>
AuthorDate: Fri Dec 12 15:29:48 2014 +0100
Commit:     Martin Kletzander <mkletzan>
CommitDate: Tue Dec 16 11:15:27 2014 +0100

    qemu: Leave cpuset.mems in parent cgroup alone

    Instead of setting the value of cpuset.mems once when the domain starts
    and then re-calculating the value every time we need to change the child
    cgroup values, leave the cgroup alone and rather set the child data
    every time there is new cgroup created. We don't leave any task in the
    parent group anyway. This will ease both current and future code.

    Signed-off-by: Martin Kletzander <mkletzan>

I'm investigating further.

Patch posted upstream:
https://www.redhat.com/archives/libvir-list/2015-May/msg00585.html

Fixed upstream with v1.2.15-113-g9deb96f:

commit 9deb96f9f0c061f21a1f37bdbb4f3820795343d6
Author: Martin Kletzander <mkletzan>
Date:   Mon May 18 14:55:10 2015 -0700

    qemu: Fix numatune nodeset reporting

Verified with the following packages:

libvirt-1.2.17-2.el7.ppc64le
kernel-3.10.0-292.el7.ppc64le
qemu-kvm-rhev-2.3.0-9.el7.ppc64le

Test 1: Get numa info (Pass)

1.1 Check the host numa topology:

# numactl --show
policy: default
preferred node: current
physcpubind: 0 8 16 24 32 40 48 56 64 72 80 88 96 104 112 120 128 136 144 152
cpubind: 0 1 16 17
nodebind: 0 1 16 17
membind: 0 1 16 17

1.2 Edit the guest XML as below and start the guest:

<vcpu placement='static'>2</vcpu>
<numatune>
  <memory mode='strict' nodeset='0-1'/>
  <memnode cellid='0' mode='strict' nodeset='0'/>
  <memnode cellid='1' mode='preferred' nodeset='1'/>
</numatune>
<numa>
  <cell id='0' cpus='0' memory='1024000' unit='KiB'/>
  <cell id='1' cpus='1' memory='1024000' unit='KiB'/>
</numa>

1.3 Check the reported nodeset and the cgroup values:

# virsh numatune dzhengvm2
numa_mode      : strict
numa_nodeset   : 0-1    ===> Correct

# cgget -g cpuset /machine.slice/machine-qemu\\x2ddzhengvm2.scope
/machine.slice/machine-qemu\x2ddzhengvm2.scope:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0-1,16-17    ===> Correct
cpuset.cpus: 0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152

# cgget -g cpuset /machine.slice/machine-qemu\\x2ddzhengvm2.scope/emulator
/machine.slice/machine-qemu\x2ddzhengvm2.scope/emulator:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0-1    ===> Correct
cpuset.cpus: 0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152

# cgget -g cpuset /machine.slice/machine-qemu\\x2ddzhengvm2.scope/vcpu0
/machine.slice/machine-qemu\x2ddzhengvm2.scope/vcpu0:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0-1    ===> Correct
cpuset.cpus: 0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152

# cgget -g cpuset /machine.slice/machine-qemu\\x2ddzhengvm2.scope/vcpu1
/machine.slice/machine-qemu\x2ddzhengvm2.scope/vcpu1:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0-1    ===> Correct
cpuset.cpus: 0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152

# cat /proc/`pidof -s qemu-kvm`/status
...
Cpus_allowed: 01010101,01010101,01010101,01010101,01010101
Cpus_allowed_list: 0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152
Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000003
Mems_allowed_list: 0-1    ===> Correct
voluntary_ctxt_switches: 48066
nonvoluntary_ctxt_switches: 3
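As a cross-check of the cgget output above, the same values can be read straight from the cpuset controller in sysfs, which is effectively what cgget reports. A minimal sketch, assuming the cgroup v1 cpuset hierarchy is mounted at /sys/fs/cgroup/cpuset (the RHEL 7 default) and the guest name dzhengvm2 used in these tests:

# Dump cpuset.mems for the scope and each child cgroup; the directory name
# keeps the literal '\x2d' escape that systemd uses for '-'.
scope='machine.slice/machine-qemu\x2ddzhengvm2.scope'
for child in '' /emulator /vcpu0 /vcpu1; do
    echo "== ${scope}${child} =="
    cat "/sys/fs/cgroup/cpuset/${scope}${child}/cpuset.mems"
done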
Test 2: Set numa info (Pass)

2.1 Change the nodeset on the running guest:

# virsh numatune dzhengvm2 --nodeset 1

2.2 Check the cgroup values and the reported nodeset:

# cgget -g cpuset /machine.slice/machine-qemu\\x2ddzhengvm2.scope/vcpu1
/machine.slice/machine-qemu\x2ddzhengvm2.scope/vcpu1:
...
cpuset.mems: 1

# cgget -g cpuset /machine.slice/machine-qemu\\x2ddzhengvm2.scope/vcpu0
/machine.slice/machine-qemu\x2ddzhengvm2.scope/vcpu0:
cpuset.cpu_exclusive: 0
cpuset.mems: 1
cpuset.cpus: 0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152

# cgget -g cpuset /machine.slice/machine-qemu\\x2ddzhengvm2.scope
/machine.slice/machine-qemu\x2ddzhengvm2.scope:
...
cpuset.mems: 0-1,16-17
cpuset.cpus: 0,8,16,24,32,40,48,56,64,72,80,88,96,104,112,120,128,136,144,152

# virsh numatune dzhengvm2
numa_mode      : strict
numa_nodeset   : 1

Test 3: Change numa mode (Pass)

3.1 Start a guest with numa_mode strict.

3.2 # virsh numatune dzhengvm2 --mode 1
error: Unable to change numa parameters
error: Requested operation is not valid: can't change numatune mode for running domain

3.3 Destroy the guest.

3.4 # virsh numatune dzhengvm2 --mode 1
# virsh numatune dzhengvm2
numa_mode      : preferred
numa_nodeset   : 1

Test 4: --config (Pass)

4.1 Start a guest:

# virsh numatune dzhengvm2
numa_mode      : strict
numa_nodeset   : 1

4.2 # virsh numatune dzhengvm2 --nodeset 0 --config
# virsh numatune dzhengvm2
numa_mode      : strict
numa_nodeset   : 1

4.3 Restart the guest and check again:

# virsh numatune dzhengvm2
numa_mode      : strict
numa_nodeset   : 0

Test 5: --live for running and destroyed guest (Pass)

5.1 Start a guest:

# virsh numatune dzhengvm2
numa_mode      : strict
numa_nodeset   : 0

# virsh numatune dzhengvm2 --nodeset 1 --live
# virsh numatune dzhengvm2
numa_mode      : strict
numa_nodeset   : 1

# virsh destroy dzhengvm2
Domain dzhengvm2 destroyed

# virsh numatune dzhengvm2 --nodeset 0 --live
error: Unable to change numa parameters
error: Requested operation is not valid: domain is not running

Test 6: --current for running and destroyed guest (Pass)

6.1 With a running guest:

# virsh numatune dzhengvm2
numa_mode      : strict
numa_nodeset   : 1

# virsh numatune dzhengvm2 --nodeset 0 --current
# virsh numatune dzhengvm2
numa_mode      : strict
numa_nodeset   : 0

# virsh destroy dzhengvm2
Domain dzhengvm2 destroyed

# virsh numatune dzhengvm2 --nodeset 1 --current
# virsh numatune dzhengvm2
numa_mode      : strict
numa_nodeset   : 1

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html
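For quick re-verification, the manual --live cycle exercised in Tests 2 and 5 above can be scripted. A minimal sketch, assuming a running guest named dzhengvm2 with nodes 0 and 1 available; it flips the nodeset and confirms virsh reports the value back:

# Toggle the live nodeset and verify the reported value matches what was set.
for ns in 1 0; do
    virsh numatune dzhengvm2 --nodeset "$ns" --live
    got=$(virsh numatune dzhengvm2 | awk '/numa_nodeset/ {print $3}')
    [ "$got" = "$ns" ] && echo "nodeset $ns: OK" || echo "nodeset $ns: got '$got'"
done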