Description of problem:
Changing the CPU pinning of a virtual CPU is no longer possible if the new range is outside the previous range of the VM.

Version-Release number of selected component (if applicable):
libvirt-0.10.2-36.el6

How reproducible:
always

Steps to Reproduce:
1. Create a VM with one VCPU that is pinned to cores 0-1, e.g.:
   <vcpu placement='static' cpuset='0-1'>1</vcpu>
2. Start the VM
3. Change the pinning of vcpu 0 to host CPUs 2-3:
   # virsh vcpupin rhel65 --vcpu 0 2-3

Actual results:
An error is printed and the operation is not completed:
error: Requested operation is not valid: failed to set cpuset.cpus in cgroup for vcpu 0

Expected results:
The operation should complete without error.

Additional info:
Most probably this was introduced with https://bugzilla.redhat.com/show_bug.cgi?id=1012846. Please note the changes to the cgroups:

Before rhel6.6:
/libvirt/qemu/testvm:          cpuset.cpus: 0-47
/libvirt/qemu/testvm/emulator: cpuset.cpus: 0-11
/libvirt/qemu/testvm/vcpu0:    cpuset.cpus: 0-11
/libvirt/qemu/testvm/vcpu1:    cpuset.cpus: 0-11

With rhel6.6:
/libvirt/qemu/testvm:          cpuset.cpus: 0-11
/libvirt/qemu/testvm/emulator: cpuset.cpus: 0-11
/libvirt/qemu/testvm/vcpu0:    cpuset.cpus: 0-11
/libvirt/qemu/testvm/vcpu1:    cpuset.cpus: 0-11

Changing the cpuset for the parent solves the problem:
# cgset -r cpuset.cpus=0-47 /libvirt/qemu/testvm
# virsh vcpupin testvm --vcpu 0 12-23
works. So libvirt needs to change the parent cgroup first.
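For context on the error: the kernel requires that a child cgroup's cpuset.cpus be a subset of its parent's, so once libvirt narrows the parent group to 0-11, any attempt to pin a vcpu outside that range is rejected at the cgroup layer. A minimal sketch of the underlying constraint using raw cgroupfs writes, assuming a cgroup v1 cpuset controller mounted at /cgroup/cpuset (the RHEL 6 default; the "demo"/"child" group names are hypothetical):

# mkdir -p /cgroup/cpuset/demo/child
# echo 0-1 > /cgroup/cpuset/demo/cpuset.cpus
# echo 2-3 > /cgroup/cpuset/demo/child/cpuset.cpus
-bash: echo: write error: Invalid argument

The child write fails with EINVAL, which is the same condition libvirt surfaces as "failed to set cpuset.cpus in cgroup" above.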
I've posted an upstream fix for this issue: http://www.redhat.com/archives/libvir-list/2015-March/msg01456.html
Fixed upstream:

commit f0fa9080d47b7aedad6f4884b8879d88688752a6
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Fri Mar 27 10:23:19 2015 +0100

    qemu: cgroup: Properly set up vcpu pinning

    When the default cpuset or automatic numa placement is used libvirt
    would place the whole parent cgroup in the specified cpuset. This then
    disallowed to re-pin the vcpus to a different cpu.

    This patch pins only the vcpu threads to the default cpuset and thus
    allows to re-pin them later.

    The following config would fail to start:
    <domain type='kvm'>
      ...
      <vcpu placement='static' cpuset='0-1' current='2'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='2-3'/>
      ...

    This is a regression since a39f69d2b.

v1.2.14-4-gf0fa908
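With this fix the parent cgroup keeps the full host cpuset and only the vcpu threads are restricted, so repinning outside the original range goes through. A quick sanity check on a fixed build, reusing the commands from the report (the "testvm" domain name and the 2-3 range are taken from the reproducer; actual host ranges will differ):

# virsh vcpupin testvm --vcpu 0 2-3
# cgget -r cpuset.cpus /libvirt/qemu/testvm
# cgget -r cpuset.cpus /libvirt/qemu/testvm/vcpu0

The parent group should still show the full host range, while vcpu0 should now show 2-3.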
The patch for this bug introduces a memory leak.
Verified this bug with libvirt-0.10.2-53.el6.x86_64:

1. Use valgrind to track libvirtd:
# valgrind --leak-check=full libvirtd
==7025== Memcheck, a memory error detector
==7025== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
==7025== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
==7025== Command: libvirtd
==7025==

2. Open another terminal and start a VM:
# virsh dumpxml r6
...
<vcpu placement='static' cpuset='0-1'>1</vcpu>
...
# virsh start r6
Domain r6 started

3. Check the cgroup settings:
# cgget -g cpuset /libvirt/qemu/r6
/libvirt/qemu/r6:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-7

# cgget -g cpuset /libvirt/qemu/r6/vcpu0
/libvirt/qemu/r6/vcpu0:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-1

# cgget -g cpuset /libvirt/qemu/r6/emulator
/libvirt/qemu/r6/emulator:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-1

4. Change the vcpupin range to CPUs outside of <cpuset>:
# virsh vcpupin r6 --vcpu 0 2-3

5. Recheck the cgroups:
# cgget -g cpuset /libvirt/qemu/r6/vcpu0
/libvirt/qemu/r6/vcpu0:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 2-3

# cgget -g cpuset /libvirt/qemu/r6
/libvirt/qemu/r6:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-7

# cgget -g cpuset /libvirt/qemu/r6/emulator
/libvirt/qemu/r6/emulator:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-1

6. Check valgrind: no memory leak around this issue.
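One note on step 6: valgrind only prints its leak report once the traced process exits, so stop libvirtd in the first terminal (Ctrl-C) before reading the results. If the output was redirected to a file, say a hypothetical valgrind.log, the relevant section can be pulled out with:

# grep -A6 "LEAK SUMMARY" valgrind.log

The "definitely lost" entries should show no leak attributable to the vcpu pinning path.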
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1252.html