Bug 1198096
Summary: changing the pinning with virsh setvcpupin does not work

| Product: | Red Hat Enterprise Linux 6 | Reporter: | Martin Tessun <mtessun> |
|---|---|---|---|
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 6.6 | CC: | dyuan, honzhang, jdenemar, jherrman, jsuchane, lhuang, mtessun, mzhan, pkrempa, pmanzell, rbalakri, rhodain |
| Target Milestone: | rc | Keywords: | Regression, ZStream |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-0.10.2-53.el6 | Doc Type: | Bug Fix |

Doc Text:

> Previously, when the default CPU mask was specified while using Non-Uniform Memory Access (NUMA) pinning, virtual CPUs (vCPUs) could not be pinned to physical CPUs that were not contained in the default node mask. With this update, the control groups (cgroups) code correctly attaches only vCPU threads instead of the entire domain group, and using NUMA pinning with the default cpuset subsystem now works as expected.

| Story Points: | --- | Clone Of: | |
|---|---|---|---|
| | 1207257 1209891 (view as bug list) | Environment: | |
| Last Closed: | 2015-07-22 05:48:38 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1207257, 1209891 |
Description
Martin Tessun
2015-03-03 11:48:11 UTC
I've posted an upstream fix for this issue:
http://www.redhat.com/archives/libvir-list/2015-March/msg01456.html

Fixed upstream:

```
commit f0fa9080d47b7aedad6f4884b8879d88688752a6
Author: Peter Krempa <pkrempa>
Date:   Fri Mar 27 10:23:19 2015 +0100

    qemu: cgroup: Properly set up vcpu pinning

    When the default cpuset or automatic numa placement is used libvirt
    would place the whole parent cgroup in the specified cpuset. This then
    disallowed to re-pin the vcpus to a different cpu.

    This patch pins only the vcpu threads to the default cpuset and thus
    allows to re-pin them later.

    The following config would fail to start:
    <domain type='kvm'>
      ...
      <vcpu placement='static' cpuset='0-1' current='2'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='2-3'/>
      ...

    This is a regression since a39f69d2b.
```

v1.2.14-4-gf0fa908

The patch for this bug introduces a memory leak.

Verified this bug with libvirt-0.10.2-53.el6.x86_64:

1. Use valgrind to track libvirtd:

```
# valgrind --leak-check=full libvirtd
==7025== Memcheck, a memory error detector
==7025== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
==7025== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
==7025== Command: libvirtd
==7025==
```

2. Open another terminal and start a VM:

```
# virsh dumpxml r6
...
<vcpu placement='static' cpuset='0-1'>1</vcpu>
...
# virsh start r6
Domain r6 started
```

3.
Check the cgroup settings:

```
# cgget -g cpuset /libvirt/qemu/r6
/libvirt/qemu/r6:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-7
# cgget -g cpuset /libvirt/qemu/r6/vcpu0
/libvirt/qemu/r6/vcpu0:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-1
# cgget -g cpuset /libvirt/qemu/r6/emulator
/libvirt/qemu/r6/emulator:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-1
```

4. Change the vcpupin range to CPUs outside the configured <cpuset>:

```
# virsh vcpupin r6 --vcpu 0 2-3
```

5.
Recheck the cgroups:

```
# cgget -g cpuset /libvirt/qemu/r6/vcpu0
/libvirt/qemu/r6/vcpu0:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 2-3
# cgget -g cpuset /libvirt/qemu/r6
/libvirt/qemu/r6:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-7
# cgget -g cpuset /libvirt/qemu/r6/emulator
/libvirt/qemu/r6/emulator:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-1
```

6. Check valgrind: no memory leak around this issue.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1252.html
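The verification above hinges on the cpuset hierarchy rule: a child cgroup's cpuset.cpus must be a subset of its parent's. Before the fix, the whole domain group was pinned to the default cpuset (0-1), so re-pinning vcpu0 to 2-3 could not succeed; after the fix, only the vcpu threads are constrained and the domain group keeps the host cpuset (0-7). A minimal Python sketch of that subset check (the helpers `parse_cpuset` and `repin_allowed` are hypothetical, not libvirt code):

```python
def parse_cpuset(spec):
    """Parse a cpuset.cpus string such as '0-1' or '0,2-3' into a set of CPU ids."""
    cpus = set()
    for part in spec.split(','):
        if '-' in part:
            lo, hi = part.split('-')
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

def repin_allowed(parent_cpus, requested):
    """The kernel only accepts a child cpuset that is a subset of the parent's."""
    return parse_cpuset(requested) <= parse_cpuset(parent_cpus)

# Before the fix: the domain group itself was pinned to the default cpuset '0-1',
# so writing '2-3' into the vcpu0 child cgroup was rejected.
assert not repin_allowed('0-1', '2-3')

# After the fix: the domain group keeps the full host cpuset '0-7', so
# 'virsh vcpupin r6 --vcpu 0 2-3' can move vcpu0 outside the default mask.
assert repin_allowed('0-7', '2-3')
```

This mirrors the cgget output in the verification steps: /libvirt/qemu/r6 stays at 0-7 while /libvirt/qemu/r6/vcpu0 moves from 0-1 to 2-3.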