Bug 1198096 - changing the pinning with virsh setvcpupin does not work
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.6
Hardware: x86_64
OS: All
Priority: urgent
Severity: urgent
Target Milestone: rc
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Keywords: Regression, ZStream
Depends On:
Blocks: 1207257 1209891
 
Reported: 2015-03-03 11:48 UTC by Martin Tessun
Modified: 2019-06-13 08:16 UTC
CC List: 12 users

Doc Text:
Previously, when the default CPU mask was specified while using Non-Uniform Memory Access (NUMA) pinning, virtual CPUs (vCPUs) could not be pinned to physical CPUs that were not contained in the default node mask. With this update, the control groups (cgroups) code correctly attaches only vCPU threads instead of the entire domain group, and using NUMA pinning with the default cpuset subsystem now works as expected.
Clone Of:
Clones: 1207257 1209891
Last Closed: 2015-07-22 05:48:38 UTC




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:1252 normal SHIPPED_LIVE libvirt bug fix update 2015-07-20 17:50:06 UTC

Description Martin Tessun 2015-03-03 11:48:11 UTC
Description of problem:
Changing the CPU pinning of a virtual CPU is no longer possible if the new range lies outside the VM's previous range.

Version-Release number of selected component (if applicable):
libvirt-0.10.2-36.el6

How reproducible:
always

Steps to Reproduce:
1. Create a VM with one VCPU that is pinned to Cores 0-1, e.g.:
     <vcpu placement='static' cpuset='0-1'>1</vcpu>
2. Start the VM
3. Change the pinning to VCPU 2-3:
   # virsh vcpupin rhel65 --vcpu 0 2-3
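The cpuset ranges used above ('0-1', '2-3') follow the kernel's cpuset list format. A minimal sketch of how such a string maps to CPU numbers (a hypothetical helper for illustration, not part of libvirt):

```python
# Hypothetical helper (not libvirt code): parse a kernel cpuset list
# string such as '0-1' or '0,2-3,7' into a set of CPU numbers.
def parse_cpuset(spec):
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

print(sorted(parse_cpuset("0-1")))  # original pinning -> [0, 1]
print(sorted(parse_cpuset("2-3")))  # requested pinning -> [2, 3]
```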

Actual results:
An error is printed and the operation is not completed:

error: Requested operation is not valid: failed to set cpuset.cpus in cgroup for vcpu 0


Expected results:
The operation should complete without error.

Additional info:
Most probably this was introduced with https://bugzilla.redhat.com/show_bug.cgi?id=1012846.

Please note the changes to the cgroups:

Before RHEL 6.6:
/libvirt/qemu/testvm:
cpuset.cpus: 0-47
/libvirt/qemu/testvm/emulator:
cpuset.cpus: 0-11
/libvirt/qemu/testvm/vcpu0:
cpuset.cpus: 0-11
/libvirt/qemu/testvm/vcpu1:
cpuset.cpus: 0-11

With RHEL 6.6:
/libvirt/qemu/testvm:
cpuset.cpus: 0-11
/libvirt/qemu/testvm/emulator:
cpuset.cpus: 0-11
/libvirt/qemu/testvm/vcpu0:
cpuset.cpus: 0-11
/libvirt/qemu/testvm/vcpu1:
cpuset.cpus: 0-11

Changing the cpuset for the parent solves the problem:
# cgset -r cpuset.cpus=0-47 /libvirt/qemu/testvm
# virsh vcpupin testvm --vcpu 0 12-23

works.
So libvirt needs to change the parent cgroup first.
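The failure mode can be modeled in a few lines: the kernel rejects a child cpuset.cpus that is not a subset of its parent's, so while the domain cgroup is narrowed to 0-1 the repin must fail. This is a simulation only (no real cgroup I/O), with the class and values chosen for illustration:

```python
# Toy model of the cpuset cgroup hierarchy (simulation, not real
# cgroup I/O): a child's cpuset.cpus must stay within its parent's.
class Cgroup:
    def __init__(self, cpus, parent=None):
        self.cpus = set(cpus)
        self.parent = parent

    def set_cpus(self, cpus):
        cpus = set(cpus)
        if self.parent is not None and not cpus <= self.parent.cpus:
            raise PermissionError("cpuset.cpus outside parent cgroup")
        self.cpus = cpus

domain = Cgroup({0, 1})                # RHEL 6.6 layout: parent narrowed
vcpu0 = Cgroup({0, 1}, parent=domain)

try:
    vcpu0.set_cpus({2, 3})             # virsh vcpupin testvm --vcpu 0 2-3
except PermissionError as err:
    print("repin failed:", err)

domain.set_cpus(range(48))             # cgset -r cpuset.cpus=0-47 ...
vcpu0.set_cpus({2, 3})                 # now succeeds
print("vcpu0 pinned to", sorted(vcpu0.cpus))
```

Widening the parent first, exactly as the cgset workaround does, makes the same repin succeed.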

Comment 5 Peter Krempa 2015-03-27 13:20:07 UTC
I've posted an upstream fix for this issue:

http://www.redhat.com/archives/libvir-list/2015-March/msg01456.html

Comment 12 Peter Krempa 2015-04-02 08:28:38 UTC
Fixed upstream:

commit f0fa9080d47b7aedad6f4884b8879d88688752a6
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Fri Mar 27 10:23:19 2015 +0100

    qemu: cgroup: Properly set up vcpu pinning
    
    When the default cpuset or automatic numa placement is used libvirt
    would place the whole parent cgroup in the specified cpuset. This then
    disallowed to re-pin the vcpus to a different cpu.
    
    This patch pins only the vcpu threads to the default cpuset and thus
    allows to re-pin them later.
    
    The following config would fail to start:
    <domain type='kvm'>
      ...
      <vcpu placement='static' cpuset='0-1' current='2'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='2-3'/>
        ...
    
    This is a regression since a39f69d2b.

v1.2.14-4-gf0fa908
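The fixed setup order can be sketched as a simulation (not libvirt code; the 8-CPU host and the variable names are assumptions for illustration): the domain-level cgroup keeps the full host cpuset and only the per-vcpu child cgroups get the default cpuset, so a later vcpupin to any host CPU stays within the parent.

```python
# Simulated cgroup layout after the fix (not libvirt code).
HOST_CPUS = set(range(8))        # assumed 8-CPU host
DEFAULT_CPUSET = {0, 1}          # <vcpu placement='static' cpuset='0-1'>

domain_cpus = set(HOST_CPUS)             # parent cgroup left wide
vcpu_cpus = {0: set(DEFAULT_CPUSET)}     # only the vcpu thread is pinned

def vcpupin(vcpu, cpus):
    # the kernel only requires the new set to fit inside the parent
    if not set(cpus) <= domain_cpus:
        raise PermissionError("cpuset.cpus outside parent cgroup")
    vcpu_cpus[vcpu] = set(cpus)

vcpupin(0, {2, 3})               # succeeds with the fixed layout
print(sorted(vcpu_cpus[0]))      # -> [2, 3]
```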

Comment 20 Jiri Denemark 2015-04-10 11:39:25 UTC
The patch for this bug introduces a memory leak.

Comment 23 Luyao Huang 2015-04-13 02:39:31 UTC
Verify this bug with libvirt-0.10.2-53.el6.x86_64:

1. Use valgrind to track libvirtd:
# valgrind --leak-check=full libvirtd
==7025== Memcheck, a memory error detector
==7025== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
==7025== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
==7025== Command: libvirtd
==7025== 

2. Open another terminal and start a VM:

# virsh dumpxml r6
...
  <vcpu placement='static' cpuset='0-1'>1</vcpu>
...

# virsh start r6
Domain r6 started

3. Check the cgroup settings:

# cgget -g cpuset /libvirt/qemu/r6
/libvirt/qemu/r6:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-7

# cgget -g cpuset /libvirt/qemu/r6/vcpu0
/libvirt/qemu/r6/vcpu0:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-1

# cgget -g cpuset /libvirt/qemu/r6/emulator
/libvirt/qemu/r6/emulator:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-1

4. Change the vcpupin range to CPUs outside the <cpuset> range:

# virsh vcpupin r6 --vcpu 0 2-3


5. Recheck the cgroups:

# cgget -g cpuset /libvirt/qemu/r6/vcpu0
/libvirt/qemu/r6/vcpu0:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 2-3

# cgget -g cpuset /libvirt/qemu/r6
/libvirt/qemu/r6:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-7

# cgget -g cpuset /libvirt/qemu/r6/emulator
/libvirt/qemu/r6/emulator:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-1

6. Check the valgrind output: no memory leak related to this issue.

Comment 25 errata-xmlrpc 2015-07-22 05:48:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1252.html

