Bug 1207257 - changing the pinning with virsh vcpupin does not work
Summary: changing the pinning with virsh vcpupin does not work
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: x86_64
OS: All
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1198096
Blocks:
 
Reported: 2015-03-30 14:16 UTC by Peter Krempa
Modified: 2019-06-13 08:21 UTC
CC List: 9 users

Fixed In Version: libvirt-1.2.15-1.el7
Doc Type: Bug Fix
Doc Text:
Previously, when the default CPU mask was specified while using Non-Uniform Memory Access (NUMA) pinning, virtual CPUs (vCPUs) could not be pinned to physical CPUs that were not contained in the default node mask. With this update, the control groups (cgroups) code correctly attaches only vCPU threads instead of the entire domain group, and using NUMA pinning with the default cpuset subsystem now works as expected.
Clone Of: 1198096
Environment:
Last Closed: 2015-11-19 06:26:24 UTC
Target Upstream Version:
Embargoed:


Links
  System:       Red Hat Product Errata
  ID:           RHBA-2015:2202
  Private:      0
  Priority:     normal
  Status:       SHIPPED_LIVE
  Summary:      libvirt bug fix and enhancement update
  Last Updated: 2015-11-19 08:17:58 UTC

Description Peter Krempa 2015-03-30 14:16:30 UTC
+++ This bug was initially created as a clone of Bug #1198096 +++

Description of problem:
Changing the CPU pinning of a virtual CPU is no longer possible if the new range is outside the VM's previous range.

Version-Release number of selected component (if applicable):
libvirt-0.10.2-36.el6
libvirt-1.2.13 (upstream)

How reproducible:
always

Steps to Reproduce:
1. Create a VM with one vCPU pinned to host CPUs 0-1, e.g. (a fuller minimal XML sketch follows these steps):
     <vcpu placement='static' cpuset='0-1'>1</vcpu>
2. Start the VM.
3. Change the pinning of vCPU 0 to host CPUs 2-3:
   # virsh vcpupin rhel65 --vcpu 0 2-3
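
For reference, a minimal domain XML sketch showing where the cpuset attribute sits (the guest name rhel65 matches the command above; the memory size, os, and devices sections are illustrative placeholders, not taken from this report):

<domain type='kvm'>
  <name>rhel65</name>
  <memory unit='MiB'>1024</memory>
  <vcpu placement='static' cpuset='0-1'>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <!-- disk, console, and other devices omitted -->
  </devices>
</domain>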

Actual results:
An error is printed and the operation is not completed:

error: Requested operation is not valid: failed to set cpuset.cpus in cgroup for vcpu 0


Expected results:
The operation should complete without error.

Additional info:
This was most probably introduced by upstream commit a39f69d2bb5494d661be917956baa437d01a4d13.

Please note the changes to the cgroups:

Before RHEL 6.6:
/libvirt/qemu/testvm:
cpuset.cpus: 0-47
/libvirt/qemu/testvm/emulator:
cpuset.cpus: 0-11
/libvirt/qemu/testvm/vcpu0:
cpuset.cpus: 0-11
/libvirt/qemu/testvm/vcpu1:
cpuset.cpus: 0-11

With RHEL 6.6:
/libvirt/qemu/testvm:
cpuset.cpus: 0-11
/libvirt/qemu/testvm/emulator:
cpuset.cpus: 0-11
/libvirt/qemu/testvm/vcpu0:
cpuset.cpus: 0-11
/libvirt/qemu/testvm/vcpu1:
cpuset.cpus: 0-11

Changing the cpuset of the parent cgroup resolves the problem:
# cgset -r cpuset.cpus=0-47 /libvirt/qemu/testvm
# virsh vcpupin testvm --vcpu 0 12-23

After widening the parent, the repinning works, so libvirt needs to change the parent cgroup first.
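
A quick way to confirm an affected build is hitting this variant (a diagnostic sketch using the libcgroup tools already shown above; cgroup v1 paths for the RHEL 6 layout, which may differ on other setups):

# the repin fails whenever the requested mask is not a subset of the parent's
# cpuset.cpus, so compare the two before retrying
cgget -n -r cpuset.cpus /libvirt/qemu/testvm        # parent (domain) mask
cgget -n -r cpuset.cpus /libvirt/qemu/testvm/vcpu0  # per-vCPU mask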

--- Additional comment from Peter Krempa on 2015-03-27 14:20:07 CET ---

I've posted an upstream fix for this issue:

http://www.redhat.com/archives/libvir-list/2015-March/msg01456.html

Comment 1 Peter Krempa 2015-04-02 08:28:44 UTC
Fixed upstream:

commit f0fa9080d47b7aedad6f4884b8879d88688752a6
Author: Peter Krempa <pkrempa>
Date:   Fri Mar 27 10:23:19 2015 +0100

    qemu: cgroup: Properly set up vcpu pinning
    
    When the default cpuset or automatic numa placement is used libvirt
    would place the whole parent cgroup in the specified cpuset. This then
    disallowed to re-pin the vcpus to a different cpu.
    
    This patch pins only the vcpu threads to the default cpuset and thus
    allows to re-pin them later.
    
    The following config would fail to start:
    <domain type='kvm'>
      ...
      <vcpu placement='static' cpuset='0-1' current='2'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='2-3'/>
        ...
    
    This is a regression since a39f69d2b.

v1.2.14-4-gf0fa908
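
Conceptually, the fix changes what gets confined to the default cpuset. A rough shell-level illustration follows; this is not libvirt source code, the systemd scope path follows the RHEL 7 layout shown in comment 3 below, and VCPU_TID is a placeholder for a qemu vCPU thread ID:

# the scope name contains a literal "\x2d" (systemd escaping for "-")
SCOPE=/sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2dtest3.scope

# before the fix: libvirt wrote the default cpuset into the whole scope,
# capping every child vcpu group at that mask and blocking later repins
echo 0-1 > "$SCOPE/cpuset.cpus"

# after the fix: the scope keeps a wide mask; only the vcpu thread (its TID,
# written to the cgroup v1 per-thread "tasks" file) is moved into the
# per-vcpu group, which alone receives the default cpuset
echo "$VCPU_TID" > "$SCOPE/vcpu0/tasks"
echo 0-1 > "$SCOPE/vcpu0/cpuset.cpus"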

Comment 3 Luyao Huang 2015-05-28 03:37:39 UTC
I can reproduce this issue with libvirt-1.2.13-1.el7.x86_64:

1. Start a VM that has a cpuset like this:
...
  <vcpu placement='static' cpuset='0-1' current='2'>4</vcpu>
...

2. check the cgroup:

# cgget -g cpuset /machine.slice/machine-qemu\\x2dtest3.scope
/machine.slice/machine-qemu\x2dtest3.scope:
cpuset.isolcpus: 
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 0
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-1

# cgget -g cpuset /machine.slice/machine-qemu\\x2dtest3.scope/vcpu0
/machine.slice/machine-qemu\x2dtest3.scope/vcpu0:
cpuset.isolcpus: 
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 0
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-1

3. Try to pin vCPU 1 outside 0-1:

# virsh vcpupin test3 1 2-3
error: Requested operation is not valid: failed to set cpuset.cpus in cgroup for vcpu 1

Then verify the fix with libvirt-1.2.15-2.el7.x86_64:

1. Start a VM that has a cpuset like this:
...
  <vcpu placement='static' cpuset='0-1' current='2'>4</vcpu>
...

2. check the cgroup:

# cgget -g cpuset /machine.slice/machine-qemu\\x2dtest3.scope
/machine.slice/machine-qemu\x2dtest3.scope:
cpuset.isolcpus: 2-3
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-3

# cgget -g cpuset /machine.slice/machine-qemu\\x2dtest3.scope/vcpu0
/machine.slice/machine-qemu\x2dtest3.scope/vcpu0:
cpuset.isolcpus: 
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 0-1

3. Pin vCPU 1 outside 0-1:

# virsh vcpupin test3 1 2-3

4. Recheck the pinning and the cgroup:

# virsh vcpupin test3
VCPU: CPU Affinity
----------------------------------
   0: 0-1
   1: 2-3

# cgget -g cpuset /machine.slice/machine-qemu\\x2dtest3.scope/vcpu1
/machine.slice/machine-qemu\x2dtest3.scope/vcpu1:
cpuset.isolcpus: 2-3
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0
cpuset.cpus: 2-3
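
As an additional cross-check (generic tooling, not part of the original verification), the qemu vCPU thread's scheduler affinity can be compared against the cgroup mask; the thread ID comes from the QMP query-cpus reply:

# list vCPU thread IDs via the monitor, then inspect one thread's affinity
virsh qemu-monitor-command test3 --pretty '{"execute":"query-cpus"}'
taskset -pc <thread_id>   # expected affinity list for vCPU 1: 2-3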

Comment 5 errata-xmlrpc 2015-11-19 06:26:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html

