Bug 1306556 - [RFE] Allow specifying cpu pinning for inactive vcpus
Summary: [RFE] Allow specifying cpu pinning for inactive vcpus
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
Docs Contact: Jiri Herrmann
URL:
Whiteboard:
Depends On:
Blocks: 1175463 1193173 1305606 1313485 1328072
 
Reported: 2016-02-11 09:43 UTC by Peter Krempa
Modified: 2016-11-03 18:37 UTC
CC List: 7 users

Fixed In Version: libvirt-1.3.3-1.el7
Doc Type: Release Note
Doc Text:
Kindly, developers, please add some info here, mainly: what the new feature is, and how it helps the user.
Clone Of:
Cloned As: 1328072
Environment:
Last Closed: 2016-11-03 18:37:51 UTC
Target Upstream Version:
Embargoed:




Links
System ID: Red Hat Product Errata RHSA-2016:2577
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Moderate: libvirt security, bug fix, and enhancement update
Last Updated: 2016-11-03 12:07:06 UTC

Description Peter Krempa 2016-02-11 09:43:36 UTC
Description of problem:
Currently, CPU pinning information can be configured only for active vCPUs. On vCPU hotplug, the new vCPU has to be pinned explicitly; on vCPU hot-unplug, the pinning information is deleted. Libvirt should be able to keep the information for vCPUs that are configured as inactive and reuse it afterwards.

Note: we already do this for the per-vCPU scheduler type and priority settings; a sketch of the requested behavior follows.
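For illustration, a minimal sketch of the requested behavior (the domain name and CPU numbers are hypothetical; the actual verification is in comment 4 below). Pinning configured for an offline vCPU would be kept in the persistent XML, next to the per-vCPU scheduler settings that already survive today:

# virsh dumpxml guest
...
  <vcpu placement='static' current='1'>4</vcpu>
  <cputune>
    <!-- requested: keep this even while vCPU 2 is offline, apply it on hotplug -->
    <vcpupin vcpu='2' cpuset='1-3'/>
    <!-- already persisted today: per-vCPU scheduler type and priority -->
    <vcpusched vcpus='2' scheduler='fifo' priority='1'/>
  </cputune>
...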

Comment 1 Peter Krempa 2016-03-09 09:37:49 UTC
This feature was added upstream by:

commit 02ae21deb3b5d91a6bd91d773265b6622a102985
Author: Peter Krempa <pkrempa>
Date:   Fri Feb 12 14:57:45 2016 +0100

    qemu: add support for offline vcpupin
    
    Allow pinning for inactive vcpus. The pinning mask will be automatically
    applied as we would apply the default mask in case of a cpu hotplug.
    
    Setting the scheduler settings for a vcpu has the same semantics.

... and a number of refactoring commits necessary for that change.

v1.3.2-96-g02ae21d
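In practice this means pinning can now be set for a not-yet-plugged vCPU, and the saved mask is applied automatically on hotplug. A minimal sketch (the guest name and CPU numbers are hypothetical; the full verification is in comment 4 below):

# virsh vcpupin guest 2 1-3 --config    # vCPU 2 is still offline; the mask is only stored
# virsh setvcpus guest 3                # hotplug vCPU 2; the stored mask is applied
# virsh vcpupin guest 2 --live          # should now report affinity 1-3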

Comment 2 Mike McCune 2016-03-28 23:19:36 UTC
This bug was accidentally moved from POST to MODIFIED by an error in automation. Please contact mmccune with any questions.

Comment 4 Luyao Huang 2016-08-15 09:49:26 UTC
Verified this bug with libvirt-2.0.0-5.el7.x86_64:

1. prepare an inactive guest:

# virsh dumpxml r7
...
  <vcpu placement='static' current='1'>10</vcpu>
...

2. check vcpupin:

# virsh vcpupin r7 --config
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 0-23
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 0-23
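
(The default affinity of 0-23 just means each vCPU may run on any of the test host's 24 physical CPUs; no explicit pinning is configured yet.)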

3. bind vCPUs 2 and 9 to host CPUs in the persistent config:
# virsh vcpupin r7 2 1-3 --config

# virsh vcpupin r7 9 23 --config


4. check inactive xml:

# virsh dumpxml r7
...
  <vcpu placement='static' current='1'>10</vcpu>
  <cputune>
    <vcpupin vcpu='2' cpuset='1-3'/>
    <vcpupin vcpu='9' cpuset='23'/>
  </cputune>
...

# virsh vcpupin r7
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 1-3
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 23

5. start guest and recheck guest xml and vcpupin:

# virsh start r7
Domain r7 started

# virsh vcpupin r7
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 1-3
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 23

# virsh dumpxml r7
...
  <vcpu placement='static' current='1'>10</vcpu>
  <cputune>
    <vcpupin vcpu='2' cpuset='1-3'/>
    <vcpupin vcpu='9' cpuset='23'/>
  </cputune>
...

6. hotplug vCPU 2 (raise the active vCPU count to 3):

# virsh setvcpus r7 3

7. check vcpupin,taskset and cgroup:

# virsh vcpupin r7
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 1-3
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 23

# cat /proc/`pidof qemu-kvm`/task/*/status
Name:	CPU 2/KVM
...
Cpus_allowed:	00000e
Cpus_allowed_list:	1-3
...
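
(Cpus_allowed 00000e is hex 0xe = binary 1110, i.e. CPUs 1, 2, and 3 are allowed, matching the Cpus_allowed_list of 1-3 and the configured pinning.)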

# cgget -g cpuset /machine.slice/machine-qemu\\x2d2\\x2dr7.scope/vcpu2
/machine.slice/machine-qemu\x2d2\x2dr7.scope/vcpu2:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0-3
cpuset.cpus: 1-3
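
(cpuset.cpus: 1-3 confirms that the pinning saved with --config was applied to the vcpu2 cgroup automatically when the vCPU was hotplugged.)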

8. change the vcpu 8 pinning on the live domain (vCPU 8 is still offline at this point):

# virsh vcpupin r7 8 1

# virsh vcpupin r7
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 1-3
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 1
   9: 23

9. hot-plug vCPU 8 (raise the active vCPU count to 9):

# virsh setvcpus r7 9 

# virsh vcpupin r7
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 1-3
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 1
   9: 23

10. check taskset and cgroup:

# taskset -c -p 23596
pid 23596's current affinity list: 1
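
(PID 23596 is presumably the thread ID of the newly started "CPU 8/KVM" vCPU thread, found under /proc/<pid>/task as in step 7; its affinity list of 1 matches the live pinning set in step 8.)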

# cgget -g cpuset /machine.slice/machine-qemu\\x2d2\\x2dr7.scope/vcpu8
/machine.slice/machine-qemu\x2d2\x2dr7.scope/vcpu8:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0-3
cpuset.cpus: 1

11. restart libvirtd and recheck:

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

# virsh vcpupin r7
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 1-3
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 1
   9: 23

# virsh dumpxml r7
...
  <vcpu placement='static' current='9'>10</vcpu>
  <cputune>
    <vcpupin vcpu='2' cpuset='1-3'/>
    <vcpupin vcpu='8' cpuset='1'/>
    <vcpupin vcpu='9' cpuset='23'/>
  </cputune>
...

Comment 6 errata-xmlrpc 2016-11-03 18:37:51 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2577.html

