Bug 1306556 - [RFE] Allow specifying cpu pinning for inactive vcpus
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.2
Hardware: Unspecified   OS: Unspecified
Priority: unspecified   Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Peter Krempa
QA Contact: Virtualization Bugs
Docs Contact: Jiri Herrmann
Keywords: FutureFeature
Depends On:
Blocks: 1175463 1193173 1305606 1313485 1328072
Reported: 2016-02-11 04:43 EST by Peter Krempa
Modified: 2016-11-03 14:37 EDT
CC List: 7 users

See Also:
Fixed In Version: libvirt-1.3.3-1.el7
Doc Type: Release Note
Doc Text:
Developers, please kindly add some info here, mainly: what the new feature is and how it helps the user.
Story Points: ---
Clone Of:
Clones: 1328072
Environment:
Last Closed: 2016-11-03 14:37:51 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Peter Krempa 2016-02-11 04:43:36 EST
Description of problem:
Currently, CPU pinning information can be configured only for active vCPUs. After a vCPU hotplug the new vCPU has to be pinned explicitly, and on vCPU hot-unplug the pinning information is deleted. Libvirt should be able to keep pinning information for vCPUs that are configured as inactive and reuse it when they are plugged in again.

Note: we already do this for the vCPU scheduler type and priority settings.
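
For illustration, a minimal sketch of how such retained pinning could look in the inactive domain XML (the domain below is hypothetical; comment 4 shows the verified output):

  <vcpu placement='static' current='1'>4</vcpu>
  <cputune>
    <!-- vcpu 2 is currently offline; its pinning is kept and will be
         applied automatically when the vcpu is hot-plugged -->
    <vcpupin vcpu='2' cpuset='1-3'/>
  </cputune>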
Comment 1 Peter Krempa 2016-03-09 04:37:49 EST
This feature was added upstream by:

commit 02ae21deb3b5d91a6bd91d773265b6622a102985
Author: Peter Krempa <pkrempa@redhat.com>
Date:   Fri Feb 12 14:57:45 2016 +0100

    qemu: add support for offline vcpupin
    
    Allow pinning for inactive vcpus. The pinning mask will be automatically
    applied as we would apply the default mask in case of a cpu hotplug.
    
    Setting the scheduler settings for a vcpu has the same semantics.

... and a number of refactoring patches that were necessary for that change.

v1.3.2-96-g02ae21d
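
As a usage sketch of the new behavior (the domain name 'guest' and the CPU numbers are hypothetical; comment 4 contains the verified steps): pinning an offline vCPU in the persistent configuration, e.g.

# virsh vcpupin guest 5 2-4 --config

is now accepted even while vCPU 5 is offline, and when the vCPU is later brought online with

# virsh setvcpus guest 6

the stored mask is applied to the new vCPU thread automatically, as described in the commit message above.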
Comment 2 Mike McCune 2016-03-28 19:19:36 EDT
This bug was accidentally moved from POST to MODIFIED due to an error in automation; please contact mmccune@redhat.com with any questions.
Comment 4 Luyao Huang 2016-08-15 05:49:26 EDT
Verified this bug with libvirt-2.0.0-5.el7.x86_64:

1. prepare an inactive guest:

# virsh dumpxml r7
...
  <vcpu placement='static' current='1'>10</vcpu>
...

2. check vcpupin:

# virsh vcpupin r7 --config
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 0-23
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 0-23

3. bind vcpus 2 and 9 to host cpus:
# virsh vcpupin r7 2 1-3 --config

# virsh vcpupin r7 9 23 --config


4. check inactive xml:

# virsh dumpxml r7
...
  <vcpu placement='static' current='1'>10</vcpu>
  <cputune>
    <vcpupin vcpu='2' cpuset='1-3'/>
    <vcpupin vcpu='9' cpuset='23'/>
  </cputune>
...

# virsh vcpupin r7
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 1-3
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 23

5. start guest and recheck guest xml and vcpupin:

# virsh start r7
Domain r7 started

# virsh vcpupin r7
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 1-3
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 23

# virsh dumpxml r7
...
  <vcpu placement='static' current='1'>10</vcpu>
  <cputune>
    <vcpupin vcpu='2' cpuset='1-3'/>
    <vcpupin vcpu='9' cpuset='23'/>
  </cputune>
...

6. hotplug vcpu 2:

# virsh setvcpus r7 3

7. check vcpupin, taskset and cgroup:

# virsh vcpupin r7
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 1-3
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 23

# cat /proc/`pidof qemu-kvm`/task/*/status
Name:	CPU 2/KVM
...
Cpus_allowed:	00000e
Cpus_allowed_list:	1-3
...

# cgget -g cpuset /machine.slice/machine-qemu\\x2d2\\x2dr7.scope/vcpu2
/machine.slice/machine-qemu\x2d2\x2dr7.scope/vcpu2:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0-3
cpuset.cpus: 1-3
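
Note: in the /proc status output above, the Cpus_allowed mask 00000e is binary 1110, i.e. host CPUs 1-3, and the vcpu2 cgroup likewise shows cpuset.cpus: 1-3, so the pinning that was stored while vCPU 2 was offline has been applied automatically on hotplug.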

8. change the vcpupin of (still inactive) vcpu 8 on the live domain:

# virsh vcpupin r7 8 1

# virsh vcpupin r7
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 1-3
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 1
   9: 23

9. hot-plug vcpu 8:

# virsh setvcpus r7 9 

# virsh vcpupin r7
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 1-3
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 1
   9: 23

10. check taskset and cgroup for the hot-plugged vcpu 8 thread:

# taskset -c -p 23596
pid 23596's current affinity list: 1

# cgget -g cpuset /machine.slice/machine-qemu\\x2d2\\x2dr7.scope/vcpu8
/machine.slice/machine-qemu\x2d2\x2dr7.scope/vcpu8:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 0-3
cpuset.cpus: 1
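
As with vCPU 2 above, the thread affinity and the vcpu8 cgroup now reflect the pinning (host CPU 1) that was set while vCPU 8 was still offline.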

11. restart libvirtd and recheck:

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

# virsh vcpupin r7
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 1-3
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 1
   9: 23

# virsh dumpxml r7
...
  <vcpu placement='static' current='9'>10</vcpu>
  <cputune>
    <vcpupin vcpu='2' cpuset='1-3'/>
    <vcpupin vcpu='8' cpuset='1'/>
    <vcpupin vcpu='9' cpuset='23'/>
  </cputune>
...
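
The pinning is preserved across the libvirtd restart in both the live state and the persistent XML, including the entry for vCPU 9, which is still offline (current='9' means only vCPUs 0-8 are active).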
Comment 6 errata-xmlrpc 2016-11-03 14:37:51 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2577.html
