Bug 1097930 - [RFE] Hot Un-Plug CPU - Support dynamic virtual CPU deallocation - libvirt
Summary: [RFE] Hot Un-Plug CPU - Support dynamic virtual CPU deallocation - libvirt
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Peter Krempa
QA Contact: Virtualization Bugs
URL:
Whiteboard: virt
Duplicates: 918283 (view as bug list)
Depends On: 918282 1087672 1097929 1146944 1167336 1167392
Blocks: 851497 1203710 1205796 950268 955396 RHEV_HOT_UNPLUG_CPU 1099775 1105185 1289173
 
Reported: 2014-05-14 21:52 UTC by Karen Noel
Modified: 2020-08-13 08:08 UTC (History)
CC List: 19 users

Fixed In Version: libvirt-2.0.0-8.el7
Doc Type: Enhancement
Doc Text:
Clone Of: 1097929
Environment:
Last Closed: 2016-11-03 18:07:58 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2016:2577 0 normal SHIPPED_LIVE Moderate: libvirt security, bug fix, and enhancement update 2016-11-03 12:07:06 UTC

Comment 2 Peter Krempa 2015-03-04 10:34:15 UTC
Description of problem:
libvirt should add/fix support for vCPU unplug once qemu implements it

Comment 3 Peter Krempa 2015-03-04 12:44:00 UTC
*** Bug 918283 has been marked as a duplicate of this bug. ***

Comment 8 Scott Herold 2016-08-26 13:22:49 UTC
Ack'd

Comment 12 Luyao Huang 2016-09-19 01:25:12 UTC
Verified this bug with libvirt-2.0.0-9.el7.x86_64:

S1) Basic check for the new <vcpus> element:

1. add hotpluggable vcpus to the guest XML:

  <vcpu placement='static' current='5'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='yes' hotpluggable='yes'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='yes' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>
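The consistency rules visible in the dumps above can be sanity-checked with a short standalone Python sketch (illustrative only, not libvirt code; the XML literal is a truncated copy of the configuration above): the number of enabled vCPUs matches the `current` attribute, and the boot vCPU (id 0) is the non-hotpluggable one.

```python
import xml.etree.ElementTree as ET

# Truncated copy of the domain XML above (illustrative).
XML = """
<domain>
  <vcpu placement='static' current='5'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='yes' hotpluggable='yes'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='yes' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
  </vcpus>
</domain>
"""

root = ET.fromstring(XML)
current = int(root.find('vcpu').get('current'))   # vCPUs enabled at start
vcpus = root.find('vcpus').findall('vcpu')

enabled = [v for v in vcpus if v.get('enabled') == 'yes']
boot = [v for v in vcpus if v.get('hotpluggable') == 'no']

# The enabled count must equal 'current'; the boot vCPU stays enabled.
assert len(enabled) == current
assert all(v.get('enabled') == 'yes' for v in boot)
```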



	
2. recheck the guest XML:

# virsh dumpxml r7 --inactive
...
  <vcpu placement='static' current='5'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='yes' hotpluggable='yes'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='yes' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>
...

	
3. start the guest and check the qemu command line and the libvirt debug log (or systemtap):


# virsh dumpxml r7
...
  <vcpu placement='static' current='5'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='yes' hotpluggable='yes' order='5'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>

# ps aux|grep qemu |grep cpu
... -realtime mlock=off -smp 1,maxcpus=12,sockets=6,cores=2,threads=1...

#  stap qemu-monitor.stp
...
 51.572 > 0x7fdac401ac60 {"execute":"device_add","arguments":{"driver":"Opteron_G4-x86_64-cpu","id":"vcpu1","socket-id":0,"core-id":1,"thread-id":0},"id":"libvirt-8"}
 51.594 < 0x7fdac401ac60 {"return": {}, "id": "libvirt-8"}
 51.595 > 0x7fdac401ac60 {"execute":"device_add","arguments":{"driver":"Opteron_G4-x86_64-cpu","id":"vcpu2","socket-id":1,"core-id":0,"thread-id":0},"id":"libvirt-9"}
 51.617 < 0x7fdac401ac60 {"return": {}, "id": "libvirt-9"}
 51.617 > 0x7fdac401ac60 {"execute":"device_add","arguments":{"driver":"Opteron_G4-x86_64-cpu","id":"vcpu3","socket-id":1,"core-id":1,"thread-id":0},"id":"libvirt-10"}
 51.640 < 0x7fdac401ac60 {"return": {}, "id": "libvirt-10"}
 51.640 > 0x7fdac401ac60 {"execute":"device_add","arguments":{"driver":"Opteron_G4-x86_64-cpu","id":"vcpu5","socket-id":2,"core-id":1,"thread-id":0},"id":"libvirt-11"}
 51.662 < 0x7fdac401ac60 {"return": {}, "id": "libvirt-11"}
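For reference, the socket-id/core-id/thread-id values in the device_add calls above appear to follow a simple linear mapping from the vCPU id under the `-smp sockets=6,cores=2,threads=1` topology shown on the command line. A small sketch of that assumed mapping (not libvirt code):

```python
# Topology from the qemu command line above: -smp ...,sockets=6,cores=2,threads=1
SOCKETS, CORES, THREADS = 6, 2, 1

def topo(vcpu_id):
    """Assumed linear id -> (socket, core, thread) decomposition."""
    thread = vcpu_id % THREADS
    core = (vcpu_id // THREADS) % CORES
    socket = vcpu_id // (CORES * THREADS)
    return socket, core, thread

# Matches the monitor log: vcpu1 -> socket 0/core 1, vcpu2 -> socket 1/core 0,
# vcpu3 -> socket 1/core 1, vcpu5 -> socket 2/core 1.
assert topo(1) == (0, 1, 0)
assert topo(2) == (1, 0, 0)
assert topo(3) == (1, 1, 0)
assert topo(5) == (2, 1, 0)
```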


	
4. restart libvirtd and recheck guest xml:

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

# virsh dumpxml r7
...
  <vcpu placement='static' current='5'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='yes' hotpluggable='yes' order='5'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>


S2) test vcpu hot-plug and cold-plug:

 1. prepare a guest

# virsh dumpxml r7
...
  <vcpu placement='static' current='5'>12</vcpu>
...

	
2. hot-plug vcpu:

# virsh setvcpus r7 7


	
3. recheck xml and guest status:
# virsh dumpxml r7
...
  <vcpu placement='static' current='7'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
    <vcpu id='2' enabled='yes' hotpluggable='no' order='3'/>
    <vcpu id='3' enabled='yes' hotpluggable='no' order='4'/>
    <vcpu id='4' enabled='yes' hotpluggable='no' order='5'/>
    <vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/>
    <vcpu id='6' enabled='yes' hotpluggable='yes' order='7'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>
...

IN GUEST:
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                7
On-line CPU(s) list:   0-6
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            21
Model:                 1
Model name:            AMD Opteron 62xx class CPU
Stepping:              2
CPU MHz:               2400.046
BogoMIPS:              4800.09
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
NUMA node0 CPU(s):     0,2
NUMA node1 CPU(s):     1,3-6



	
4. check vcpu-related command output:
# virsh vcpucount r7
maximum      config        12
maximum      live          12
current      config         5
current      live           7


# virsh vcpuinfo r7
VCPU:           0
CPU:            1
State:          running
CPU time:       14.5s
CPU Affinity:   -yyyyyyyyyy-------------

VCPU:           1
CPU:            5
State:          running
CPU time:       5.6s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           2
CPU:            13
State:          running
CPU time:       7.5s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           3
CPU:            21
State:          running
CPU time:       6.3s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           4
CPU:            11
State:          running
CPU time:       4.2s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           5
CPU:            23
State:          running
CPU time:       3.0s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           6
CPU:            19
State:          running
CPU time:       2.8s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyy


	
5. restart libvirtd and recheck xml:

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

# virsh dumpxml r7
...
  <vcpu placement='static' current='7'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
    <vcpu id='2' enabled='yes' hotpluggable='no' order='3'/>
    <vcpu id='3' enabled='yes' hotpluggable='no' order='4'/>
    <vcpu id='4' enabled='yes' hotpluggable='no' order='5'/>
    <vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/>
    <vcpu id='6' enabled='yes' hotpluggable='yes' order='7'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>
...

	
6. check cgroup info:
# lscgroup

...
cpuset:/machine.slice/machine-qemu\x2d3\x2dr7.scope/vcpu6
cpuset:/machine.slice/machine-qemu\x2d3\x2dr7.scope/vcpu5
cpuset:/machine.slice/machine-qemu\x2d3\x2dr7.scope/iothread4
cpuset:/machine.slice/machine-qemu\x2d3\x2dr7.scope/iothread3
cpuset:/machine.slice/machine-qemu\x2d3\x2dr7.scope/iothread2
cpuset:/machine.slice/machine-qemu\x2d3\x2dr7.scope/iothread1
cpuset:/machine.slice/machine-qemu\x2d3\x2dr7.scope/vcpu4
cpuset:/machine.slice/machine-qemu\x2d3\x2dr7.scope/vcpu3
cpuset:/machine.slice/machine-qemu\x2d3\x2dr7.scope/vcpu2
cpuset:/machine.slice/machine-qemu\x2d3\x2dr7.scope/vcpu1
cpuset:/machine.slice/machine-qemu\x2d3\x2dr7.scope/vcpu0
...
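The `\x2d` sequences in these cgroup paths come from systemd unit-name escaping of the machine name (here "qemu-3-r7": domain id 3, name r7). A minimal sketch of the escaping, assuming the simple `\xXX` rule for bytes outside the safe set:

```python
import string

# Characters systemd leaves unescaped in unit names (assumed safe set).
SAFE = set(string.ascii_letters + string.digits + ':_.')

def systemd_escape(name):
    """Escape unsafe bytes as \\xXX, as seen in the cgroup paths above."""
    return ''.join(c if c in SAFE else '\\x%02x' % ord(c) for c in name)

scope = 'machine-' + systemd_escape('qemu-3-r7') + '.scope'
assert scope == r'machine-qemu\x2d3\x2dr7.scope'
```

Note also that per-vCPU cpuset directories (vcpu5, vcpu6) exist only for vCPUs currently present, so new ones appear after hot-plug.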

	
7. test cold-plug:

# virsh setvcpus r7  8 --config


	
8. check vcpu:
# virsh vcpucount r7
maximum      config        12
maximum      live          12
current      config         8
current      live           7


	
9. recheck the inactive XML:
# virsh dumpxml r7 --inactive
...
  <vcpu placement='static' current='8'>12</vcpu>
...

Comment 13 Luyao Huang 2016-09-19 01:29:09 UTC
S3) test vcpu hot-unplug and cold-unplug:

 1. start a guest with hotpluggable vcpus in the XML:

  <vcpu placement='static' current='5'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='yes' hotpluggable='yes'/>
    <vcpu id='4' enabled='yes' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>


# virsh start r7
Domain r7 started
	
2. check guest vcpu count:
# virsh vcpucount r7
maximum      config        12
maximum      live          12
current      config         5
current      live           5

# virsh vcpuinfo r7
VCPU:           0
CPU:            7
State:          running
CPU time:       14.5s
CPU Affinity:   -yyyyyyyyyy-------------

VCPU:           1
CPU:            17
State:          running
CPU time:       7.1s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           2
CPU:            13
State:          running
CPU time:       7.6s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           3
CPU:            21
State:          running
CPU time:       7.7s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           4
CPU:            5
State:          running
CPU time:       5.8s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyy

	
3. check cpu number in guest:
IN GUEST:

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                5
On-line CPU(s) list:   0-4
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             3
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            21
Model:                 1
Model name:            AMD Opteron 62xx class CPU
Stepping:              2
CPU MHz:               2400.046
BogoMIPS:              4800.09
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
NUMA node0 CPU(s):     0-2
NUMA node1 CPU(s):     3,4


	
4. hot-unplug vcpu:

# virsh setvcpus r7 2


	
5. recheck the XML and guest status:
# virsh dumpxml r7
...
  <vcpu placement='static' current='2'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='2' enabled='no' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>
...

# virsh vcpucount r7
maximum      config        12
maximum      live          12
current      config         5
current      live           2

# virsh vcpuinfo r7
VCPU:           0
CPU:            9
State:          running
CPU time:       19.4s
CPU Affinity:   -yyyyyyyyyy-------------

VCPU:           1
CPU:            1
State:          running
CPU time:       12.4s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyy


IN GUEST:

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            21
Model:                 1
Model name:            AMD Opteron 62xx class CPU
Stepping:              2
CPU MHz:               2400.046
BogoMIPS:              4800.09
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
NUMA node0 CPU(s):     0,1
NUMA node1 CPU(s):     

6. restart libvirtd and recheck the guest XML:

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

# virsh dumpxml r7
...
  <vcpu placement='static' current='2'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='2' enabled='no' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>
...
	
7. cold-unplug vcpus:

# virsh vcpucount r7
maximum      config        12
maximum      live          12
current      config         5
current      live           2

# virsh dumpxml r7 --inactive
...
  <vcpu placement='static' current='5'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='yes' hotpluggable='yes'/>
    <vcpu id='4' enabled='yes' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>
...

# virsh setvcpus r7 3 --config


	
8. recheck status:
# virsh dumpxml r7 --inactive
...
  <vcpu placement='static' current='3'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>
...

# virsh vcpucount r7
maximum      config        12
maximum      live          12
current      config         3
current      live           2
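The interplay of the config and live counts in the transcripts above can be summarized with a toy model (illustrative names only, not a libvirt API): hot-(un)plug changes only the live count, while --config changes only the persistent count.

```python
# Toy model of the 'virsh vcpucount' matrix (illustrative, not libvirt code).
state = {'maximum': 12, 'config': 5, 'live': 5}

def setvcpus(state, count, config=False):
    assert count <= state['maximum']
    state['config' if config else 'live'] = count

setvcpus(state, 2)                # virsh setvcpus r7 2          (hot-unplug)
assert (state['config'], state['live']) == (5, 2)

setvcpus(state, 3, config=True)   # virsh setvcpus r7 3 --config (cold-unplug)
assert (state['config'], state['live']) == (3, 2)
```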

Comment 14 Luyao Huang 2016-09-19 01:57:44 UTC
S4) test vcpu hot-plug order:

1. prepare inactive guest xml like this:
...
  <vcpu placement='static' current='4'>10</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='4'/>
    <vcpu id='2' enabled='no' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
  </vcpus>
...

	
2. start guest:
# virsh start r7
Domain r7 started

	
3. check systemtap output:
...
 17.024 > 0x7f90fc012540 {"execute":"device_add","arguments":{"driver":"qemu64-x86_64-cpu","id":"vcpu6","socket-id":6,"core-id":0,"thread-id":0},"id":"libvirt-8"}
 17.033 < 0x7f90fc012540 {"return": {}, "id": "libvirt-8"}
 17.033 > 0x7f90fc012540 {"execute":"device_add","arguments":{"driver":"qemu64-x86_64-cpu","id":"vcpu4","socket-id":4,"core-id":0,"thread-id":0},"id":"libvirt-9"}
 17.041 < 0x7f90fc012540 {"return": {}, "id": "libvirt-9"}
 17.041 > 0x7f90fc012540 {"execute":"device_add","arguments":{"driver":"qemu64-x86_64-cpu","id":"vcpu1","socket-id":1,"core-id":0,"thread-id":0},"id":"libvirt-10"}
 17.049 < 0x7f90fc012540 {"return": {}, "id": "libvirt-10"}
...
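Note that the hotplug sequence in the log follows the `order` attribute, not the vCPU id: vcpu6 (order 2) is plugged first, then vcpu4 (order 3), then vcpu1 (order 4). A short sketch reproducing that sequence from the XML above (truncated to ids 0-6):

```python
# (id, enabled, hotpluggable, order) rows from the XML above (illustrative).
vcpus = [
    (0, True,  False, 1),
    (1, True,  True,  4),
    (2, False, True,  None),
    (3, False, True,  None),
    (4, True,  True,  3),
    (5, False, True,  None),
    (6, True,  True,  2),
]

# Enabled hotpluggable vCPUs are plugged in ascending 'order'; the boot
# vCPU (hotpluggable='no') is never plugged via device_add.
plug_sequence = [vid for vid, _en, _hp, _order in sorted(
    (v for v in vcpus if v[1] and v[2]), key=lambda v: v[3])]

# Matches the device_add calls: vcpu6, then vcpu4, then vcpu1.
assert plug_sequence == [6, 4, 1]
```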

	
4. check active guest xml:

# virsh dumpxml r7
...
  <vcpu placement='static' current='4'>10</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='4'/>
    <vcpu id='2' enabled='no' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
  </vcpus>
...

5. log in to the guest and check the guest CPUs:

IN GUEST:
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            6
Model:                 13
Model name:            QEMU Virtual CPU version 2.5+
Stepping:              3
CPU MHz:               2399.996
BogoMIPS:              4800.09
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
NUMA node0 CPU(s):     0,2
NUMA node1 CPU(s):     1,3


6. restart libvirtd and recheck guest xml:

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

# virsh dumpxml r7
...
  <vcpu placement='static' current='4'>10</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='4'/>
    <vcpu id='2' enabled='no' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
  </vcpus>
...


S5) test a guest with NUMA binding:

1. prepare a numa machine:

# numactl --hard
available: 4 nodes (0-3)
node 0 cpus: 0 2 4 6 8 10
node 0 size: 10205 MB
node 0 free: 9204 MB
node 1 cpus: 12 14 16 18 20 22
node 1 size: 8192 MB
node 1 free: 4706 MB
node 2 cpus: 1 3 5 7 9 11
node 2 size: 6144 MB
node 2 free: 5717 MB
node 3 cpus: 13 15 17 19 21 23
node 3 size: 8175 MB
node 3 free: 7371 MB
node distances:
node   0   1   2   3
  0:  10  20  20  20
  1:  20  10  20  20
  2:  20  20  10  20
  3:  20  20  20  10

# cat /proc/zoneinfo |grep DMA
Node 0, zone      DMA
Node 0, zone    DMA32
	
2. start a guest whose memory is bound to a NUMA node that has no DMA zone:

# virsh numatune r7
numa_mode      : strict
numa_nodeset   : 1

# virsh dumpxml r7
..
  <vcpu placement='static' current='4'>10</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='4'/>
    <vcpu id='2' enabled='no' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
  </vcpus>

# virsh start r7
Domain r7 started

	
3. check cgroup:

# cgget -g cpuset /machine.slice/machine-qemu\\x2d9\\x2dr7.scope/emulator
/machine.slice/machine-qemu\x2d9\x2dr7.scope/emulator:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 1
cpuset.cpus: 0-23


	
4. hot-plug vcpu and check guest status:

# virsh setvcpus r7 10

# virsh dominfo r7
Id:             13
Name:           r7
UUID:           67c7a123-5415-4136-af62-a2ee098ba6cd
OS Type:        hvm
State:          running
CPU(s):         10
CPU time:       21.8s
Max memory:     1179648 KiB
Used memory:    1179648 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: selinux
Security DOI:   0
Security label: system_u:system_r:svirt_t:s0:c288,c475 (enforcing)


	
5. recheck cgroup:

# cgget -g cpuset /machine.slice/machine-qemu\\x2d9\\x2dr7.scope/emulator
/machine.slice/machine-qemu\x2d9\x2dr7.scope/emulator:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 1
cpuset.cpus: 0-23

# cgget -g cpuset /machine.slice/machine-qemu\\x2d9\\x2dr7.scope/vcpu9
/machine.slice/machine-qemu\x2d9\x2dr7.scope/vcpu9:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 1
cpuset.cpus: 0-23

# cgget -g cpuset /machine.slice/machine-qemu\\x2d9\\x2dr7.scope/vcpu8
/machine.slice/machine-qemu\x2d9\x2dr7.scope/vcpu8:
cpuset.memory_spread_slab: 0
cpuset.memory_spread_page: 0
cpuset.memory_pressure: 0
cpuset.memory_migrate: 1
cpuset.sched_relax_domain_level: -1
cpuset.sched_load_balance: 1
cpuset.mem_hardwall: 0
cpuset.mem_exclusive: 0
cpuset.cpu_exclusive: 0
cpuset.mems: 1
cpuset.cpus: 0-23

....

Comment 15 Luyao Huang 2016-09-19 02:42:18 UTC
S6) Test migration to a host whose libvirt does not support hotpluggable vcpus:

1. start a guest whose enabled hotpluggable vcpus are non-contiguous:

# virsh dumpxml r7-mig
...
  <vcpu placement='static' current='4'>10</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='4'/>
    <vcpu id='2' enabled='no' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
  </vcpus>

	
2. migrate to a host with an old libvirt that does not support hotpluggable vcpus:

# virsh migrate r7-mig qemu+ssh://dest/system --live  --verbose
error: internal error: Unknown migration cookie feature cpu-hotplug

	
3. start a guest whose enabled hotpluggable vcpus are contiguous:

# virsh dumpxml r7-mig
...
  <vcpu placement='static' current='4'>10</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
  </vcpus>


	
4. migrate to a host with an old libvirt that does not support hotpluggable vcpus:
# virsh migrate r7-mig qemu+ssh://dest/system --verbose --live
Migration: [100 %]


	
5. check guest status in dest host:
# virsh dumpxml r7-mig
...
  <vcpu placement='static' current='4'>10</vcpu>
...

# ps aux|grep qemu
...-smp 4,maxcpus=10,sockets=10,cores=1,threads=1...

IN GUEST:
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            6
Model:                 13
Model name:            QEMU Virtual CPU version 2.5+
Stepping:              3
CPU MHz:               2400.027
BogoMIPS:              4800.09
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
NUMA node0 CPU(s):     0,2,3
NUMA node1 CPU(s):     1

	
6. migrate back to source host:
# virsh migrate r7-mig qemu+ssh://source/system --verbose --live
Migration: [100 %]


	
7. check guest status on source host:

IN GUEST:
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            6
Model:                 13
Model name:            QEMU Virtual CPU version 2.5+
Stepping:              3
CPU MHz:               2400.027
BogoMIPS:              4800.09
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
NUMA node0 CPU(s):     0,2,3
NUMA node1 CPU(s):     1

# virsh dumpxml r7-mig
...
  <vcpu placement='static' current='4'>10</vcpu>
...

	
8. hot-plug vcpu and recheck xml:
# virsh setvcpus r7-mig 7

# virsh dumpxml r7-mig
...
  <vcpu placement='static' current='7'>10</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
    <vcpu id='2' enabled='yes' hotpluggable='no' order='3'/>
    <vcpu id='3' enabled='yes' hotpluggable='no' order='4'/>
    <vcpu id='4' enabled='yes' hotpluggable='yes' order='5'/>
    <vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/>
    <vcpu id='6' enabled='yes' hotpluggable='yes' order='7'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
  </vcpus>


	
9. migrate to a host with an old libvirt (no hotpluggable vcpu support):
# virsh migrate r7-mig qemu+ssh://dest/system --verbose --live
Migration: [100 %]

	
10. check guest status on dest host:
IN GUEST:
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                7
On-line CPU(s) list:   0-6
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             7
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            6
Model:                 13
Model name:            QEMU Virtual CPU version 2.5+
Stepping:              3
CPU MHz:               2400.027
BogoMIPS:              4800.09
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
NUMA node0 CPU(s):     0,2-5
NUMA node1 CPU(s):     1,6

Comment 16 Steffen Froemer 2016-09-19 11:34:31 UTC
Hi, could you also verify that the vcpu count set here persists over a guest reboot, as mentioned in BZ#1112686.

I know this is addressed for RHEL 6, but the corresponding RHEL 7 BZ#1146944 only contains a workaround. If the solution presented here fixes it completely, I would advise my customer to upgrade the hypervisors to RHEL 7.3.

This test has to be verified:

[mtessun@mtessun ~]$ virsh vcpucount rhel6 --guest
3

[mtessun@mtessun ~]$ virsh setvcpus rhel6 2 --guest

[mtessun@mtessun ~]$ virsh vcpucount rhel6 --guest
2
<======== Reboot of VM happens here ==========>
[mtessun@mtessun ~]$ virsh vcpucount rhel6 --guest
3

Thanks.
Steffen

Comment 17 Peter Krempa 2016-09-19 12:10:14 UTC
To persist the change the '--guest' flag can't be used since that changes the vcpu count using the guest agent. The scenario should work as desired when not providing the --guest flag, given that vcpus were either hotplugged or properly marked as hotpluggable in the XML.
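A toy model of this distinction (purely illustrative, no libvirt calls): the --guest path only offlines CPUs inside the guest via the agent, so a reboot reverts it, while a real hot-(un)plug changes the domain's live vCPU count, which survives a guest reboot.

```python
# Toy model of comment 17 (illustrative, not a libvirt API).
class Domain:
    def __init__(self, vcpus):
        self.vcpus = vcpus          # libvirt's live vCPU count
        self.guest_online = vcpus   # CPUs online inside the guest

    def set_vcpus(self, n, guest=False):
        if guest:                   # 'virsh setvcpus --guest': agent only
            self.guest_online = n
        else:                       # real hot-(un)plug through QEMU
            self.vcpus = n
            self.guest_online = n

    def reboot(self):               # reboot re-onlines all present vCPUs
        self.guest_online = self.vcpus

dom = Domain(3)
dom.set_vcpus(2, guest=True)        # agent offlines one CPU ...
dom.reboot()
assert dom.guest_online == 3        # ... but the change is lost on reboot

dom.set_vcpus(2)                    # hot-unplug without --guest ...
dom.reboot()
assert dom.guest_online == 2        # ... persists across guest reboots
```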

Comment 18 Steffen Froemer 2016-09-19 12:40:21 UTC
Thanks for clarifying. So I will pass this information on to my customer, as it will work properly in RHEL 7.3, right?

Comment 19 Luyao Huang 2016-09-20 01:42:12 UTC
(In reply to Steffen Froemer from comment #16)
> Hi, could you also verify, that setting vcpu count persist over guest
> reboot, as it's mentioned if BZ#1112686.
> 
> I know, this is addressed for RHEL6, but the appropriated RHEL7 BZ#1146944
> only contains a workaround solution. If the solution presented here, will
> fix it all, I would advise my customer to upgrade to RHEL 7.3 for
> hypervisors.
> 
> This test has to be verified:
> 
> [mtessun@mtessun ~]$ virsh vcpucount rhel6 --guest
> 3
> 
> [mtessun@mtessun ~]$ virsh setvcpus rhel6 2 --guest
> 
> [mtessun@mtessun ~]$ virsh vcpucount rhel6 --guest
> 2
> <======== Reboot of VM happens here ==========>
> [mtessun@mtessun ~]$ virsh vcpucount rhel6 --guest
> 3
> 
> Thanks.
> Steffen

This bug does not fix the hot-unplug-via-guest-agent problem, but you can use the new mechanism (still the same API) to unplug guest vcpus. The steps are:

1. prepare a guest xml with hotpluggable vcpu:

# virsh dumpxml r7
...
  <vcpu placement='static' current='4'>10</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='yes' hotpluggable='yes'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
  </vcpus>
...

2. start guest:

# virsh start r7
Domain r7 started

3. check guest cpu number:

# virsh vcpucount r7 --guest
4

IN GUEST:

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            6
Model:                 13
Model name:            QEMU Virtual CPU version 2.5+
Stepping:              3
CPU MHz:               2399.977
BogoMIPS:              4800.09
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
NUMA node0 CPU(s):     0,2,3
NUMA node1 CPU(s):     1

4. unplug vcpu:

# virsh setvcpus r7 2


5. recheck vcpu number:

# virsh vcpucount r7 --guest
2

IN GUEST:

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             2
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            6
Model:                 13
Model name:            QEMU Virtual CPU version 2.5+
Stepping:              3
CPU MHz:               2399.977
BogoMIPS:              4800.09
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
NUMA node0 CPU(s):     0
NUMA node1 CPU(s):     1

6. reboot guest:

IN GUEST:

# reboot

7. recheck vcpu number:

# virsh vcpucount r7 --guest
2

IN GUEST:

# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             2
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            6
Model:                 13
Model name:            QEMU Virtual CPU version 2.5+
Stepping:              3
CPU MHz:               2400.049
BogoMIPS:              4800.09
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
NUMA node0 CPU(s):     0
NUMA node1 CPU(s):     1
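
The `<vcpus>` element from step 1 can be inspected with any XML parser; a minimal sketch using Python's stdlib (the XML literal below is an abbreviated copy of the configuration above):

```python
# Read a <vcpus> element and report which vCPUs are online and which of
# those are eligible for hot-unplug (enabled AND hotpluggable).
import xml.etree.ElementTree as ET

VCPUS_XML = """
<vcpus>
  <vcpu id='0' enabled='yes' hotpluggable='no'/>
  <vcpu id='1' enabled='yes' hotpluggable='yes'/>
  <vcpu id='2' enabled='yes' hotpluggable='yes'/>
  <vcpu id='3' enabled='yes' hotpluggable='yes'/>
  <vcpu id='4' enabled='no' hotpluggable='yes'/>
</vcpus>
"""

root = ET.fromstring(VCPUS_XML)
enabled = [int(v.get('id')) for v in root if v.get('enabled') == 'yes']
unpluggable = [int(v.get('id')) for v in root
               if v.get('enabled') == 'yes' and v.get('hotpluggable') == 'yes']

print(len(enabled))   # 4 -> matches 'virsh vcpucount r7 --guest' in step 3
print(unpluggable)    # [1, 2, 3]; vcpu 0 was present at boot and is not hotpluggable
```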

Comment 20 Luyao Huang 2016-09-20 02:09:32 UTC
(In reply to Steffen Froemer from comment #18)
> Thanks for clarifying. So I will give my customer these information, as it
> will properly work in RHEL 7.3, right?

It looks like this question is for Peter, so I will set needinfo on him.

Comment 21 Peter Krempa 2016-09-20 05:43:58 UTC
(In reply to Steffen Froemer from comment #18)
> Thanks for clarifying. So I will give my customer these information, as it
> will properly work in RHEL 7.3, right?

Yes, this introduces real CPU hot-unplug. But as I've noted, the guest VM needs to be configured properly to support removing vCPUs that were present at boot (vCPUs added via hotplug are marked as hotpluggable automatically).
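
A rough sketch of the selection behaviour observed in the walkthrough above (this is an assumption inferred from the transcript, not libvirt source code): lowering the count disables enabled, hotpluggable vCPUs, highest id first, which is consistent with `setvcpus r7 2` leaving CPUs 0 and 1 online.

```python
# Hypothetical model of which vCPU ids get disabled by a count reduction.
def plan_unplug(vcpus, target):
    """vcpus: iterable of (id, enabled, hotpluggable) tuples.
    Return the ids that would be disabled to reach `target` online vCPUs."""
    online = [vid for vid, en, _ in vcpus if en]
    removable = sorted((vid for vid, en, hp in vcpus if en and hp),
                       reverse=True)            # highest id first (assumption)
    need = len(online) - target
    if need > len(removable):
        raise ValueError("not enough hotpluggable vCPUs to unplug")
    return removable[:need]

# The configuration from step 1: vcpu 0 booted non-hotpluggable, 1-3 online.
cfg = [(0, True, False), (1, True, True), (2, True, True),
       (3, True, True), (4, False, True)]
print(plan_unplug(cfg, 2))   # [3, 2] -> CPUs 0 and 1 stay online
```

This also shows why a vCPU present at boot and not marked hotpluggable (vcpu 0) can never be removed, no matter how low the requested count.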

Comment 23 errata-xmlrpc 2016-11-03 18:07:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2577.html

