Bug 1370357

Summary: cold plugging vCPUs leaves the guest with broken settings
Product: Red Hat Enterprise Linux 7
Component: libvirt
Version: 7.4
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Reporter: Luyao Huang <lhuang>
Assignee: Peter Krempa <pkrempa>
QA Contact: Jingjing Shao <jishao>
CC: dyuan, jdenemar, libvirt-maint, pkrempa, rbalakri, xuzhang
Target Milestone: rc
Target Release: 7.4
Fixed In Version: libvirt-3.0.0-1.el7
Last Closed: 2017-08-01 17:14:13 UTC
Type: Bug
Bug Blocks: 1401400    

Description Luyao Huang 2016-08-26 03:27:24 UTC
Description of problem:
Cold plugging vCPUs with setvcpus --config produces a broken persistent configuration: after a libvirtd restart the domain definition fails to load and the guest disappears.

Version-Release number of selected component (if applicable):
v2.1.0-209-ge3229f6

How reproducible:
100%

Steps to Reproduce:
1. Prepare an inactive guest that has hot-pluggable vCPUs with order attributes set:

# virsh dumpxml r7
...
  <vcpu placement='auto' current='3'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>
...

2. Cold plug vCPUs (raise the current count from 3 to 5 in the persistent config):

# virsh setvcpus r7 5 --config


3. The resulting XML makes libvirt unable to load the domain definition: the newly enabled vCPUs (ids 3 and 4) have no order attribute while the other enabled vCPUs do:

# virsh dumpxml r7
...
  <vcpu placement='auto' current='5'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='3' enabled='yes' hotpluggable='yes'/>
    <vcpu id='4' enabled='yes' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>
...

4. Restart libvirtd; the domain definition fails to load and the guest disappears:

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service


# virsh edit r7
error: failed to get domain 'r7'
error: Domain not found: no domain with matching name 'r7'


Actual results:

Cold plugging vCPUs leaves the guest with a broken configuration that fails to load after a libvirtd restart.

Expected results:

libvirt should either report an error or automatically assign an order to the newly plugged vCPUs.
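
For illustration only, a sketch of the second option (not the fix that was eventually implemented; see comment 3): auto-assigning order would mean continuing the existing sequence on the newly enabled vCPUs, e.g.:

    <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='3' enabled='yes' hotpluggable='yes' order='4'/>   <-- order auto-assigned (hypothetical)
    <vcpu id='4' enabled='yes' hotpluggable='yes' order='5'/>   <-- order auto-assigned (hypothetical)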

Additional info:

Comment 2 Luyao Huang 2016-09-18 08:11:54 UTC
Cold unplugging vCPUs likewise leaves the guest with broken settings:

# virsh setvcpus r7 1 --config

# virsh dumpxml r7
...
  <vcpu placement='static' current='1'>10</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='no' hotpluggable='no' order='3'/>  <--- invalid: a disabled vCPU cannot be hotpluggable='no', and it still carries an order
    <vcpu id='2' enabled='no' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes' order='4'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes' order='5'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
  </vcpus>

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

# virsh edit r7
error: failed to get domain 'r7'
error: Domain not found: no domain with matching name 'r7'

Comment 3 Peter Krempa 2016-09-30 10:55:20 UTC
commit a88c65e490d45e73715823b455799a58869ddd0e
Author: Peter Krempa <pkrempa>
Date:   Wed Sep 21 07:59:57 2016 +0200

    qemu: vcpu: Clear vcpu order information rather than making it invalid
    
    Certain operations may make the vcpu order information invalid. Since
    the order is primarily used to ensure migration compatibility and has
    basically no other user benefits, clear the order prior to certain
    operations and document that it may be cleared.
    
    All the operations that would clear the order can still be properly
    executed by defining a new domain configuration rather than using the
    helper APIs.
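
As a sketch of the alternative the commit message refers to (standard virsh commands; the file path is arbitrary): an order-sensitive layout can still be set up by dumping, hand-editing, and redefining the whole domain XML, where the per-vCPU order attributes are written explicitly:

# virsh dumpxml --inactive r7 > /tmp/r7.xml
# ... edit /tmp/r7.xml: adjust <vcpu current=.../> and the per-vCPU order attributes ...
# virsh define /tmp/r7.xml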

Comment 5 Jingjing Shao 2016-12-26 09:48:54 UTC
I tried to verify this bug, but got the result below: cold plugging vCPUs still leaves the guest with broken settings.


# rpm -q qemu-kvm-rhev
qemu-kvm-rhev-2.6.0-29.el7.x86_64

# rpm -q libvirt
libvirt-2.5.0-1.el7.x86_64



# virsh dumpxml rhel7.3
...
  <vcpu placement='auto' current='3'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>


# virsh setvcpus rhel7.3 5 --config

# virsh dumpxml rhel7.3
...
  <vcpu placement='auto' current='5'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='yes' hotpluggable='no'/>
    <vcpu id='4' enabled='yes' hotpluggable='no'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service


# virsh list --all
 Id    Name                           State
----------------------------------------------------


# virsh edit r7.3
error: failed to get domain 'r7.3'
error: Domain not found: no domain with matching name 'r7.3'

Comment 6 Peter Krempa 2017-01-10 09:47:30 UTC
The last corner case is resolved by:

commit a946ea1a334f601bfa0ad4402113544546948de1
Author: Peter Krempa <pkrempa>
Date:   Mon Jan 9 13:50:26 2017 +0100

    qemu: setvcpus: Properly coldplug vcpus when hotpluggable vcpus are present
    
    When coldplugging vcpus to a VM that already has a few hotpluggable
    vcpus the code might generate invalid configuration as
    non-hotpluggable cpus need to be clustered starting from vcpu 0.
    
    This fix forces the added vcpus to be hotpluggable in such case.
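
To illustrate the clustering rule the commit describes (a sketch derived from the XML in comment 5): non-hotpluggable vCPUs must form one contiguous block starting at vcpu 0, so a result like

    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='yes' hotpluggable='no'/>   <-- invalid: non-hotpluggable vCPU after hotpluggable ones

cannot be represented, while forcing the coldplugged vCPU to be hotpluggable keeps the block intact:

    <vcpu id='3' enabled='yes' hotpluggable='yes'/>  <-- forced hotpluggable by the fix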

Comment 7 Jingjing Shao 2017-02-10 06:56:22 UTC
Verified this bug as below: setvcpus clears the order attributes and no longer changes the hotpluggable configuration.


# rpm -q libvirt
libvirt-3.0.0-1.el7.x86_64

(1)# virsh dumpxml vm1
...
  <vcpu placement='auto' current='3'>12</vcpu>
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes' order='2'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes' order='3'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>

(2)# virsh setvcpus vm1 5 --config

# virsh dumpxml vm1
...
  <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='yes' hotpluggable='yes'/>
    <vcpu id='4' enabled='yes' hotpluggable='yes'/>
    <vcpu id='5' enabled='no'  hotpluggable='yes'/>
    <vcpu id='6' enabled='no'  hotpluggable='yes'/>
    <vcpu id='7' enabled='no'  hotpluggable='yes'/>
    <vcpu id='8' enabled='no'  hotpluggable='yes'/>
    <vcpu id='9' enabled='no'  hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     vm1                            shut off


(3) Restore the configuration from step 1, then cold unplug vCPUs:
# virsh setvcpus vm1  2 --config

# virsh dumpxml vm1
 <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes'/>
    <vcpu id='2' enabled='no' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>



# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     vm1                            shut off


(4) Restore the configuration from step 1, then set the vCPU count to the current value (3):
# virsh setvcpus vm1 3 --config

# virsh dumpxml vm1
...
 <vcpus>
    <vcpu id='0' enabled='yes' hotpluggable='no'/>
    <vcpu id='1' enabled='yes' hotpluggable='yes'/>
    <vcpu id='2' enabled='yes' hotpluggable='yes'/>
    <vcpu id='3' enabled='no' hotpluggable='yes'/>
    <vcpu id='4' enabled='no' hotpluggable='yes'/>
    <vcpu id='5' enabled='no' hotpluggable='yes'/>
    <vcpu id='6' enabled='no' hotpluggable='yes'/>
    <vcpu id='7' enabled='no' hotpluggable='yes'/>
    <vcpu id='8' enabled='no' hotpluggable='yes'/>
    <vcpu id='9' enabled='no' hotpluggable='yes'/>
    <vcpu id='10' enabled='no' hotpluggable='yes'/>
    <vcpu id='11' enabled='no' hotpluggable='yes'/>
  </vcpus>

# service libvirtd restart
Redirecting to /bin/systemctl restart  libvirtd.service

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     vm1                            shut off

Comment 8 errata-xmlrpc 2017-08-01 17:14:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846
