Bug 1088703 - libvirt loses track of hotplugged vcpus after daemon restart
Summary: libvirt loses track of hotplugged vcpus after daemon restart
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.6
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Ján Tomko
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 1097677
 
Reported: 2014-04-17 03:16 UTC by Xuesong Zhang
Modified: 2016-04-26 16:33 UTC
CC List: 6 users

Fixed In Version: libvirt-0.10.2-34.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1097677 (view as bug list)
Environment:
Last Closed: 2014-10-14 04:21:24 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System: Red Hat Product Errata
ID: RHBA-2014:1374
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: libvirt bug fix and enhancement update
Last Updated: 2014-10-14 08:11:54 UTC

Description Xuesong Zhang 2014-04-17 03:16:51 UTC
Description of problem:
the hot-plugged vCPU disappears from the hypervisor after restarting the libvirtd service

Version-Release number of selected component (if applicable):
libvirt-0.10.2-32.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.423.el6.x86_64
kernel-2.6.32-457.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a shut-off guest and configure the channel for the QEMU guest agent:
......
  <vcpu placement='static' current='2'>4</vcpu>
......
<channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/rhel6.5.agent'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
......

2. start the guest

3. Check the vCPU info: the current live count reported by "vcpucount" is 2, and "vcpuinfo" lists 2 vCPUs.
# virsh vcpucount rhel6.5
maximum      config         4
maximum      live           4
current      config         2
current      live           2

# virsh vcpuinfo rhel6.5
VCPU:           0
CPU:            2
State:          running
CPU time:       83.1s
CPU Affinity:   yyyy

VCPU:           1
CPU:            0
State:          running
CPU time:       60.4s
CPU Affinity:   yyyy


4. After the guest has started, start the qemu-ga service and check the vCPU count in the guest.
In guest:
# service qemu-ga status
qemu-ga (pid 1550) is running...
# cat /proc/cpuinfo|grep processor|wc -l
2

5. On the host, hot-plug a vCPU: the current live count reported by "vcpucount" becomes 3, and "vcpuinfo" lists 3 vCPUs.
# virsh setvcpus rhel6.5 3

# virsh vcpucount rhel6.5
maximum      config         4
maximum      live           4
current      config         2
current      live           3

# virsh vcpuinfo rhel6.5
VCPU:           0
CPU:            2
State:          running
CPU time:       83.1s
CPU Affinity:   yyyy

VCPU:           1
CPU:            1
State:          running
CPU time:       60.4s
CPU Affinity:   yyyy

VCPU:           2
CPU:            3
State:          running
CPU time:       30.6s
CPU Affinity:   yyyy


6. Check the vCPU count in the guest:
# cat /proc/cpuinfo|grep processor|wc -l
3

7. Restart the libvirtd service:
# service libvirtd restart
Stopping libvirtd daemon:                                  [  OK  ]
Starting libvirtd daemon:                                  [  OK  ]

8. Check the vCPU info on the host: the current live count reported by "vcpucount" reverts to the original value, and "vcpuinfo" also shows only the original 2 vCPUs. The hot-plugged vCPU has disappeared from the hypervisor.
# virsh vcpucount rhel6.5
maximum      config         4
maximum      live           4
current      config         2
current      live           2

# virsh vcpuinfo rhel6.5
VCPU:           0
CPU:            2
State:          running
CPU time:       83.1s
CPU Affinity:   yyyy

VCPU:           1
CPU:            0
State:          running
CPU time:       60.4s
CPU Affinity:   yyyy


9. Check the vCPU count in the guest; it is the same as before restarting libvirtd.
# cat /proc/cpuinfo|grep processor|wc -l
3



Actual results:
As in step 8: after restarting libvirtd, the hot-plugged vCPU is gone from the hypervisor.

Expected results:
The hot-plugged vCPU should still be present after restarting the libvirtd service.
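For clarity, a sketch of what "vcpucount" would be expected to report after the libvirtd restart, based on the values from step 5 (illustrative, not captured output):

# virsh vcpucount rhel6.5
maximum      config         4
maximum      live           4
current      config         2
current      live           3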

Additional info:
Tested some other features across a libvirtd restart, such as PCI hot-plug.
After restarting the libvirtd service, the hot-plugged PCI device is still attached to the running guest and does not disappear.

Comment 1 Ján Tomko 2014-04-22 14:06:46 UTC
The cpu hot-plug can be made persistent by calling 'setvcpus --live --config'

It's not a bug that 'virsh setvcpus' without '--config' doesn't change the persistent config.

attach-device is the odd one here (and it has been fixed upstream to only change the live domain with --live).
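For reference, a minimal sketch of the flag combinations discussed above (domain name reused from the report; flag semantics as documented for "virsh setvcpus", not re-verified on this build):

# virsh setvcpus rhel6.5 3 --live            # change only the running guest
# virsh setvcpus rhel6.5 3 --config          # change only the persistent XML, applied on next start
# virsh setvcpus rhel6.5 3 --live --config   # change both, so the new count is kept in the config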

Comment 2 Xuesong Zhang 2014-04-23 06:51:04 UTC
Hi Ján,

I tried some other scenarios, and there still seem to be some problems.

1. As for the bug description: if this is not a bug, then the current live vCPU count in step 8 does not match what the guest reports in step 9. That is strange, because the output of vcpucount then does not show the accurate vCPU number. If this is expected, could you please explain in detail why they can differ? Thanks.

2. As for Scenario 1 in this comment: I tried "--live --config" as you suggested, and as you can see in Scenario 1 step 6, the current live vCPU count is still 2 after the restart; it does not stay at 3 even though I used "--config". Only the current config value is kept after restarting libvirtd.

3. As for Scenario 2: if the vCPU count and memory are set at the same time and libvirtd is then restarted, you can see in step 5 that the vCPU count does not change back. This result conflicts with the bug description and Scenario 1.

In my opinion, the result of Scenario 2 is the correct one. As you can see, the memory info is also kept the same as before restarting libvirtd.

If you still think the behavior in the bug description and Scenario 1 is expected, then Scenario 2 is a bug...
So I am reopening this bug, since one of them must be a bug in the end: either the bug description or Scenario 2.



Scenario 1: hot-plug vCPUs with the "--live --config" flags, then restart libvirtd and check the vcpucount info.
1. Check the original vCPU count info.
# virsh dumpxml rhel6.5|grep cpu
  <vcpu placement='static' current='2'>4</vcpu>
# virsh vcpucount rhel6.5
maximum      config         4
maximum      live           4
current      config         2
current      live           2

2. hot-plug vcpu with flag "--live --config"
# virsh setvcpus rhel6.5 3 --live --config

3. Check the vcpucount info; the current config and live vCPU counts have changed as expected.
# virsh vcpucount rhel6.5
maximum      config         4
maximum      live           4
current      config         3   ----------------------------the config vcpu number is changed as expected.
current      live           3   ----------------------------the live vcpu number is changed as expected.

4. check in the guest, the vcpu number is changed as expected.
# cat /proc/cpuinfo |grep processor |wc -l
3

5. restart the libvirtd service
# service libvirtd restart
Stopping libvirtd daemon:                                  [  OK  ]
Starting libvirtd daemon:                                  [  OK  ]

6. Check the vcpucount info again: the current config vCPU count is kept the same as in step 3, as expected, but the current live vCPU count has changed back to 2.
[root@xuzhangtest2 ~]# virsh vcpucount rhel6.5
maximum      config         4
maximum      live           4
current      config         3   ----------------------------the config vcpu number is kept same with step 3 as expected.
current      live           2   ----------------------------the live vcpu number is changed back to 2.

7. Log in to the guest and check the vCPU count; it is still 3.
# cat /proc/cpuinfo |grep processor |wc -l
3




Scenario 2: set the vCPU count and memory, then restart the libvirtd service; the vCPU count is not lost.
1. check the cpu and mem info of one running guest.
# virsh vcpucount  rhel6.5
maximum      config         4
maximum      live           4
current      config         2
current      live           2

# virsh dommemstat rhel6.5
actual 1048576
rss 29948


2. set the vcpu and mem.
# virsh setvcpus rhel6.5 3

# virsh setmem rhel6.5 1000000 

3. Check the vcpucount and dommemstat info; both have changed as expected.
# virsh vcpucount rhel6.5
maximum      config         4
maximum      live           4
current      config         2
current      live           3

# virsh dommemstat rhel6.5
actual 1000000
rss 387892

4. restart the libvirtd service
# service libvirtd restart
Stopping libvirtd daemon:                                  [  OK  ]
Starting libvirtd daemon:                                  [  OK  ]

5. Check vcpucount and dommemstat again; the vCPU and memory info are the same as in step 3.
# virsh dommemstat rhel6.5
actual 1000000
rss 385368

# virsh vcpucount rhel6.5
maximum      config         4
maximum      live           4
current      config         2
current      live           3

Comment 3 Xuesong Zhang 2014-04-23 07:08:38 UTC
(In reply to Zhang Xuesong from comment #2)
> Hi Ján,
>
> I tried some other scenarios, and there still seem to be some problems.
>
> 1. As for the bug description: if this is not a bug, then the current live
> vCPU count in step 8 does not match what the guest reports in step 9. That
> is strange, because the output of vcpucount then does not show the accurate
> vCPU number. If this is expected, could you please explain in detail why
> they can differ? Thanks.

As you know, while the libvirtd service is being restarted the guest keeps running and is not rebooted. So IMO it is better to keep the guest settings stable, not changed, across a libvirtd restart.


Comment 4 Ján Tomko 2014-04-23 12:11:01 UTC
Right, my comment 1 is unrelated to this bug report.

Patch sent upstream:
https://www.redhat.com/archives/libvir-list/2014-April/msg00872.html

Comment 5 Ján Tomko 2014-04-23 12:39:33 UTC
Now pushed upstream:
commit b396e602c97ab69c86dfd84d2f9e48b4c04a48a4
Author:     Ján Tomko <jtomko>
AuthorDate: 2014-04-23 12:43:24 +0200
Commit:     Ján Tomko <jtomko>
CommitDate: 2014-04-23 14:24:21 +0200

    Save domain status after cpu hotplug
    
    The live change of vcpus was not reflected in the domain status
    xml and it got lost during libvirtd restart.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1088703

git describe: v1.2.3-149-gb396e60

Downstream patch:
http://post-office.corp.redhat.com/archives/rhvirt-patches/2014-April/msg00477.html
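As a rough way to verify the effect of the fix, the hot-plugged count should now be reflected in the domain status XML that libvirtd re-reads when it restarts (the path below is the usual status-file location on RHEL 6; this is a sketch, not output captured for this bug):

# grep '<vcpu' /var/run/libvirt/qemu/rhel6.5.xml
  <vcpu placement='static' current='3'>4</vcpu>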

Comment 7 Jincheng Miao 2014-05-04 07:43:50 UTC
The latest libvirt-0.10.2-34.el6 restores the vCPU info correctly after libvirtd is restarted.

# virsh vcpucount r6
maximum      config         4
maximum      live           4
current      config         2
current      live           2

# virsh vcpucount r6 --guest
2

# virsh setvcpus r6 3

# virsh vcpucount r6 --guest
3

# virsh vcpuinfo r6
VCPU:           0
CPU:            5
State:          running
CPU time:       9.5s
CPU Affinity:   yyyyyyyy

VCPU:           1
CPU:            3
State:          running
CPU time:       3.6s
CPU Affinity:   yyyyyyyy

VCPU:           2
CPU:            7
State:          running
CPU time:       0.0s
CPU Affinity:   yyyyyyyy

# service libvirtd restart
Stopping libvirtd daemon:                                  [  OK  ]
Starting libvirtd daemon:                                  [  OK  ]

# virsh vcpucount r6 
maximum      config         4
maximum      live           4
current      config         2
current      live           3

# virsh vcpucount r6 --guest
3

# virsh vcpuinfo r6
VCPU:           0
CPU:            5
State:          running
CPU time:       9.6s
CPU Affinity:   yyyyyyyy

VCPU:           1
CPU:            4
State:          running
CPU time:       3.9s
CPU Affinity:   yyyyyyyy

VCPU:           2
CPU:            3
State:          running
CPU time:       0.1s
CPU Affinity:   yyyyyyyy

# virsh vcpucount r6
maximum      config         4
maximum      live           4
current      config         2
current      live           3

Comment 9 errata-xmlrpc 2014-10-14 04:21:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1374.html

