Bug 1081462 - [Intel 6.5.z Bug] virsh setvcpus can not setup correct vcpu number - rhev clone
Summary: [Intel 6.5.z Bug] virsh setvcpus can not setup correct vcpu number - rhev clone
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Virtualization Maintenance
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1017858
Blocks:
 
Reported: 2014-03-27 11:53 UTC by Jan Kurik
Modified: 2018-12-05 17:53 UTC
44 users

Fixed In Version: qemu-kvm-rhev-0.12.1.2-2.415.el6_5.7
Doc Type: Bug Fix
Doc Text:
When hot unplugging a virtual CPU (vCPU) from a guest using libvirt, the current Red Hat Enterprise Linux QEMU implementation does not remove the corresponding vCPU thread. Because of this, libvirt previously did not correctly perceive the vCPU count after a vCPU had been hot unplugged. Consequently, an error occurred in libvirt, which prevented increasing the vCPU count after the hot unplug. With this update, information from QEMU is used to filter out the inactive threads of disabled vCPUs, and the internal checks now pass and allow the hot plug.
Clone Of:
Environment:
Last Closed: 2014-04-03 14:01:01 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2014:0357 0 normal SHIPPED_LIVE qemu-kvm-rhev, qemu-kvm-rhev-tools, qemu-img-rhev bug fix update 2014-04-03 18:00:32 UTC

Description Jan Kurik 2014-03-27 11:53:07 UTC
This bug has been copied from bug #1017858 and has been proposed
to be backported to 6.5 z-stream (EUS).

Comment 5 Qunfang Zhang 2014-03-28 07:02:05 UTC
As this bug was already reproduced in bug 1017858#c12, I only verified it this time, on the latest qemu-kvm-rhev-0.12.1.2-2.415.el6_5.7.

Host:
kernel-2.6.32-431.11.2.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.415.el6_5.7.x86_64
seabios-0.6.1.2-28.el6.x86_64

Guest:
kernel-2.6.32-431.el6.x86_64

Steps:
1. Boot up a guest with virt-manager with CPU maximum allocation 4 and current allocation is 2. 

[root@dell-per415-03 images]# ps ax | grep kvm
 1151 ?        S      0:00 [kvm-irqfd-clean]
20824 ?        Sl     0:44 /usr/libexec/qemu-kvm -name rhel6 -S -M rhel6.5.0 -enable-kvm -m 2048 -realtime mlock=off -smp 2,maxcpus=4,sockets=4,cores=1,threads=1 -uuid 8299278b-f924-8c5b-0a2b-255abbdd356b -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhel6.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/RHEL-Server-6.5-64-virtio.qcow2,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 -device virtio-net-pci,__com_redhat_macvtap_compat=on,netdev=hostnet0,id=net0,mac=52:54:00:0b:94:a6,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -vga cirrus -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
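For reference, the maximum/current vCPU allocation from step 1 corresponds to the following element in the libvirt domain XML (a fragment matching the `-smp 2,maxcpus=4` option in the command line above):

```xml
<!-- 4 vCPUs maximum, 2 plugged in at boot; 'virsh setvcpus --live' changes the current count -->
<vcpu current='2'>4</vcpu>
```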

2. 
virsh # list 
 Id    Name                           State
----------------------------------------------------
 3     rhel6                          running

virsh # 
virsh # qemu-monitor-command rhel6 '{ "execute" : "query-cpus" }'
{"return":[{"enabled-in-acpi":true,"current":true,"CPU":0,"pc":-2130449717,"halted":true,"thread_id":20830},{"enabled-in-acpi":true,"current":false,"CPU":1,"pc":-2130449717,"halted":true,"thread_id":20831}],"id":"libvirt-8"}

3. Plug another cpu and check it.

virsh # setvcpus --live rhel6 3

virsh # 
virsh # qemu-monitor-command rhel6 '{ "execute" : "query-cpus" }'
{"return":[{"enabled-in-acpi":true,"current":true,"CPU":0,"pc":-2130449717,"halted":true,"thread_id":20830},{"enabled-in-acpi":true,"current":false,"CPU":1,"pc":-2130449717,"halted":true,"thread_id":20831},{"enabled-in-acpi":true,"current":false,"CPU":2,"pc":-2130449717,"halted":true,"thread_id":21049}],"id":"libvirt-12"}

The "enabled-in-acpi" field is "true" for all 3 vCPUs.

4. Plug the 4th cpu again and check it.

virsh # setvcpus --live rhel6 4

virsh # 
virsh # 
virsh # qemu-monitor-command rhel6 '{ "execute" : "query-cpus" }'
{"return":[{"enabled-in-acpi":true,"current":true,"CPU":0,"pc":-2130449717,"halted":true,"thread_id":20830},{"enabled-in-acpi":true,"current":false,"CPU":1,"pc":-2130449717,"halted":true,"thread_id":20831},{"enabled-in-acpi":true,"current":false,"CPU":2,"pc":-2130449717,"halted":true,"thread_id":21049},{"enabled-in-acpi":true,"current":false,"CPU":3,"pc":-2130449717,"halted":true,"thread_id":21097}],"id":"libvirt-16"}

The "enabled-in-acpi" field is "true" for all 4 vCPUs.

5. Hot unplug 2 vcpu and check it.

virsh # setvcpus --live rhel6 2
error: Operation not supported: qemu didn't unplug the vCPUs properly

virsh # 
virsh # qemu-monitor-command rhel6 '{ "execute" : "query-cpus" }'
{"return":[{"enabled-in-acpi":true,"current":true,"CPU":0,"pc":-2130449717,"halted":true,"thread_id":20830},{"enabled-in-acpi":true,"current":false,"CPU":1,"pc":-2130449717,"halted":true,"thread_id":20831},{"enabled-in-acpi":false,"current":false,"CPU":2,"pc":-2130505407,"halted":true,"thread_id":21049},{"enabled-in-acpi":false,"current":false,"CPU":3,"pc":-2130505407,"halted":true,"thread_id":21097}],"id":"libvirt-22"}

Now the "enabled-in-acpi" field for CPU 2 and CPU 3 is "false".
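The fix described in the Doc Text filters vCPU threads on exactly this field. A minimal sketch of that filtering logic (an illustration only, not the actual libvirt source; the reply is abridged from the output above):

```python
# Sketch: count only vCPUs still enabled in ACPI, ignoring the threads
# QEMU keeps around for hot-unplugged vCPUs. Illustration only, not the
# actual libvirt code. The reply below is abridged from the log above.

def active_vcpus(query_cpus_reply):
    """Return the entries of a QMP 'query-cpus' reply whose vCPU is enabled."""
    return [cpu for cpu in query_cpus_reply["return"]
            if cpu.get("enabled-in-acpi", True)]

reply = {"return": [
    {"CPU": 0, "enabled-in-acpi": True,  "thread_id": 20830},
    {"CPU": 1, "enabled-in-acpi": True,  "thread_id": 20831},
    {"CPU": 2, "enabled-in-acpi": False, "thread_id": 21049},
    {"CPU": 3, "enabled-in-acpi": False, "thread_id": 21097},
]}

print(len(active_vcpus(reply)))  # prints 2: only CPUs 0 and 1 count
```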


Hi, Laszlo

I tested with your steps from bug 1017858#c29; is this the expected result for verifying this bug?

Another question I'd like to confirm with you (correct me if I should ask someone else :)): from the current result, it seems CPU hot unplug already works, so why do we still call it not supported? Maybe there are still some potential issues, so it is not supported completely?


Thanks, 
Qunfang

Comment 6 Laszlo Ersek 2014-03-28 10:40:24 UTC
(In reply to Qunfang Zhang from comment #5)

> 5. Hot unplug 2 vcpu and check it.
> 
> virsh # setvcpus --live rhel6 2
> error: Operation not supported: qemu didn't unplug the vCPUs properly
> 
> virsh # 
> virsh # qemu-monitor-command rhel6 '{ "execute" : "query-cpus" }'
> {"return":[{"enabled-in-acpi":true,"current":true,"CPU":0,"pc":-2130449717,
> "halted":true,"thread_id":20830},{"enabled-in-acpi":true,"current":false,
> "CPU":1,"pc":-2130449717,"halted":true,"thread_id":20831},{"enabled-in-acpi":
> false,"current":false,"CPU":2,"pc":-2130505407,"halted":true,"thread_id":
> 21049},{"enabled-in-acpi":false,"current":false,"CPU":3,"pc":-2130505407,
> "halted":true,"thread_id":21097}],"id":"libvirt-22"}
> 
> Now, the cpu 2 and cpu 3 "enabled-in-acpi" field is "false".
> 
> 
> Hi, Laszlo
> 
> I'm testing with your test steps in bug 1017858#c29, so is this the expected
> result to verify this bug, right? 

Yes, this is the expected result of the "query-cpus" QMP command.

"virsh" reports the error because you didn't include the libvirt fix in your testing, so libvirt doesn't know to look at the "enabled-in-acpi" field. But as far as qemu-kvm is concerned in isolation, the test is successful.

> Another question want to confirm with you, fix me if I should post this
> question to another guy:). From the current result, seems cpu hotunplug
> already works, why we still call it not supported?  Maybe there's still some
> potential issues so it's not supported completely? 

If by "calling it not supported" you mean the error message from virsh, then please see above. For an end-to-end test, you need to upgrade both libvirt (so that it *consumes* the new field) and qemu-kvm (so that it *produces* the new field).

Thanks
Laszlo

Comment 7 Qunfang Zhang 2014-03-31 06:47:27 UTC
Thank you, Laszlo. I tested again with the latest rhel6.5-z libvirt-0.10.2-29.el6_5.7.x86_64 installed. Now, when hot unplugging a vCPU, virsh no longer reports an error.

Test steps:

Same as comment 10.

Host version:
kernel-2.6.32-431.7.1.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.415.el6_5.7.x86_64
libvirt-0.10.2-29.el6_5.7.x86_64

[root@localhost ~]# virsh list 
 Id    Name                           State
----------------------------------------------------
 1     rhel6                          running

[root@localhost ~]# 
[root@localhost ~]# virsh 
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # 
virsh # qemu-monitor-command rhel6 '{ "execute" : "query-cpus" }'
{"return":[{"enabled-in-acpi":true,"current":true,"CPU":0,"pc":-2130449717,"halted":true,"thread_id":2499},{"enabled-in-acpi":true,"current":false,"CPU":1,"pc":-2130449717,"halted":true,"thread_id":2500}],"id":"libvirt-8"}

virsh # 
virsh # 
virsh #  setvcpus --live rhel6 3

virsh # 
virsh # qemu-monitor-command rhel6 '{ "execute" : "query-cpus" }'
{"return":[{"enabled-in-acpi":true,"current":true,"CPU":0,"pc":-2130449717,"halted":true,"thread_id":2499},{"enabled-in-acpi":true,"current":false,"CPU":1,"pc":-2130449717,"halted":true,"thread_id":2500},{"enabled-in-acpi":true,"current":false,"CPU":2,"pc":-2130449717,"halted":true,"thread_id":3205}],"id":"libvirt-12"}

virsh # 
virsh # 
virsh # setvcpus --live rhel6 4

virsh # 
virsh # 
virsh # qemu-monitor-command rhel6 '{ "execute" : "query-cpus" }'
{"return":[{"enabled-in-acpi":true,"current":true,"CPU":0,"pc":-2130449717,"halted":true,"thread_id":2499},{"enabled-in-acpi":true,"current":false,"CPU":1,"pc":-2130449717,"halted":true,"thread_id":2500},{"enabled-in-acpi":true,"current":false,"CPU":2,"pc":-2130449717,"halted":true,"thread_id":3205},{"enabled-in-acpi":true,"current":false,"CPU":3,"pc":-2130449717,"halted":true,"thread_id":3348}],"id":"libvirt-16"}

virsh # 
virsh # 
virsh # setvcpus --live rhel6 2

virsh # 
virsh # 
virsh # qemu-monitor-command rhel6 '{ "execute" : "query-cpus" }'
{"return":[{"enabled-in-acpi":true,"current":true,"CPU":0,"pc":-2130449717,"halted":true,"thread_id":2499},{"enabled-in-acpi":true,"current":false,"CPU":1,"pc":-2130449717,"halted":true,"thread_id":2500},{"enabled-in-acpi":false,"current":false,"CPU":2,"pc":-2130505407,"halted":true,"thread_id":3205},{"enabled-in-acpi":false,"current":false,"CPU":3,"pc":-2130505407,"halted":true,"thread_id":3348}],"id":"libvirt-22"}

virsh #
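For reference, the same check can be scripted by parsing the raw JSON that `qemu-monitor-command` prints (a hypothetical helper snippet; the reply is abridged to the fields relevant here):

```python
import json

# Abridged 'query-cpus' reply string, as printed by
# 'virsh qemu-monitor-command rhel6 ...' after the unplug above.
raw = ('{"return":[{"enabled-in-acpi":true,"CPU":0},'
       '{"enabled-in-acpi":true,"CPU":1},'
       '{"enabled-in-acpi":false,"CPU":2},'
       '{"enabled-in-acpi":false,"CPU":3}],"id":"libvirt-22"}')

unplugged = [c["CPU"] for c in json.loads(raw)["return"]
             if not c["enabled-in-acpi"]]
print(unplugged)  # prints [2, 3]: CPUs 2 and 3 were hot unplugged
```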

Comment 10 errata-xmlrpc 2014-04-03 14:01:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0357.html

