Bug 1428893
| Summary: | domain definition is not updated after a delayed vCPU unplug by the guest | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Peter Krempa <pkrempa> |
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| Status: | CLOSED ERRATA | QA Contact: | Luyao Huang <lhuang> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7.3 | CC: | dyuan, libvirt-maint, pkrempa, rbalakri, sathnaga, xuzhang, yalzhang |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-3.2.0-1.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1427801 | Environment: | |
| Last Closed: | 2017-08-01 17:24:15 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1427801 | | |
| Bug Blocks: | | | |
Description
Peter Krempa
2017-03-03 15:00:40 UTC
Upstream fixed by:

```
commit 8af68ea47830b8d32907dc50c6ca4869d14bb862
Author: Peter Krempa <pkrempa>
Date:   Fri Mar 3 16:04:57 2017 +0100

    qemu: hotplug: Reset device removal waiting code after vCPU unplug

    If the delivery of the DEVICE_DELETED event for the vCPU being deleted
    would time out, the code would not call 'qemuDomainResetDeviceRemoval'.
    Since the waiting thread did not unregister itself prior to stopping
    the waiting, the monitor code would try to wake it up instead of
    dispatching it to the event worker. As a result the unplug process
    would not be completed and the definition would not be updated.
```

Verified with libvirt-3.2.0-4.el7.x86_64:

1. Prepare a guest that has hotpluggable vCPUs:

```xml
<vcpu placement='static'>10</vcpu>
<vcpus>
  <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
  <vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
  <vcpu id='2' enabled='yes' hotpluggable='no' order='3'/>
  <vcpu id='3' enabled='yes' hotpluggable='no' order='4'/>
  <vcpu id='4' enabled='yes' hotpluggable='no' order='5'/>
  <vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/>
  <vcpu id='6' enabled='yes' hotpluggable='yes' order='7'/>
  <vcpu id='7' enabled='yes' hotpluggable='yes' order='8'/>
  <vcpu id='8' enabled='yes' hotpluggable='yes' order='9'/>
  <vcpu id='9' enabled='yes' hotpluggable='yes' order='10'/>
</vcpus>
```

2. Log in to the guest and put a heavy load on the guest CPUs (a minimal stand-in for such a workload is sketched at the end of this report):

```
(in guest)
# ./memcpueater
```

3. Hot-unplug vCPUs while the guest is busy; the request times out:

```
# virsh setvcpus r7 5
error: operation failed: vcpu unplug request timed out
```

4. Recheck the guest XML; the timed-out request leaves all ten vCPUs enabled:

```
# virsh dumpxml r7
<vcpus>
  <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
  <vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
  <vcpu id='2' enabled='yes' hotpluggable='no' order='3'/>
  <vcpu id='3' enabled='yes' hotpluggable='no' order='4'/>
  <vcpu id='4' enabled='yes' hotpluggable='no' order='5'/>
  <vcpu id='5' enabled='yes' hotpluggable='yes' order='6'/>
  <vcpu id='6' enabled='yes' hotpluggable='yes' order='7'/>
  <vcpu id='7' enabled='yes' hotpluggable='yes' order='8'/>
  <vcpu id='8' enabled='yes' hotpluggable='yes' order='9'/>
  <vcpu id='9' enabled='yes' hotpluggable='yes' order='10'/>
</vcpus>
```

5. Stop the CPU eater in the guest and hot-unplug again:

```
# virsh setvcpus r7 5
```

6. Recheck the guest XML; the definition is now updated and shows the unplugged vCPUs as disabled:

```
# virsh dumpxml r7
...
<vcpu placement='static' current='5'>10</vcpu>
<vcpus>
  <vcpu id='0' enabled='yes' hotpluggable='no' order='1'/>
  <vcpu id='1' enabled='yes' hotpluggable='no' order='2'/>
  <vcpu id='2' enabled='yes' hotpluggable='no' order='3'/>
  <vcpu id='3' enabled='yes' hotpluggable='no' order='4'/>
  <vcpu id='4' enabled='yes' hotpluggable='no' order='5'/>
  <vcpu id='5' enabled='no' hotpluggable='yes'/>
  <vcpu id='6' enabled='no' hotpluggable='yes'/>
  <vcpu id='7' enabled='no' hotpluggable='yes'/>
  <vcpu id='8' enabled='no' hotpluggable='yes'/>
  <vcpu id='9' enabled='no' hotpluggable='yes'/>
</vcpus>
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846
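The mechanism in the upstream commit message can be illustrated with a simplified sketch. This is not libvirt's actual code: the type `DeviceRemoval` and the functions `wait_for_device_removal`, `on_device_deleted`, and `reset_device_removal` are hypothetical stand-ins for the pattern the commit describes, where the waiter must unregister itself (the "reset") even when the wait times out, so that a later DEVICE_DELETED event is dispatched to the event worker rather than used to wake a waiter that is no longer there.

```c
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    char alias[64];   /* alias of the device being waited for; "" = no waiter */
    bool deleted;     /* set when DEVICE_DELETED arrives for that alias */
} DeviceRemoval;

/* Hypothetical stand-in for qemuDomainResetDeviceRemoval(): unregister the
 * waiter so that future events are handed to the event worker. */
void reset_device_removal(DeviceRemoval *r)
{
    r->alias[0] = '\0';
    r->deleted = false;
}

/* Unplug path: register as waiter, then wait for the event with a timeout.
 * Returns true if the removal completed within the timeout. */
bool wait_for_device_removal(DeviceRemoval *r, const char *alias,
                             unsigned timeout_sec)
{
    struct timespec deadline;
    bool done;

    pthread_mutex_lock(&r->lock);
    snprintf(r->alias, sizeof(r->alias), "%s", alias);  /* register waiter */

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec += timeout_sec;

    while (!r->deleted) {
        if (pthread_cond_timedwait(&r->cond, &r->lock, &deadline) != 0)
            break;                                      /* ETIMEDOUT */
    }

    done = r->deleted;

    /* The bug: on timeout this reset was skipped, leaving a stale
     * registration behind. The fix is to always reset here. */
    reset_device_removal(r);

    pthread_mutex_unlock(&r->lock);
    return done;
}

/* Monitor path: a DEVICE_DELETED event arrived for 'alias'. */
void on_device_deleted(DeviceRemoval *r, const char *alias)
{
    pthread_mutex_lock(&r->lock);
    if (strcmp(r->alias, alias) == 0) {
        /* A registered waiter exists: mark completion and wake it. With a
         * stale registration this wakes nobody and the event is lost. */
        r->deleted = true;
        pthread_cond_signal(&r->cond);
    } else {
        /* No waiter: dispatch to the event worker, which finishes the
         * unplug and updates the domain definition (elided). */
    }
    pthread_mutex_unlock(&r->lock);
}
```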
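The `memcpueater` program run in step 2 is not included in the report; any tool that keeps every guest CPU saturated will reproduce the delayed unplug. A minimal, hypothetical stand-in in C (build with `gcc -pthread`):

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUF_SIZE (64 * 1024 * 1024)          /* 64 MiB per worker */

/* Worker: copy a buffer in a tight loop, burning CPU and memory bandwidth. */
static void *eater(void *arg)
{
    (void)arg;
    char *src = malloc(BUF_SIZE);
    char *dst = malloc(BUF_SIZE);
    if (!src || !dst)
        return NULL;
    memset(src, 0xA5, BUF_SIZE);
    for (;;)
        memcpy(dst, src, BUF_SIZE);
    return NULL;
}

int main(void)
{
    /* One worker per online CPU so the guest stays too busy to acknowledge
     * the unplug request promptly. */
    long n = sysconf(_SC_NPROCESSORS_ONLN);
    for (long i = 0; i < n; i++) {
        pthread_t tid;
        pthread_create(&tid, NULL, eater, NULL);
    }
    pause();                                 /* run until killed */
    return 0;
}
```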