Bug 1365779
| Summary: | libvirt shows wrong vcpupin/emulatorpin configuration on a guest with an automatic nodeset | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Luyao Huang <lhuang> |
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| Status: | CLOSED ERRATA | QA Contact: | chhu |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.3 | CC: | dyuan, jdenemar, jishao, pkrempa, rbalakri, xuzhang |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-2.5.0-1.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1445325 (view as bug list) | Environment: | |
| Last Closed: | 2017-08-01 17:11:42 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1445325 | | |
Description
Luyao Huang
2016-08-10 08:16:50 UTC
Fixed upstream:

commit 006a532cc082baa28191d66d378e7e946b787e85
Author: Peter Krempa <pkrempa>
Date:   Wed Sep 14 07:37:16 2016 +0200

    qemu: driver: Don't return automatic NUMA emulator pinning data for persistentDef

    Calling virDomainGetEmulatorPinInfo on a live VM with automatic NUMA
    pinning and VIR_DOMAIN_AFFECT_CONFIG would return the automatic
    pinning data in some cases, which is bogus. Use the autoCpuset
    property only when called on a live definition.

    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1365779

commit 552892c59d887b7e24c18b20b208141913fa99d4
Author: Peter Krempa <pkrempa>
Date:   Wed Sep 14 07:37:16 2016 +0200

    qemu: driver: Don't return automatic NUMA vCPU pinning data for persistentDef

    Calling virDomainGetVcpuPinInfo on a live VM with automatic NUMA
    pinning and VIR_DOMAIN_AFFECT_CONFIG would return the automatic
    pinning data in some cases, which is bogus. Use the autoCpuset
    property only when called on a live definition.

    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1365779

Hi, Peter

With memory mode='strict' placement='auto' set in the numatune element, virsh vcpupin/emulatorpin --config now return the correct values. However, the output of virsh numatune <domain> --config is still wrong: numa_nodeset should be empty. More details below.

Tried to verify on packages:
libvirt-3.2.0-2.el7.x86_64
qemu-kvm-rhev-2.8.0-6.el7.x86_64

Test steps:

1. Prepare a NUMA machine with 4 NUMA nodes.

2. Prepare an inactive guest:

# virsh dumpxml vm1 --inactive | grep vcpu -A 3
  <vcpu placement='auto' current='6'>10</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>

3. Start the guest:

# virsh start vm1
Domain vm1 started

4. Check the numatune/vcpupin/emulatorpin configuration via virsh:

# virsh numatune vm1
numa_mode      : strict
numa_nodeset   : 2-3

# virsh numatune vm1 --config
numa_mode      : strict
numa_nodeset   : 2-3

# virsh emulatorpin vm1
emulator: CPU Affinity
----------------------------------
       *: 1,3,5,7,9,11,13,15,17,19,21,23

# virsh emulatorpin vm1 --config
emulator: CPU Affinity
----------------------------------
       *: 0-23

# virsh vcpupin vm1
VCPU: CPU Affinity
----------------------------------
   0: 1,3,5,7,9,11,13,15,17,19,21,23
   1: 1,3,5,7,9,11,13,15,17,19,21,23
   2: 1,3,5,7,9,11,13,15,17,19,21,23
   3: 1,3,5,7,9,11,13,15,17,19,21,23
   4: 1,3,5,7,9,11,13,15,17,19,21,23
   5: 1,3,5,7,9,11,13,15,17,19,21,23
   6: 1,3,5,7,9,11,13,15,17,19,21,23
   7: 1,3,5,7,9,11,13,15,17,19,21,23
   8: 1,3,5,7,9,11,13,15,17,19,21,23
   9: 1,3,5,7,9,11,13,15,17,19,21,23

# virsh vcpupin vm1 --config
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 0-23
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 0-23

Actual results:
libvirt shows the wrong numatune configuration for a guest with an automatic nodeset.

Expected results:

# virsh numatune vm1 --config
numa_mode      : strict
numa_nodeset   :
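The fixed live-vs-config behavior can also be checked directly through the libvirt-python bindings instead of virsh. Below is a minimal sketch, assuming a running guest named vm1 with <vcpu placement='auto'> on the local QEMU driver; the domain name and connection URI are illustrative, not taken from the report:

```python
#!/usr/bin/env python
# Minimal sketch: compare live vs. persistent-config pinning data for a
# guest that uses automatic NUMA placement. The domain name "vm1" and
# the qemu:///system URI are illustrative assumptions.
import libvirt


def mask_to_cpulist(mask):
    """Render a tuple of per-host-CPU booleans as a comma-separated list."""
    return ",".join(str(cpu) for cpu, pinned in enumerate(mask) if pinned)


conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("vm1")

for label, flags in (("live", libvirt.VIR_DOMAIN_AFFECT_LIVE),
                     ("config", libvirt.VIR_DOMAIN_AFFECT_CONFIG)):
    # These wrap virDomainGetEmulatorPinInfo and virDomainGetVcpuPinInfo,
    # the two APIs patched by the commits above.
    emulator_mask = dom.emulatorPinInfo(flags)
    vcpu_masks = dom.vcpuPinInfo(flags)
    print("[%s] emulator: %s" % (label, mask_to_cpulist(emulator_mask)))
    for vcpu, mask in enumerate(vcpu_masks):
        print("[%s] vcpu %d: %s" % (label, vcpu, mask_to_cpulist(mask)))

conn.close()
```

With the fix applied, the config query should report the unrestricted mask (0-23 on the test host above), while only the live query reflects the automatic placement; before the fix, both could return the automatic mask.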
5. Destroy the guest, then check the numatune/vcpupin/emulatorpin configuration via virsh again:

# virsh destroy vm1
Domain vm1 destroyed

# virsh dumpxml vm1 --inactive | grep vcpu -A 5
  <vcpu placement='auto' current='6'>10</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.4.0'>hvm</type>

# virsh numatune vm1 --config
numa_mode      : strict
numa_nodeset   : 2-3

# virsh numatune vm1
numa_mode      : strict
numa_nodeset   : 2-3

# virsh vcpupin vm1
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 0-23
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 0-23

# virsh vcpupin vm1 --config
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 0-23
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 0-23

# virsh emulatorpin vm1 --config
emulator: CPU Affinity
----------------------------------
       *: 0-23

# virsh emulatorpin vm1
emulator: CPU Affinity
----------------------------------
       *: 0-23

Actual results:
libvirt shows the wrong numatune configuration for a guest with an automatic nodeset.

Expected results:

# virsh numatune vm1 --config
numa_mode      : strict
numa_nodeset   :

I cloned this as https://bugzilla.redhat.com/show_bug.cgi?id=1445325 to track the issue.

According to comments 3, 4, and 5, the remaining issue will be tracked in bug 1445325, so the bug status is set to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846
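The numatune --config discrepancy left open in bug 1445325 can be probed the same way through the bindings. A minimal sketch, again with an illustrative domain name, assuming a shut-off guest defined with <memory mode='strict' placement='auto'/>:

```python
# Minimal sketch: inspect virDomainGetNumaParameters for the persistent
# config of a shut-off guest. The domain name "vm1" is an illustrative
# assumption, not taken from the report.
import libvirt

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("vm1")

# numaParameters() returns a dict with 'numa_mode' (a
# virDomainNumatuneMemMode value) and 'numa_nodeset' keys. Per the
# expected results above, numa_nodeset should come back empty for a
# placement='auto' guest; bug 1445325 tracks the case where the
# automatically chosen nodes leak into the persistent config instead.
print(dom.numaParameters(libvirt.VIR_DOMAIN_AFFECT_CONFIG))

conn.close()
```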