Bug 1365779 - libvirt shows wrong vcpupin/emulatorpin configuration on a guest with an automatic nodeset
Summary: libvirt shows wrong vcpupin/emulatorpin configuration on a guest with an automatic nodeset
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
: ---
Assignee: Peter Krempa
QA Contact: chhu
URL:
Whiteboard:
Depends On:
Blocks: 1445325
 
Reported: 2016-08-10 08:16 UTC by Luyao Huang
Modified: 2017-08-01 23:53 UTC
6 users

Fixed In Version: libvirt-2.5.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1445325 (view as bug list)
Environment:
Last Closed: 2017-08-01 17:11:42 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:1846 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2017-08-01 18:02:50 UTC

Description Luyao Huang 2016-08-10 08:16:50 UTC
Description of problem:
libvirt shows wrong vcpupin/emulatorpin configuration on a guest with an automatic nodeset

Version-Release number of selected component (if applicable):
libvirt-2.0.0-4.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a NUMA machine:
# numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 2 4 6 8 10
node 0 size: 10205 MB
node 0 free: 6045 MB
node 1 cpus: 12 14 16 18 20 22
node 1 size: 8192 MB
node 1 free: 5246 MB
node 2 cpus: 1 3 5 7 9 11
node 2 size: 6144 MB
node 2 free: 3819 MB
node 3 cpus: 13 15 17 19 21 23
node 3 size: 8175 MB
node 3 free: 5799 MB
node distances:
node   0   1   2   3 
  0:  10  20  20  20 
  1:  20  10  20  20 
  2:  20  20  10  20 
  3:  20  20  20  10 
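The node-to-CPU map above is what makes the later output recognizable: the bogus affinity list reported for --config is exactly the union of the CPUs of the auto-selected nodes 0-1. A minimal sketch (hypothetical helper, not libvirt code) that reproduces that list:

```python
# Node -> CPU map taken from the numactl --hardware output above.
node_cpus = {
    0: [0, 2, 4, 6, 8, 10],
    1: [12, 14, 16, 18, 20, 22],
    2: [1, 3, 5, 7, 9, 11],
    3: [13, 15, 17, 19, 21, 23],
}

def cpus_for_nodeset(nodeset):
    """Expand a nodeset string like '0-1' into the CPU list that
    pinning to those nodes would produce (hypothetical helper)."""
    first, last = (int(x) for x in nodeset.split('-'))
    cpus = []
    for node in range(first, last + 1):
        cpus.extend(node_cpus[node])
    return sorted(cpus)

print(','.join(str(c) for c in cpus_for_nodeset('0-1')))
# -> 0,2,4,6,8,10,12,14,16,18,20,22
```

This is the exact affinity string virsh wrongly reports for --config in step 4 below.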


2. Prepare an inactive guest:

# virsh dumpxml r7 --inactive
<domain type='kvm'>
  <name>r7</name>
  <uuid>67c7a123-5415-4136-af62-a2ee098ba6cd</uuid>
  <maxMemory slots='16' unit='KiB'>15243264</maxMemory>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='auto' current='6'>10</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>


3. Start the guest:

# virsh start r7
Domain r7 started

4. Check the numatune/vcpupin/emulatorpin configuration via virsh:

# virsh numatune r7 --config
numa_mode      : strict
numa_nodeset   : 0-1

# virsh vcpupin r7 --config
VCPU: CPU Affinity
----------------------------------
   0: 0,2,4,6,8,10,12,14,16,18,20,22
   1: 0,2,4,6,8,10,12,14,16,18,20,22
   2: 0,2,4,6,8,10,12,14,16,18,20,22
   3: 0,2,4,6,8,10,12,14,16,18,20,22
   4: 0,2,4,6,8,10,12,14,16,18,20,22
   5: 0,2,4,6,8,10,12,14,16,18,20,22
   6: 0,2,4,6,8,10,12,14,16,18,20,22
   7: 0,2,4,6,8,10,12,14,16,18,20,22
   8: 0,2,4,6,8,10,12,14,16,18,20,22
   9: 0,2,4,6,8,10,12,14,16,18,20,22

# virsh emulatorpin r7 --config
emulator: CPU Affinity
----------------------------------
       *: 0,2,4,6,8,10,12,14,16,18,20,22

5. Verify that the inactive XML contains no explicit pinning, so the values reported above are wrong:

# virsh dumpxml r7 --inactive
<domain type='kvm'>
  <name>r7</name>
  <uuid>67c7a123-5415-4136-af62-a2ee098ba6cd</uuid>
  <maxMemory slots='16' unit='KiB'>15243264</maxMemory>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='auto' current='6'>10</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.3.0'>hvm</type>
    <boot dev='hd'/>
  </os>

6. Destroy the guest and check again:

# virsh destroy r7
Domain r7 destroyed

# virsh emulatorpin r7 --config
emulator: CPU Affinity
----------------------------------
       *: 0,2,4,6,8,10,12,14,16,18,20,22

# virsh vcpupin r7 --config
VCPU: CPU Affinity
----------------------------------
   0: 0,2,4,6,8,10,12,14,16,18,20,22
   1: 0,2,4,6,8,10,12,14,16,18,20,22
   2: 0,2,4,6,8,10,12,14,16,18,20,22
   3: 0,2,4,6,8,10,12,14,16,18,20,22
   4: 0,2,4,6,8,10,12,14,16,18,20,22
   5: 0,2,4,6,8,10,12,14,16,18,20,22
   6: 0,2,4,6,8,10,12,14,16,18,20,22
   7: 0,2,4,6,8,10,12,14,16,18,20,22
   8: 0,2,4,6,8,10,12,14,16,18,20,22
   9: 0,2,4,6,8,10,12,14,16,18,20,22

# virsh numatune r7 --config
numa_mode      : strict
numa_nodeset   : 0-1


Actual results:

libvirt shows wrong vcpupin/emulatorpin configuration on a guest with an automatic nodeset

Expected results:

# virsh numatune r7 --config
numa_mode      : strict
numa_nodeset   : 

# virsh vcpupin r7 --config
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 0-23
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 0-23

# virsh emulatorpin r7 --config
emulator: CPU Affinity
----------------------------------
       *: 0-23
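For comparison, virsh collapses a full-host CPU mask into range notation, which is why the correct --config output reads "0-23" rather than a comma list. A rough sketch of that formatting (a contiguous-run collapser, not virsh's actual implementation):

```python
def format_cpu_ranges(cpus):
    """Collapse a CPU list into virsh-style range notation,
    e.g. [0, 1, ..., 23] -> '0-23', [0, 2, 4] -> '0,2,4'."""
    cpus = sorted(cpus)
    parts = []
    i = 0
    while i < len(cpus):
        j = i
        # Extend j over a run of consecutive CPU numbers.
        while j + 1 < len(cpus) and cpus[j + 1] == cpus[j] + 1:
            j += 1
        parts.append(str(cpus[i]) if i == j else f"{cpus[i]}-{cpus[j]}")
        i = j + 1
    return ','.join(parts)

print(format_cpu_ranges(range(24)))  # -> 0-23
```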


Additional info:

Comment 1 Peter Krempa 2016-09-21 14:41:19 UTC
Fixed upstream:

commit 006a532cc082baa28191d66d378e7e946b787e85
Author: Peter Krempa <pkrempa>
Date:   Wed Sep 14 07:37:16 2016 +0200

    qemu: driver: Don't return automatic NUMA emulator pinning data for persistentDef
    
    Calling virDomainGetEmulatorPinInfo on a live VM with automatic NUMA
    pinning and VIR_DOMAIN_AFFECT_CONFIG would return the automatic pinning
    data in some cases which is bogus. Use the autoCpuset property only when
    called on a live definition.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1365779

commit 552892c59d887b7e24c18b20b208141913fa99d4
Author: Peter Krempa <pkrempa>
Date:   Wed Sep 14 07:37:16 2016 +0200

    qemu: driver: Don't return automatic NUMA vCPU pinning data for persistentDef
    
    Calling virDomainGetVcpuPinInfo on a live VM with automatic NUMA pinning
    and VIR_DOMAIN_AFFECT_CONFIG would return the automatic pinning data
    in some cases which is bogus. Use the autoCpuset property only when
    called on a live definition.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1365779
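The two commits boil down to one selection rule; a simplified Python model of it (illustrative only, the real fix lives in libvirt's C qemu driver):

```python
def pin_info(flags, explicit_pinning, auto_cpuset, all_host_cpus):
    """Model of the fixed lookup: the automatically computed cpuset
    (from numad) may only be reported for the live definition.
    For --config, fall back to the host's full CPU set instead."""
    if explicit_pinning:               # user-configured <vcpupin>/<emulatorpin>
        return explicit_pinning
    if flags == "live" and auto_cpuset:
        return auto_cpuset             # live def: auto pinning is real state
    return all_host_cpus               # persistent def: nothing is configured

# Before the fix, the "config" query also returned auto_cpuset:
live = pin_info("live", None, {0, 2, 4}, set(range(24)))
config = pin_info("config", None, {0, 2, 4}, set(range(24)))
```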

Comment 3 chhu 2017-04-21 08:58:30 UTC
Hi, Peter

When memory mode='strict' placement='auto' is set in the numatune element,
the output of virsh vcpupin/emulatorpin --config is now correct.
However, the output of virsh numatune <> --config is still wrong; the numa_nodeset should be empty.

More details below:

Tried to verify with packages:
libvirt-3.2.0-2.el7.x86_64
qemu-kvm-rhev-2.8.0-6.el7.x86_64

Test steps:
1. Prepare a NUMA machine with 4 NUMA nodes.
2. Prepare an inactive guest.
# virsh dumpxml vm1 --inactive|grep vcpu -A 3
  <vcpu placement='auto' current='6'>10</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>

3. Start the guest. 
# virsh start vm1
Domain vm1 started

4. Check the numatune/vcpupin/emulatorpin configuration via virsh:
# virsh numatune vm1
numa_mode      : strict
numa_nodeset   : 2-3

# virsh numatune vm1 --config
numa_mode      : strict
numa_nodeset   : 2-3

# virsh emulatorpin vm1 
emulator: CPU Affinity
----------------------------------
       *: 1,3,5,7,9,11,13,15,17,19,21,23

# virsh emulatorpin vm1 --config
emulator: CPU Affinity
----------------------------------
       *: 0-23

# virsh vcpupin vm1
VCPU: CPU Affinity
----------------------------------
   0: 1,3,5,7,9,11,13,15,17,19,21,23
   1: 1,3,5,7,9,11,13,15,17,19,21,23
   2: 1,3,5,7,9,11,13,15,17,19,21,23
   3: 1,3,5,7,9,11,13,15,17,19,21,23
   4: 1,3,5,7,9,11,13,15,17,19,21,23
   5: 1,3,5,7,9,11,13,15,17,19,21,23
   6: 1,3,5,7,9,11,13,15,17,19,21,23
   7: 1,3,5,7,9,11,13,15,17,19,21,23
   8: 1,3,5,7,9,11,13,15,17,19,21,23
   9: 1,3,5,7,9,11,13,15,17,19,21,23

# virsh vcpupin vm1 --config
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 0-23
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 0-23

Actual results:
libvirt shows wrong numatune configuration on a guest with an automatic nodeset.

Expected results:
# virsh numatune vm1 --config
numa_mode      : strict
numa_nodeset   :
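The residual numatune issue follows the same pattern as the vcpupin/emulatorpin fix; a hypothetical model of what the --config query should report (not libvirt code):

```python
def numatune_config_nodeset(explicit_nodeset, auto_nodeset):
    """For placement='auto' with no explicit nodeset, the persistent
    config has no nodeset to report: auto_nodeset belongs to the live
    definition only, so it is deliberately ignored here (model of the
    expected behaviour, not the buggy one observed above)."""
    return explicit_nodeset if explicit_nodeset else ""

# Observed bug: --config returns the live auto nodeset "2-3";
# expected: an empty nodeset, as modeled here.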

Comment 4 chhu 2017-04-21 09:27:04 UTC
Destroy the guest, then check the numatune/vcpupin/emulatorpin configuration via virsh:

# virsh destroy vm1
Domain vm1 destroyed

# virsh dumpxml vm1 --inactive| grep vcpu -A 5
  <vcpu placement='auto' current='6'>10</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.4.0'>hvm</type>

# virsh numatune vm1 --config
numa_mode      : strict
numa_nodeset   : 2-3

# virsh numatune vm1
numa_mode      : strict
numa_nodeset   : 2-3

# virsh vcpupin vm1
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 0-23
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 0-23

# virsh vcpupin vm1 --config
VCPU: CPU Affinity
----------------------------------
   0: 0-23
   1: 0-23
   2: 0-23
   3: 0-23
   4: 0-23
   5: 0-23
   6: 0-23
   7: 0-23
   8: 0-23
   9: 0-23

# virsh emulatorpin vm1 --config
emulator: CPU Affinity
----------------------------------
       *: 0-23

# virsh emulatorpin vm1
emulator: CPU Affinity
----------------------------------
       *: 0-23

Actual results:
libvirt shows wrong numatune configuration on a guest with an automatic nodeset.

Expected results:
# virsh numatune vm1 --config
numa_mode      : strict
numa_nodeset   :

Comment 5 Peter Krempa 2017-04-25 13:46:04 UTC
I cloned this as https://bugzilla.redhat.com/show_bug.cgi?id=1445325 to track the issue.

Comment 6 chhu 2017-05-10 06:30:50 UTC
According to comments 3, 4, and 5, the remaining issue will be tracked in bug 1445325, so setting the bug status to VERIFIED.

Comment 7 errata-xmlrpc 2017-08-01 17:11:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846

Comment 8 errata-xmlrpc 2017-08-01 23:53:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846

