Bug 1138545 - guest NUMA cannot start when automatic NUMA placement is used
Summary: guest NUMA cannot start when automatic NUMA placement is used
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Martin Kletzander
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-09-05 07:02 UTC by Jincheng Miao
Modified: 2015-03-05 07:44 UTC (History)
CC List: 4 users

Fixed In Version: libvirt-1.2.8-6.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-05 07:44:00 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:0323 0 normal SHIPPED_LIVE Low: libvirt security, bug fix, and enhancement update 2015-03-05 12:10:54 UTC

Description Jincheng Miao 2014-09-05 07:02:17 UTC
Description of problem:
A guest with a NUMA topology cannot start when automatic NUMA placement is used.
According to the documentation, only <memnode> conflicts with automatic NUMA
placement. If <numa> <cell> is also incompatible with automatic NUMA placement,
an error at parse time would be better, just as is done for <memnode>.

Version-Release number of selected component (if applicable):
libvirt-1.2.8-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. prepare NUMA host, and add config to a guest:
# virsh edit r7
...
   <memory unit='KiB'>1048576</memory>
   <vcpu placement='auto'>4</vcpu>
   <numatune>
     <memory mode='strict' placement='auto'/>
   </numatune>
   <cpu>
     <numa>
       <cell id='0' cpus='0-1' memory='524288'/>
       <cell id='1' cpus='2-3' memory='524288'/>
     </numa>
   </cpu>
...

2. start it
# virsh start r7
error: Failed to start domain r7
error: internal error: Advice from numad is needed in case of automatic 
numa placement

Expected result:
If <numa> <cell> is also incompatible with automatic NUMA placement, an error
at parse time would be better, just as is done for <memnode>.

Actual result:
The guest fails to start with the error above.
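The parse-phase check the reporter asks for can be illustrated with a small sketch. This is not libvirt code; `conflicts_with_auto_placement` is a hypothetical stand-alone checker that merely detects, from the domain XML, the combination that triggers the error above (placement='auto' together with a guest <cpu>/<numa> topology):

```python
# Hypothetical pre-flight check (not part of libvirt): flag the combination
# that the reporter suggests libvirt should reject at XML-parse time.
import xml.etree.ElementTree as ET

def conflicts_with_auto_placement(domain_xml: str) -> bool:
    """Return True if the domain requests automatic NUMA placement while
    also defining a guest NUMA topology (<cpu><numa><cell .../>)."""
    root = ET.fromstring(domain_xml)
    vcpu = root.find("vcpu")
    mem = root.find("numatune/memory")
    auto = (vcpu is not None and vcpu.get("placement") == "auto") or \
           (mem is not None and mem.get("placement") == "auto")
    has_guest_numa = root.find("cpu/numa/cell") is not None
    return auto and has_guest_numa

# The configuration from the reproduction steps above:
domain_xml = """<domain>
  <memory unit='KiB'>1048576</memory>
  <vcpu placement='auto'>4</vcpu>
  <numatune>
    <memory mode='strict' placement='auto'/>
  </numatune>
  <cpu>
    <numa>
      <cell id='0' cpus='0-1' memory='524288'/>
      <cell id='1' cpus='2-3' memory='524288'/>
    </numa>
  </cpu>
</domain>"""

print(conflicts_with_auto_placement(domain_xml))  # True
```

With a check like this, the conflict could be reported when the XML is defined rather than surfacing only at `virsh start` time.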

Comment 2 Jincheng Miao 2014-09-05 07:09:50 UTC
By the way, this happens on a NUMA host.

Comment 3 Martin Kletzander 2014-10-30 06:36:21 UTC
Well, you're right.  If someone wants guest NUMA nodes, they're much better off using static placement, as automatic placement will most likely cause a performance drop.  Anyway, it should still be available to those who are already using it.

Comment 4 Martin Kletzander 2014-11-04 09:53:01 UTC
Fixed upstream by v1.2.10-2-g11a4875:

commit 11a48758a7d6c946062c130b6186ae3eadd58e39
Author:     Martin Kletzander <mkletzan>
AuthorDate: Thu Oct 30 07:34:30 2014 +0100

    qemu: make advice from numad available when building commandline

Comment 7 Jincheng Miao 2014-11-19 10:03:58 UTC
In the latest libvirt-1.2.8-7.el7.x86_64, configuring 'auto' placement for guest vCPUs follows numad's suggestion on a NUMA host.

The verification steps are:

1. set auto placement for vcpu
# virsh edit r71
...
   <memory unit='KiB'>1048576</memory>
   <vcpu placement='auto'>4</vcpu>
   <numatune>
     <memory mode='strict' placement='auto'/>
   </numatune>
   <cpu>
     <numa>
       <cell id='0' cpus='0-1' memory='524288'/>
       <cell id='1' cpus='2-3' memory='524288'/>
     </numa>
   </cpu>
...

2. start guest
# virsh start r71
Domain r71 started

3. check from libvirtd.log

2014-11-19 09:57:33.478+0000: 15781: debug : virCommandRunAsync:2398 : About to run /bin/numad -w 4:1024
...
2014-11-19 09:57:35.485+0000: 15781: debug : qemuProcessStart:4297 : Nodeset returned from numad: 1

4. check guest vcpu pinning

# numactl --hard
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 0 size: 65514 MB
node 0 free: 62337 MB
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
node 1 size: 65536 MB
node 1 free: 62187 MB
node distances:
node   0   1 
  0:  10  11 
  1:  11  10

# cat /sys/fs/cgroup/cpuset/machine.slice/machine-qemu\\x2dr71.scope/vcpu0/cpuset.cpus 
8-15,24-31

Guest is pinned to NUMA node 1.
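The manual comparison in step 4 can be sketched as follows. This is an illustrative helper, not part of the test tooling: `parse_cpuset` is a hypothetical function that expands the kernel's cpuset list syntax (as read from `cpuset.cpus`) so it can be compared against node 1's CPU list from the `numactl` output above:

```python
# Sketch of the step-4 check: expand the kernel cpuset list syntax and
# confirm the pinned CPUs equal NUMA node 1's CPUs.
def parse_cpuset(spec: str) -> set:
    """Expand a cpuset list like '8-15,24-31' into a set of CPU ids."""
    cpus = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cpus.update(range(lo, hi + 1))
        else:
            cpus.add(int(part))
    return cpus

# Node 1 CPUs as reported by `numactl --hard` above:
node1_cpus = {8, 9, 10, 11, 12, 13, 14, 15, 24, 25, 26, 27, 28, 29, 30, 31}
# Value read from vcpu0/cpuset.cpus above:
pinned = parse_cpuset("8-15,24-31")

print(pinned == node1_cpus)  # True: the guest is pinned to node 1
```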

So change the status to VERIFIED.

Comment 9 errata-xmlrpc 2015-03-05 07:44:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html

