Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1400008

Summary: The third NUMA node is empty when booting with three NUMA nodes
Product: Red Hat Enterprise Linux 7
Reporter: Min Deng <mdeng>
Component: qemu-kvm-rhev
Assignee: Eduardo Habkost <ehabkost>
Status: CLOSED NOTABUG
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Docs Contact:
Priority: medium
Version: 7.3
CC: ailan, dgibson, ehabkost, knoel, lvivier, mdeng, qzhang, virt-maint, zhengtli
Target Milestone: rc
Keywords: Reopened
Target Release: ---
Hardware: x86_64
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-12-27 19:40:45 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Min Deng 2016-11-30 09:36:54 UTC
Description of problem:
The third NUMA node has no CPUs assigned when the guest is booted with three NUMA nodes.

Version-Release number of selected component (if applicable):
qemu-kvm-tools-rhev-2.6.0-27.el7.ppc64le
kernel-3.10.0-514.el7.ppc64le

How reproducible:
2/2

Steps to Reproduce:
1. Boot up the guest with the following command line, specifying three NUMA nodes in total:
  /usr/libexec/qemu-kvm -name avocado-vt-vm1 -sandbox off -machine pseries \
    -nodefaults -vga std \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=04 \
    -drive id=drive_image1,if=none,snapshot=off,aio=native,cache=none,format=qcow2,file=/root/test_home/mdeng/staf-kvm-devel/workspace/usr/share/avocado/data/avocado-vt/images/RHEL-Server-7.3-ppc64le-virtio-scsi.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1 \
    -m 1536 -smp 6,maxcpus=6,cores=3,threads=1,sockets=2 \
    -vnc :2 -rtc base=utc,clock=host \
    -boot order=cdn,once=c,menu=off,strict=off \
    -numa node -numa node -numa node \
    -enable-kvm -monitor stdio


Actual results:
(qemu) info numa
3 nodes
node 0 cpus: 0 1 2
node 0 size: 512 MB
node 1 cpus: 3 4 5
node 1 size: 512 MB
node 2 cpus:         --- null
node 2 size: 512 MB
It seems the CPUs were assigned to only two of the nodes. Is this working as designed?

Expected results:
The results from x86, by comparison, seem more reasonable:
(qemu) info numa
3 nodes
node 0 cpus: 0 3
node 0 size: 512 MB
node 1 cpus: 1 4
node 1 size: 512 MB
node 2 cpus: 2 5
node 2 size: 512 MB

Additional info:
This affects automation scripts, so please take a look. Thanks a lot.

Comment 2 Laurent Vivier 2016-11-30 13:42:29 UTC
This seems normal as you ask for 3 nodes with 3 cores per socket for a total of 6 cores, so you have 3 cores on node 0, 3 cores on node 1, and as there is no more core available, 0 core on the last one.

If you want the same result as you have with x86, use:

      -numa node -numa node -numa node \
      -smp 6,maxcpus=6,cores=2,threads=1,sockets=3

Comment 3 David Gibson 2016-12-01 00:44:39 UTC
Right.  In short, qemu tries to place whole sockets into NUMA nodes, because splitting a socket across nodes would be implausible on real hardware.
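A minimal Python sketch of the whole-socket placement described above (an illustrative model only, not QEMU's actual code): sockets are kept intact and handed out to nodes one at a time, so with 2 sockets and 3 nodes the last node ends up with no CPUs.

```python
def assign_by_socket(num_cpus, cores_per_socket, num_nodes):
    """Hand out whole sockets of CPUs to NUMA nodes in order.

    A socket's CPUs are never split across nodes; once the sockets
    run out, any remaining node is left empty.
    """
    nodes = [[] for _ in range(num_nodes)]
    # Build sockets as consecutive ranges of CPU ids.
    sockets = [list(range(s, s + cores_per_socket))
               for s in range(0, num_cpus, cores_per_socket)]
    for i, socket in enumerate(sockets):
        nodes[i % num_nodes].extend(socket)
    return nodes

# 6 CPUs, cores=3, sockets=2, 3 nodes: node 2 is empty (the reported output)
print(assign_by_socket(6, 3, 3))  # [[0, 1, 2], [3, 4, 5], []]
# 6 CPUs, cores=2, sockets=3, 3 nodes: each node gets one whole socket
print(assign_by_socket(6, 2, 3))  # [[0, 1], [2, 3], [4, 5]]
```

This reproduces both ppc outputs shown later in comment 6: `cores=3,sockets=2` leaves node 2 empty, while `cores=2,sockets=3` gives every node one socket.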

Comment 4 Min Deng 2016-12-01 03:35:59 UTC
(In reply to Laurent Vivier from comment #2)
> This seems normal as you ask for 3 nodes with 3 cores per socket for a total
> of 6 cores, so you have 3 cores on node 0, 3 cores on node 1, and as there
> is no more core available, 0 core on the last one.
> 
> If you want the same result as you have with x86, use:
> 
>       -numa node -numa node -numa node \
>       -smp 6,maxcpus=6,cores=2,threads=1,sockets=3

Hi, it seems the result is not the same as what you described.

On x86, the output is still:

(qemu) info numa
3 nodes
node 0 cpus: 0 3
node 0 size: 512 MB
node 1 cpus: 1 4
node 1 size: 512 MB
node 2 cpus: 2 5
node 2 size: 512 MB

Comment 5 Min Deng 2016-12-01 05:20:11 UTC
(In reply to dengmin from comment #4)
> (In reply to Laurent Vivier from comment #2)
> > This seems normal as you ask for 3 nodes with 3 cores per socket for a total
> > of 6 cores, so you have 3 cores on node 0, 3 cores on node 1, and as there
> > is no more core available, 0 core on the last one.
> > 
> > If you want the same result as you have with x86, use:
> > 
> >       -numa node -numa node -numa node \
> >       -smp 6,maxcpus=6,cores=2,threads=1,sockets=3
> 
> Hi,it seems that the results is not same to what you said
> 
> On x86 ,the output still be 
> 
> (qemu) info numa
> 3 nodes
> node 0 cpus: 0 3
> node 0 size: 512 MB
> node 1 cpus: 1 4
> node 1 size: 512 MB
> node 2 cpus: 2 5
> node 2 size: 512 MB

  Please ignore my comments, thanks.

Comment 6 Min Deng 2016-12-01 05:29:54 UTC
To draw a final conclusion for this bug:
On x86,
if the following commands are used,
a. ...-numa node -numa node -numa node -smp 6,maxcpus=6,*cores=2*,threads=1,*sockets=3* ...
b. ...-numa node -numa node -numa node -smp 6,maxcpus=6,*cores=3*,threads=1,*sockets=2* ...

the outputs are identical:
(qemu) info numa
3 nodes
node 0 cpus: 0 3
node 0 size: 512 MB
node 1 cpus: 1 4
node 1 size: 512 MB
node 2 cpus: 2 5
node 2 size: 512 MB

On ppc,
a. ...-numa node -numa node -numa node -smp 6,maxcpus=6,*cores=2*,threads=1,*sockets=3* ...
b. ...-numa node -numa node -numa node -smp 6,maxcpus=6,*cores=3*,threads=1,*sockets=2* ...

the outputs differ from each other.
The output of command a:
(qemu) info numa
3 nodes
node 0 cpus: 0 1
node 0 size: 512 MB
node 1 cpus: 2 3
node 1 size: 512 MB
node 2 cpus: 4 5
node 2 size: 512 MB
The output of command b:
(qemu) info numa
3 nodes
node 0 cpus: 0 1 2
node 0 size: 512 MB
node 1 cpus: 3 4 5
node 1 size: 512 MB
node 2 cpus:         --- null
node 2 size: 512 MB

Comment 7 Laurent Vivier 2016-12-01 09:53:47 UTC
Looks like a bug on x86.

Comment 8 David Gibson 2016-12-01 23:57:28 UTC
Debatable whether this is really a bug on x86.  Basically, x86 allows cores to be split between NUMA nodes, whereas ppc does not.  That's... weird, but not necessarily a problem.

Comment 9 Zhengtong 2016-12-20 02:50:00 UTC
Hi dengmin, unfortunately I could not reproduce the same issue you saw. Could you provide the QEMU version you used on the x86 platform with RHEL 7.3? I got the result below:

[root@hp-z800-06 liuzt]# sh test.sh 
QEMU 2.6.0 monitor - type 'help' for more information
(qemu) VNC server running on '::1;5900'

(qemu) 
(qemu) info numa
3 nodes
node 0 cpus: 0 1 2
node 0 size: 40 MB
node 1 cpus: 3 4 5
node 1 size: 40 MB
node 2 cpus:
node 2 size: 48 MB

[root@hp-z800-06 liuzt]# cat test.sh 
/usr/libexec/qemu-kvm \
-smp 6,maxcpus=6,cores=3,threads=1,sockets=2 \
-numa node \
-numa node \
-numa node \
-monitor stdio \
-enable-kvm

[root@hp-z800-06 liuzt]# /usr/libexec/qemu-kvm --version
QEMU emulator version 2.6.0 (qemu-kvm-rhev-2.6.0-28.el7_3.2), Copyright (c) 2003-2008 Fabrice Bellard

Comment 10 Min Deng 2016-12-20 09:32:28 UTC
The issue only occurs with the following build. If there are any issues, please let me know. Thanks a lot!
/usr/libexec/qemu-kvm --version
QEMU emulator version 1.5.3 (qemu-kvm-1.5.3-126.el7), Copyright (c) 2003-2008 Fabrice Bellard

Comment 11 Eduardo Habkost 2016-12-27 19:40:45 UTC
(In reply to Zhengtong from comment #9)
> [root@hp-z800-06 liuzt]# /usr/libexec/qemu-kvm --version
> QEMU emulator version 2.6.0 (qemu-kvm-rhev-2.6.0-28.el7_3.2), Copyright (c)
> 2003-2008 Fabrice Bellard

(In reply to dengmin from comment #10)
> /usr/libexec/qemu-kvm --version
> QEMU emulator version 1.5.3 (qemu-kvm-1.5.3-126.el7), Copyright (c)
> 2003-2008 Fabrice Bellard

You are testing different QEMU versions.

The behavior on qemu-kvm-1.5.3 is the old one (assigning threads from the same socket to different nodes). qemu-kvm-rhev-2.6.0 has the new behavior, which is similar to Power.

qemu-kvm-rhev has no problem at all, and the default on qemu-kvm-1.5.3 won't be changed (as it is not a bug).
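The old qemu-kvm-1.5.3 default that Eduardo describes can be sketched as a simple round-robin of individual CPUs over nodes (an illustrative model only, not the actual implementation), which is what splits sockets, and even cores, across NUMA nodes:

```python
def assign_round_robin(num_cpus, num_nodes):
    """Old-style default: CPU i goes to node i % num_nodes, ignoring
    socket boundaries entirely."""
    nodes = [[] for _ in range(num_nodes)]
    for cpu in range(num_cpus):
        nodes[cpu % num_nodes].append(cpu)
    return nodes

# 6 CPUs over 3 nodes: matches the qemu-kvm-1.5.3 x86 output in the report
print(assign_round_robin(6, 3))  # [[0, 3], [1, 4], [2, 5]]
```

With 6 CPUs and 3 nodes this yields node 0: 0 3, node 1: 1 4, node 2: 2 5, exactly the x86 output the reporter saw on qemu-kvm-1.5.3, regardless of the cores/sockets split.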