| Summary: | The third NUMA node is empty when booting with three NUMA nodes | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Min Deng <mdeng> |
| Component: | qemu-kvm-rhev | Assignee: | Eduardo Habkost <ehabkost> |
| Status: | CLOSED NOTABUG | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.3 | CC: | ailan, dgibson, ehabkost, knoel, lvivier, mdeng, qzhang, virt-maint, zhengtli |
| Target Milestone: | rc | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-12-27 19:40:45 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description (Min Deng, 2016-11-30 09:36:54 UTC)
Comment 2 (Laurent Vivier)

This seems normal: you asked for 3 nodes with 3 cores per socket, for a total of 6 cores, so you get 3 cores on node 0, 3 cores on node 1, and, as there are no more cores available, 0 cores on the last one.

If you want the same result as you get on x86, use:

```
-numa node -numa node -numa node \
-smp 6,maxcpus=6,cores=2,threads=1,sockets=3
```
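Note (not part of the original thread): if a fixed CPU-to-node layout is wanted regardless of QEMU's auto-assignment policy, the mapping can also be pinned explicitly with the cpus= property of -numa. A minimal sketch, with illustrative ranges that keep each socket of the cores=2,sockets=3 topology on its own node:

```
-numa node,cpus=0-1 -numa node,cpus=2-3 -numa node,cpus=4-5 \
-smp 6,maxcpus=6,cores=2,threads=1,sockets=3
```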
Comment 3

Right. In short, QEMU tries to place whole sockets into NUMA nodes, because splitting a socket across nodes would be implausible on real hardware.

Comment 4 (Min Deng)

(In reply to Laurent Vivier from comment #2)

Hi, it seems the result is not the same as what you said. On x86, the output is still:

```
(qemu) info numa
3 nodes
node 0 cpus: 0 3
node 0 size: 512 MB
node 1 cpus: 1 4
node 1 size: 512 MB
node 2 cpus: 2 5
node 2 size: 512 MB
```

Comment 5 (Min Deng)

(In reply to dengmin from comment #4)

Please ignore my comments, thanks.

Comment 6 (Min Deng)

To draw a final conclusion for this bug. On x86, the following two commands (a uses cores=2, sockets=3; b uses cores=3, sockets=2),

```
a. ... -numa node -numa node -numa node -smp 6,maxcpus=6,cores=2,threads=1,sockets=3 ...
b. ... -numa node -numa node -numa node -smp 6,maxcpus=6,cores=3,threads=1,sockets=2 ...
```

produce the same output:

```
(qemu) info numa
3 nodes
node 0 cpus: 0 3
node 0 size: 512 MB
node 1 cpus: 1 4
node 1 size: 512 MB
node 2 cpus: 2 5
node 2 size: 512 MB
```

On ppc, the same two commands produce different outputs. Command a gives:

```
(qemu) info numa
3 nodes
node 0 cpus: 0 1
node 0 size: 512 MB
node 1 cpus: 2 3
node 1 size: 512 MB
node 2 cpus: 4 5
node 2 size: 512 MB
```

Command b gives:

```
(qemu) info numa
3 nodes
node 0 cpus: 0 1 2
node 0 size: 512 MB
node 1 cpus: 3 4 5
node 1 size: 512 MB
node 2 cpus:
node 2 size: 512 MB
```

Comment 7

This looks like a bug on x86.

Comment 8

Debatable whether this is really a bug on x86. Basically, x86 allows cores to be split between NUMA nodes, whereas ppc does not. That's... weird, but not necessarily a problem.

Comment 9 (Zhengtong)

Hi dengmin, unfortunately I didn't reproduce the same issue as you. Could you provide the QEMU version you used on the x86 platform / RHEL 7.3? I got the result below:

```
[root@hp-z800-06 liuzt]# sh test.sh
QEMU 2.6.0 monitor - type 'help' for more information
(qemu) VNC server running on '::1;5900'
(qemu) info numa
3 nodes
node 0 cpus: 0 1 2
node 0 size: 40 MB
node 1 cpus: 3 4 5
node 1 size: 40 MB
node 2 cpus:
node 2 size: 48 MB
[root@hp-z800-06 liuzt]# cat test.sh
/usr/libexec/qemu-kvm \
-smp 6,maxcpus=6,cores=3,threads=1,sockets=2 \
-numa node \
-numa node \
-numa node \
-monitor stdio \
-enable-kvm
[root@hp-z800-06 liuzt]# /usr/libexec/qemu-kvm --version
QEMU emulator version 2.6.0 (qemu-kvm-rhev-2.6.0-28.el7_3.2), Copyright (c) 2003-2008 Fabrice Bellard
```

The issue only occurs on the build. Any issues, please let me know, thanks a lot!
Comment 10 (Min Deng)

```
/usr/libexec/qemu-kvm --version
QEMU emulator version 1.5.3 (qemu-kvm-1.5.3-126.el7), Copyright (c) 2003-2008 Fabrice Bellard
```

Comment 11

(In reply to Zhengtong from comment #9)
> QEMU emulator version 2.6.0 (qemu-kvm-rhev-2.6.0-28.el7_3.2), Copyright (c) 2003-2008 Fabrice Bellard

(In reply to dengmin from comment #10)
> QEMU emulator version 1.5.3 (qemu-kvm-1.5.3-126.el7), Copyright (c) 2003-2008 Fabrice Bellard

You are testing different QEMU versions. The behavior on qemu-kvm-1.5.3 is the old one (assigning threads from the same socket to different nodes). qemu-kvm-rhev-2.6.0 has the new behavior, which is similar to power. qemu-kvm-rhev has no problem at all, and the default on qemu-kvm-1.5.3 won't be changed (as it is not a bug).
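To make the version difference concrete, here is a minimal shell sketch (not QEMU's actual code) of the two assignment policies for the cores=3,sockets=2 case with 3 nodes: the old policy round-robins individual CPUs across nodes, while the new one hands out whole sockets, leaving the third node empty.

```sh
#!/bin/sh
# Sketch of the two CPU-to-node assignment policies described in comment 11.
# Assumes 6 CPUs, 3 cores per socket, 1 thread per core, 3 NUMA nodes.
cpus=6
cores_per_socket=3
nodes=3

echo "old policy (qemu-kvm-1.5.3 on x86): round-robin CPUs across nodes"
for cpu in $(seq 0 $((cpus - 1))); do
    # cpu modulo node count: node 0 gets 0 3, node 1 gets 1 4, node 2 gets 2 5
    echo "  cpu $cpu -> node $((cpu % nodes))"
done

echo "new policy (qemu-kvm-rhev-2.6.0, and ppc): whole sockets per node"
for cpu in $(seq 0 $((cpus - 1))); do
    # one socket per node: node 0 gets 0 1 2, node 1 gets 3 4 5, node 2 is empty
    echo "  cpu $cpu -> node $((cpu / cores_per_socket))"
done
```

Run as written, it reproduces the two `info numa` layouts shown earlier in the thread.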