Bug 816804 - NUMA information (memory and CPU) set should be the same as configured on the command line
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.3
Hardware: x86_64 Linux
Priority: medium  Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Eduardo Habkost
QA Contact: Virtualization Bugs
Keywords: TestOnly
Duplicates: 816798 825668
Depends On: 733720 851245
Blocks: 799545 832165 832167 833130 844706 974374
Reported: 2012-04-26 23:32 EDT by Sibiao Luo
Modified: 2013-06-14 02:07 EDT (History)
19 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Clones: 832165
Environment:
Last Closed: 2013-02-21 02:34:25 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments: None
Description Sibiao Luo 2012-04-26 23:32:34 EDT
Description of problem:
Boot up a guest with multiple NUMA nodes, then check the memory and CPU info in both the monitor and the guest: the NUMA information (memory and CPU) should match what was configured on the command line.

Version-Release number of selected component (if applicable):
host info:
# uname -r && rpm -q qemu-kvm-rhev
2.6.32-262.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.285.el6.x86_64
guest info:
guest name: RHEL6.3-20120416.0_x86_64
# uname -r
2.6.32-262.el6.x86_64
# rpm -qa | grep numactl
numactl-2.0.7-3.el6.x86_64

How reproducible:
100% (simulate multiple "-numa node" options)

Steps to Reproduce:
1. Boot up the guest with multiple NUMA nodes; the per-node cpus and mem values should sum to the -smp and -m totals (subject to what the host supports).
eg: # /usr/libexec/qemu-kvm -M rhel6.3.0 -cpu qemu64,+sse2 -enable-kvm -m 5120 -smp 50,sockets=1,cores=50,threads=1 -usb -device usb-tablet,id=input0 -name sluo_test -uuid `uuidgen` -drive file=/home/RHEL6.3-20120416.0-Server-x86_64.qcow2,format=qcow2,if=none,id=drive-disk,cache=none,werror=stop,rerror=stop -device virtio-blk-pci,drive=drive-disk,id=image,bootindex=1 -netdev tap,id=hostnet0,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,bootindex=2 -device virtio-balloon-pci,id=ballooning -monitor stdio -boot menu=on -spice port=5931,disable-ticketing -numa node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1 -numa node,mem=1024,cpus=20-29,nodeid=2 -numa node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4
2.check memory and cpu info in monitor.
3.check memory and cpu info in guest. 
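The consistency requirement in step 1 can be checked mechanically: the per-node mem= values must sum to the -m total, and the cpus= ranges must cover every vCPU given to -smp. A minimal sketch (check_numa_cmdline is a hypothetical helper for illustration, not a qemu-kvm tool):

```python
import re

def check_numa_cmdline(cmdline, mem_mb, smp):
    """Verify that -numa node options are consistent with -m and -smp."""
    nodes = re.findall(r'-numa node,mem=(\d+),cpus=(\d+)(?:-(\d+))?', cmdline)
    total_mem = sum(int(mem) for mem, _, _ in nodes)
    cpus = set()
    for _, lo, hi in nodes:
        cpus.update(range(int(lo), int(hi or lo) + 1))
    return total_mem == mem_mb and cpus == set(range(smp))

cmd = ("-numa node,mem=1024,cpus=0-9,nodeid=0 "
       "-numa node,mem=1024,cpus=10-19,nodeid=1 "
       "-numa node,mem=1024,cpus=20-29,nodeid=2 "
       "-numa node,mem=1024,cpus=30-48,nodeid=3 "
       "-numa node,mem=1024,cpus=49,nodeid=4")
print(check_numa_cmdline(cmd, 5120, 50))  # True: 5*1024 == 5120, cpus cover 0-49
print(check_numa_cmdline(cmd, 4096, 50))  # False: node mem sums to 5120, not 4096
```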
  
Actual results:
After step 2:
(qemu) info numa
5 nodes
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 32 33 34 35 36 37 38 39 40 41
node 0 size: 1024 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19 42 43 44 45 46 47 48 49
node 1 size: 1024 MB
node 2 cpus: 20 21 22 23 24 25 26 27 28 29
node 2 size: 1024 MB
node 3 cpus: 30
node 3 size: 1024 MB
node 4 cpus: 31
node 4 size: 1024 MB
After step 3:
# numactl --hardware
available: 5 nodes (0-4)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 32 33 34 35 36 37 38 39 40 41
node 0 size: 1023 MB
node 0 free: 379 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19 42 43 44 45 46 47 48 49
node 1 size: 1024 MB
node 1 free: 901 MB
node 2 cpus: 20 21 22 23 24 25 26 27 28 29
node 2 size: 1024 MB
node 2 free: 945 MB
node 3 cpus: 30 31
node 3 size: 1023 MB
node 3 free: 974 MB
node 4 cpus:
node 4 size: 1024 MB
node 4 free: 990 MB
node distances:
node   0   1   2   3   4 
  0:  10  20  20  20  20 
  1:  20  10  20  20  20 
  2:  20  20  10  20  20 
  3:  20  20  20  10  20 
  4:  20  20  20  20  10

Expected results:
1. The NUMA information (memory and CPU) should match what was configured on the command line.
2. The memory and CPU info should be the same in the monitor and in the guest.

host info:
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                80
On-line CPU(s) list:   0-79
Thread(s) per core:    2
Core(s) per socket:    10
CPU socket(s):         4
NUMA node(s):          4
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 47
Stepping:              2
CPU MHz:               1064.000
BogoMIPS:              3989.95
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              24576K
NUMA node0 CPU(s):     0-9,40-49
NUMA node1 CPU(s):     10-19,50-59
NUMA node2 CPU(s):     20-29,60-69
NUMA node3 CPU(s):     30-39,70-79

Additional info:
Comment 2 Sibiao Luo 2012-04-27 05:56:56 EDT
I have also tested with cores from different sockets assigned to the same node, and still hit this issue.
eg: # /usr/libexec/qemu-kvm -M rhel6.3.0 -cpu qemu64,+sse2 -enable-kvm -m 4096 -smp 20,sockets=20,cores=1,threads=1 -usb -device usb-tablet,id=input0 -name sluo_test -uuid `uuidgen` -drive file=/home/RHEL6.3-20120416.0-Server-x86_64.qcow2,format=qcow2,if=none,id=drive-disk,cache=none,werror=stop,rerror=stop -device virtio-blk-pci,drive=drive-disk,id=image,bootindex=1 -netdev tap,id=hostnet0,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,bootindex=2 -device virtio-balloon-pci,id=ballooning -monitor stdio -boot menu=on -spice port=5931,disable-ticketing -numa node,mem=1024,cpus=0-4,nodeid=0 -numa node,mem=1024,cpus=5-9,nodeid=1 -numa node,mem=512,cpus=10-18,nodeid=2 -numa node,mem=512,cpus=19,nodeid=3

check memory and cpu info in monitor: 
(qemu) info numa
4 nodes
node 0 cpus: 0 1 2 3 4
node 0 size: 1024 MB
node 1 cpus: 5 6 7 8 9
node 1 size: 1024 MB
node 2 cpus: 10 11 12 13 14 15 16 17 18
node 2 size: 512 MB
node 3 cpus: 19
node 3 size: 512 MB

check memory and cpu info in guest:
# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
node 0 size: 4095 MB
node 0 free: 3287 MB
node distances:
node   0 
  0:  10
# numactl --show
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 
cpubind: 0 
nodebind: 0 
membind: 0
Comment 4 Eduardo Habkost 2012-07-20 09:35:46 EDT
*** Bug 816798 has been marked as a duplicate of this bug. ***
Comment 5 Ademar Reis 2012-08-02 12:00:01 EDT
*** Bug 825668 has been marked as a duplicate of this bug. ***
Comment 6 Eduardo Habkost 2012-10-08 17:41:36 EDT
Marking as TestOnly, to be tested once bug 733720 is fixed.
Comment 7 Eduardo Habkost 2012-11-04 15:14:40 EST
Bug 733720 is ON_QA. Moving TestOnly BZs to ON_QA as well.
Comment 8 Shaolong Hu 2012-11-29 07:28:44 EST
Test with qemu-kvm-rhev-0.12.1.2-2.337.el6.x86_64:

1.boot guest with:
-smp 50,sockets=1,cores=50,threads=1 -numa node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1 -numa node,mem=1024,cpus=20-29,nodeid=2 -numa node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4

2. in qemu monitor
(qemu) info numa
5 nodes
node 0 cpus: 0 1 2 3 4 5 6 7 8 9
node 0 size: 1024 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19
node 1 size: 1024 MB
node 2 cpus: 20 21 22 23 24 25 26 27 28 29
node 2 size: 1024 MB
node 3 cpus: 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
node 3 size: 1024 MB
node 4 cpus: 49
node 4 size: 1024 MB

3. in the guest:
numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 49
node 0 size: 1023 MB
node 0 free: 648 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19
node 1 size: 1024 MB
node 1 free: 968 MB
node 2 cpus: 20 21 22 23 24 25 26 27 28 29
node 2 size: 1024 MB
node 2 free: 979 MB
node 3 cpus: 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
node 3 size: 1023 MB
node 3 free: 968 MB
node distances:
node   0   1   2   3 
  0:  10  20  20  20 
  1:  20  10  20  20 
  2:  20  20  10  20 
  3:  20  20  20  10
Comment 9 Eduardo Habkost 2012-12-03 08:18:59 EST
The behavior you see above is the one you are going to get if you (incorrectly) have -m 4096 on the qemu-kvm command-line (the last node will be ignored by the OS because you are giving 1G for each node and the first four nodes are enough to cover the whole RAM).

If you use -m 5120, on the other hand, you should get the right behavior.

So, please show the full qemu-kvm command-line you used.
Comment 10 Eduardo Habkost 2012-12-03 08:29:34 EST
I confirm that the bug is reproducible only when using the (incorrect) command line:

/usr/libexec/qemu-kvm -m 4096 -smp 50,sockets=1,cores=50,threads=1 -numa node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1 -numa node,mem=1024,cpus=20-29,nodeid=2 -numa node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4 -vnc :0 -hda /var/lib/libvirt/images/rhel64.img

(it has 5 nodes of 1024 MB each, but only 4096 MB of RAM, so there's not enough RAM for all nodes)
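The arithmetic behind this: with five 1024 MB nodes but only -m 4096, the RAM runs out after the first four nodes, so the last node is left empty and the guest OS drops it. A sketch of that in-order allocation (an illustration of the described behavior, not qemu-kvm's actual code):

```python
def allocate(node_sizes_mb, ram_mb):
    """Hand out RAM to nodes in declaration order until it runs out."""
    out = []
    left = ram_mb
    for size in node_sizes_mb:
        got = min(size, left)
        out.append(got)
        left -= got
    return out

print(allocate([1024] * 5, 4096))  # [1024, 1024, 1024, 1024, 0] -> last node empty
print(allocate([1024] * 5, 5120))  # [1024, 1024, 1024, 1024, 1024] -> all populated
```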

The problem can't be reproduced if the command-line is set so that the amount of RAM is enough for all nodes:

/usr/libexec/qemu-kvm -m 4096 -smp 50,sockets=1,cores=50,threads=1 -numa node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1 -numa node,mem=1024,cpus=20-29,nodeid=2 -numa node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4 -vnc :0 -hda /var/lib/libvirt/images/rhel64.img

Moving back to ON_QA.
Comment 11 Eduardo Habkost 2012-12-03 08:30:18 EST
(In reply to comment #10)
> The problem can't be reproduced if the command-line is set so that the
> amount of RAM is enough for all nodes:
> 
> /usr/libexec/qemu-kvm -m 4096 -smp 50,sockets=1,cores=50,threads=1 -numa
> node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1
> -numa node,mem=1024,cpus=20-29,nodeid=2 -numa
> node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4 -vnc
> :0 -hda /var/lib/libvirt/images/rhel64.img

Copy&paste mistake. Right command-line is:
/usr/libexec/qemu-kvm -m 5120 -smp 50,sockets=1,cores=50,threads=1 -numa node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1 -numa node,mem=1024,cpus=20-29,nodeid=2 -numa node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4 -vnc :0 -hda /var/lib/libvirt/images/rhel64.img
Comment 12 Shaolong Hu 2012-12-03 22:06:14 EST
Hi Eduardo, you are right. I retested and it works correctly; thanks for the clarification. I will set this one to VERIFIED.
Comment 14 errata-xmlrpc 2013-02-21 02:34:25 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0527.html
