Bug 816804 - NUMA information (memory and CPU) set should be the same as configured on the command line
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Eduardo Habkost
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 816798 825668 (view as bug list)
Depends On: 733720 851245
Blocks: 799545 832165 832167 833130 844706 974374
 
Reported: 2012-04-27 03:32 UTC by Sibiao Luo
Modified: 2013-06-14 06:07 UTC (History)
CC List: 19 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 832165 (view as bug list)
Environment:
Last Closed: 2013-02-21 07:34:25 UTC
Target Upstream Version:
Embargoed:




Links:
System ID: Red Hat Product Errata RHBA-2013:0527
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: qemu-kvm bug fix and enhancement update
Last Updated: 2013-02-20 21:51:08 UTC

Description Sibiao Luo 2012-04-27 03:32:34 UTC
Description of problem:
Boot a guest with multiple NUMA nodes and compare the memory and CPU info between the monitor and the guest. The NUMA information (memory and CPU) should match what is configured on the command line.

Version-Release number of selected component (if applicable):
host info:
# uname -r && rpm -q qemu-kvm-rhev
2.6.32-262.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.285.el6.x86_64
guest info:
guest name: RHEL6.3-20120416.0_x86_64
# uname -r
2.6.32-262.el6.x86_64
# rpm -qa | grep numactl
numactl-2.0.7-3.el6.x86_64

How reproducible:
100% (simulate multiple "-numa node" options)

Steps to Reproduce:
1. Boot a guest with multiple NUMA nodes; the per-node cpus and mem values should sum to the -smp and -m totals, within what the host supports.
eg: # /usr/libexec/qemu-kvm -M rhel6.3.0 -cpu qemu64,+sse2 -enable-kvm -m 5120 -smp 50,sockets=1,cores=50,threads=1 -usb -device usb-tablet,id=input0 -name sluo_test -uuid `uuidgen` -drive file=/home/RHEL6.3-20120416.0-Server-x86_64.qcow2,format=qcow2,if=none,id=drive-disk,cache=none,werror=stop,rerror=stop -device virtio-blk-pci,drive=drive-disk,id=image,bootindex=1 -netdev tap,id=hostnet0,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,bootindex=2 -device virtio-balloon-pci,id=ballooning -monitor stdio -boot menu=on -spice port=5931,disable-ticketing -numa node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1 -numa node,mem=1024,cpus=20-29,nodeid=2 -numa node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4
2. Check the memory and CPU info in the monitor.
3. Check the memory and CPU info in the guest.
  
Actual results:
After step 2:
(qemu) info numa
5 nodes
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 32 33 34 35 36 37 38 39 40 41
node 0 size: 1024 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19 42 43 44 45 46 47 48 49
node 1 size: 1024 MB
node 2 cpus: 20 21 22 23 24 25 26 27 28 29
node 2 size: 1024 MB
node 3 cpus: 30
node 3 size: 1024 MB
node 4 cpus: 31
node 4 size: 1024 MB
After step 3:
# numactl --hardware
available: 5 nodes (0-4)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 32 33 34 35 36 37 38 39 40 41
node 0 size: 1023 MB
node 0 free: 379 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19 42 43 44 45 46 47 48 49
node 1 size: 1024 MB
node 1 free: 901 MB
node 2 cpus: 20 21 22 23 24 25 26 27 28 29
node 2 size: 1024 MB
node 2 free: 945 MB
node 3 cpus: 30 31
node 3 size: 1023 MB
node 3 free: 974 MB
node 4 cpus:
node 4 size: 1024 MB
node 4 free: 990 MB
node distances:
node   0   1   2   3   4 
  0:  10  20  20  20  20 
  1:  20  10  20  20  20 
  2:  20  20  10  20  20 
  3:  20  20  20  10  20 
  4:  20  20  20  20  10

Expected results:
1. The NUMA information (memory and CPU) should match what is configured on the command line.
2. The memory and CPU info reported by the monitor and by the guest should be the same.
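
As a quick consistency check on the guest side, the per-node sizes reported by numactl can be summed and compared against the configured total. A minimal sketch (assuming bash and the numactl package inside the guest; EXPECTED_MB is filled in by hand from the -numa mem= values, 5 x 1024 = 5120 here):

# EXPECTED_MB=5120
# GUEST_MB=$(numactl --hardware | awk '/^node [0-9]+ size:/ {sum += $4} END {print sum}')
# echo "configured: ${EXPECTED_MB} MB, guest reports: ${GUEST_MB} MB"

Small per-node deviations (1023 vs 1024 MB) come from guest kernel/firmware reservations and are expected; the check is about the node count and the rough totals.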

host info:
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                80
On-line CPU(s) list:   0-79
Thread(s) per core:    2
Core(s) per socket:    10
CPU socket(s):         4
NUMA node(s):          4
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 47
Stepping:              2
CPU MHz:               1064.000
BogoMIPS:              3989.95
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              24576K
NUMA node0 CPU(s):     0-9,40-49
NUMA node1 CPU(s):     10-19,50-59
NUMA node2 CPU(s):     20-29,60-69
NUMA node3 CPU(s):     30-39,70-79

Additional info:

Comment 2 Sibiao Luo 2012-04-27 09:56:56 UTC
I have also tested with cores from different sockets assigned to the same nodes (sockets=20, cores=1) and still hit this issue.
eg: # /usr/libexec/qemu-kvm -M rhel6.3.0 -cpu qemu64,+sse2 -enable-kvm -m 4096 -smp 20,sockets=20,cores=1,threads=1 -usb -device usb-tablet,id=input0 -name sluo_test -uuid `uuidgen` -drive file=/home/RHEL6.3-20120416.0-Server-x86_64.qcow2,format=qcow2,if=none,id=drive-disk,cache=none,werror=stop,rerror=stop -device virtio-blk-pci,drive=drive-disk,id=image,bootindex=1 -netdev tap,id=hostnet0,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,bootindex=2 -device virtio-balloon-pci,id=ballooning -monitor stdio -boot menu=on -spice port=5931,disable-ticketing -numa node,mem=1024,cpus=0-4,nodeid=0 -numa node,mem=1024,cpus=5-9,nodeid=1 -numa node,mem=512,cpus=10-18,nodeid=2 -numa node,mem=512,cpus=19,nodeid=3

check memory and cpu info in monitor: 
(qemu) info numa
4 nodes
node 0 cpus: 0 1 2 3 4
node 0 size: 1024 MB
node 1 cpus: 5 6 7 8 9
node 1 size: 1024 MB
node 2 cpus: 10 11 12 13 14 15 16 17 18
node 2 size: 512 MB
node 3 cpus: 19
node 3 size: 512 MB

check memory and cpu info in guest:
# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
node 0 size: 4095 MB
node 0 free: 3287 MB
node distances:
node   0 
  0:  10
# numactl --show
policy: default
preferred node: current
physcpubind: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 
cpubind: 0 
nodebind: 0 
membind: 0

Comment 4 Eduardo Habkost 2012-07-20 13:35:46 UTC
*** Bug 816798 has been marked as a duplicate of this bug. ***

Comment 5 Ademar Reis 2012-08-02 16:00:01 UTC
*** Bug 825668 has been marked as a duplicate of this bug. ***

Comment 6 Eduardo Habkost 2012-10-08 21:41:36 UTC
Marking as TestOnly, to be tested once bug 733720 is fixed.

Comment 7 Eduardo Habkost 2012-11-04 20:14:40 UTC
Bug 733720 is ON_QA. Moving TestOnly BZs to ON_QA as well.

Comment 8 Shaolong Hu 2012-11-29 12:28:44 UTC
Tested with qemu-kvm-rhev-0.12.1.2-2.337.el6.x86_64:

1. Boot the guest with:
-smp 50,sockets=1,cores=50,threads=1 -numa node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1 -numa node,mem=1024,cpus=20-29,nodeid=2 -numa node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4

2. In the qemu monitor:
(qemu) info numa
5 nodes
node 0 cpus: 0 1 2 3 4 5 6 7 8 9
node 0 size: 1024 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19
node 1 size: 1024 MB
node 2 cpus: 20 21 22 23 24 25 26 27 28 29
node 2 size: 1024 MB
node 3 cpus: 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
node 3 size: 1024 MB
node 4 cpus: 49
node 4 size: 1024 MB

3. In the guest:
numactl --hardware
available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 49
node 0 size: 1023 MB
node 0 free: 648 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19
node 1 size: 1024 MB
node 1 free: 968 MB
node 2 cpus: 20 21 22 23 24 25 26 27 28 29
node 2 size: 1024 MB
node 2 free: 979 MB
node 3 cpus: 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
node 3 size: 1023 MB
node 3 free: 968 MB
node distances:
node   0   1   2   3 
  0:  10  20  20  20 
  1:  20  10  20  20 
  2:  20  20  10  20 
  3:  20  20  20  10

Comment 9 Eduardo Habkost 2012-12-03 13:18:59 UTC
The behavior you see above is what you get if you (incorrectly) pass -m 4096 on the qemu-kvm command line: the OS ignores the last node, because each node is given 1G and the first four nodes are already enough to cover the whole RAM.

If you use -m 5120, on the other hand, you should get the right behavior.

So, please show the full qemu-kvm command-line you used.
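
To make the arithmetic explicit: five nodes at 1024 MB each request 5 x 1024 = 5120 MB, but -m 4096 provides only 4096 MB. Nodes 0-3 already account for 4 x 1024 = 4096 MB, so node 4 is left with no backing RAM and the guest kernel drops it.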

Comment 10 Eduardo Habkost 2012-12-03 13:29:34 UTC
I confirm that the bug is reproducible only when using the (incorrect) command line:

/usr/libexec/qemu-kvm -m 4096 -smp 50,sockets=1,cores=50,threads=1 -numa node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1 -numa node,mem=1024,cpus=20-29,nodeid=2 -numa node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4 -vnc :0 -hda /var/lib/libvirt/images/rhel64.img

(it requests 5 nodes of 1024MB each, but only 4096MB of RAM, so there is not enough RAM for all nodes)

The problem can't be reproduced if the command-line is set so that the amount of RAM is enough for all nodes:

/usr/libexec/qemu-kvm -m 4096 -smp 50,sockets=1,cores=50,threads=1 -numa node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1 -numa node,mem=1024,cpus=20-29,nodeid=2 -numa node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4 -vnc :0 -hda /var/lib/libvirt/images/rhel64.img

Moving back to ON_QA.

Comment 11 Eduardo Habkost 2012-12-03 13:30:18 UTC
(In reply to comment #10)
> The problem can't be reproduced if the command-line is set so that the
> amount of RAM is enough for all nodes:
> 
> /usr/libexec/qemu-kvm -m 4096 -smp 50,sockets=1,cores=50,threads=1 -numa
> node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1
> -numa node,mem=1024,cpus=20-29,nodeid=2 -numa
> node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4 -vnc
> :0 -hda /var/lib/libvirt/images/rhel64.img

Copy&paste mistake. Right command-line is:
/usr/libexec/qemu-kvm -m 5120 -smp 50,sockets=1,cores=50,threads=1 -numa node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1 -numa node,mem=1024,cpus=20-29,nodeid=2 -numa node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4 -vnc :0 -hda /var/lib/libvirt/images/rhel64.img
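
A misconfiguration like this can be caught before booting by comparing the -m value against the sum of the -numa mem= values. A minimal sketch (assuming bash with GNU grep; "cmdline" is a hypothetical shell variable holding the full qemu-kvm command line):

# cmdline='-m 5120 -numa node,mem=1024,cpus=0-9,nodeid=0 -numa node,mem=1024,cpus=10-19,nodeid=1 -numa node,mem=1024,cpus=20-29,nodeid=2 -numa node,mem=1024,cpus=30-48,nodeid=3 -numa node,mem=1024,cpus=49,nodeid=4'
# total=$(grep -oP '(?<=-m )[0-9]+' <<<"$cmdline")
# node_sum=$(grep -oP '(?<=mem=)[0-9]+' <<<"$cmdline" | paste -sd+ - | bc)
# [ "$node_sum" -eq "$total" ] || echo "warning: -numa mem= sums to ${node_sum} MB but -m is ${total} MB"

With -m 4096 this prints the warning; with -m 5120 it prints nothing.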

Comment 12 Shaolong Hu 2012-12-04 03:06:14 UTC
Hi Eduardo, you are right. I retested and it works correctly. Thanks for the clarification; I will set this one to VERIFIED.

Comment 14 errata-xmlrpc 2013-02-21 07:34:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0527.html

