Bug 989439 - QEMU core dumped when boot guest with vcpus that is bigger than allowed
Summary: QEMU core dumped when boot guest with vcpus that is bigger than allowed
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Virtualization Maintenance
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Duplicates: 990222
Depends On:
Blocks:
 
Reported: 2013-07-29 09:26 UTC by Sibiao Luo
Modified: 2013-07-31 07:38 UTC
CC: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-29 13:20:24 UTC
Target Upstream Version:
Embargoed:



Description Sibiao Luo 2013-07-29 09:26:57 UTC
Description of problem:
RHEL 6 supports a maximum of 160 vCPUs, but if a guest is booted with more vCPUs than allowed, QEMU core dumps.
RHEL 7 supports a maximum of 255; if a guest is booted with 256 vCPUs, QEMU exits cleanly with the warning 'Unsupported number of maxcpus'.

Version-Release number of selected component (if applicable):
host info:
2.6.32-402.el6.x86_64
qemu-kvm-0.12.1.2-2.381.el6.x86_64
seabios-0.6.1.2-28.el6.x86_64
guest info:
2.6.32-402.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot the guest with more vCPUs than the allowed maximum (160):
# /usr/libexec/qemu-kvm -S -M rhel6.5.0 -cpu SandyBridge -enable-kvm -m 4096 -smp 161 -no-kvm-pit-reinjection...

Actual results:
After step 1, QEMU core dumps:
kvm_create_vcpu: Invalid argument
Failed to create vCPU. Check the -smp parameter.
Aborted (core dumped)

Core was generated by `/usr/libexec/qemu-kvm -S -M rhel6.5.0 -cpu SandyBridge -enable-kvm -m 4096 -smp'.
Program terminated with signal 6, Aborted.
#0  0x00007f5fc5a558a5 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00007f5fc5a558a5 in raise () from /lib64/libc.so.6
#1  0x00007f5fc5a57085 in abort () from /lib64/libc.so.6
#2  0x00007f5fc8142e5b in kvm_create_vcpu (_env=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:475
#3  ap_main_loop (_env=<value optimized out>) at /usr/src/debug/qemu-kvm-0.12.1.2/qemu-kvm.c:2026
#4  0x00007f5fc7a75851 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f5fc5b0b90d in clone () from /lib64/libc.so.6
(gdb)

Expected results:
It should print a friendly message instead, such as: 'Unsupported number of maxcpus'.

Additional info:
# /usr/libexec/qemu-kvm -S -M rhel6.5.0 -cpu SandyBridge -enable-kvm -m 4096 -smp 161 -no-kvm-pit-reinjection -name sluo -uuid 43425b70-86e5-4664-bf2c-3b76699a8aec -rtc base=localtime,clock=host,driftfix=slew -device virtio-serial-pci,id=virtio-serial0,max_ports=16,vectors=0,bus=pci.0,addr=0x3 -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait -device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm.1,bus=virtio-serial0.0,id=port1,nr=1 -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait -device virtserialport,chardev=channel2,name=com.redhat.rhevm.vdsm.2,bus=virtio-serial0.0,id=port2,nr=2 -drive file=/dev/vg/system-disk.raw,if=none,id=drive-system-disk,format=raw,cache=none,aio=native,werror=stop,rerror=stop,serial="QEMU-DISK1" -device virtio-scsi-pci,bus=pci.0,addr=0x4,id=scsi0 -device scsi-hd,drive=drive-system-disk,id=system-disk,bootindex=1 -device virtio-balloon-pci,id=ballooning,bus=pci.0,addr=0x5 -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -netdev tap,id=hostnet0,vhost=off,script=/etc/qemu-ifup -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=2C:41:38:B6:32:21,bus=pci.0,addr=0x6,bootindex=2 -drive file=/dev/vg/my-data-disk.raw,if=none,id=drive-data-disk,format=raw,media=disk,cache=none,aio=native,werror=stop,rerror=stop,serial="QEMU-DISK2" -device virtio-scsi-pci,bus=pci.0,addr=0x7,id=scsi1 -device scsi-hd,drive=drive-data-disk,id=data-disk,bootindex=3,bus=scsi1.0 -k en-us -boot menu=on -vnc :1 -spice port=5931,disable-ticketing -qmp tcp:0:4444,server,nowait -monitor stdio

Comment 1 Sibiao Luo 2013-07-29 09:28:29 UTC
(In reply to Sibiao Luo from comment #0)
> Description of problem:
> RHEL 6 supports a maximum of 160 vCPUs, but if a guest is booted with more
> vCPUs than allowed, QEMU core dumps.
If the guest is booted with exactly 160 vCPUs, it boots up successfully and both host and guest work well.
> RHEL 7 supports a maximum of 255; if a guest is booted with 256 vCPUs, QEMU
> exits cleanly with the warning 'Unsupported number of maxcpus'.
>

Comment 6 Peter Krempa 2013-07-31 07:38:34 UTC
*** Bug 990222 has been marked as a duplicate of this bug. ***

