Description of problem:
Reserved a host that supports 160 CPUs, then booted a guest with 160 vCPUs but only a small amount of memory (e.g. 2G). The guest hits a call trace during boot, but it still boots up successfully and works well (network, mouse, keyboard, I/O).
Version-Release number of selected component (if applicable):
host info:
# uname -r && rpm -q qemu-kvm
2.6.32-424.el6.x86_64
qemu-kvm-0.12.1.2-2.414.el6.x86_64
guest info:
2.6.32-424.el6.x86_64
How reproducible:
100%
Steps to Reproduce:
1. Reserve a host that supports 160 CPUs, then boot a guest with 160 vCPUs but only a small amount of memory (e.g. 2G):
# /usr/libexec/qemu-kvm -M pc -S -cpu host -enable-kvm -m 2G -smp 160 \
    -no-kvm-pit-reinjection -usb -device usb-tablet,id=input0 -name sluo \
    -uuid 990ea161-6b67-47b2-b803-19fb01d30d30 \
    -rtc base=localtime,clock=host,driftfix=slew \
    -device virtio-serial-pci,id=virtio-serial0,max_ports=16,vectors=0,bus=pci.0,addr=0x3 \
    -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait \
    -device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port1 \
    -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait \
    -device virtserialport,chardev=channel2,name=com.redhat.rhevm.vdsm,bus=virtio-serial0.0,id=port2 \
    -drive file=/home/RHEL-6.5-Snapshot-4-Server-x86_64.qcow2,if=none,id=drive-virtio-disk,format=qcow2,cache=none,aio=native,werror=stop,rerror=stop \
    -device ide-drive,bus=ide.0,unit=0,drive=drive-virtio-disk,id=virtio-disk,bootindex=1 \
    -netdev tap,id=hostnet0,vhost=on,script=/etc/qemu-ifup \
    -device virtio-net-pci,netdev=hostnet0,id=virtio-net-pci0,mac=00:01:02:B6:40:21,bus=pci.0,addr=0x5 \
    -device virtio-balloon-pci,id=ballooning,bus=pci.0,addr=0x6 \
    -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 \
    -k en-us -boot menu=on -qmp tcp:0:4444,server,nowait \
    -serial unix:/tmp/ttyS0,server,nowait -vnc :1 \
    -spice disable-ticketing,port=5931 -monitor stdio
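Before launching, it may help to confirm that the host really exposes 160 logical CPUs and has enough free memory for `-m 2G`. A minimal pre-flight sketch (the 160-CPU threshold comes from this report; everything else is a generic check, not from the original):

```shell
#!/bin/sh
# Pre-flight check before reproducing: -smp 160 wants 160 host CPUs,
# and -m 2G wants at least 2 GiB of available memory.

cpus=$(nproc)                                            # logical CPUs on the host
free_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo) # available memory in KiB

echo "host CPUs: $cpus"
echo "available memory: $((free_kb / 1024)) MiB"

if [ "$cpus" -lt 160 ]; then
    echo "warning: fewer than 160 host CPUs; -smp 160 will overcommit vCPUs" >&2
fi
if [ "$free_kb" -lt $((2 * 1024 * 1024)) ]; then
    echo "warning: less than 2 GiB available; -m 2G may fail or swap" >&2
fi
```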
Actual results:
The guest hits a call trace during boot, but it still boots up successfully and works well (network, mouse, keyboard, I/O).
I will attach the boot logs later.
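One way to capture those boot logs is through the guest serial console, which the command above exposes at /tmp/ttyS0 via `-serial unix:...`. A sketch, assuming socat is installed (the log path and grep pattern are illustrative, not from the original report):

```shell
#!/bin/sh
# Stream the guest serial console to a file so the boot-time call trace
# is preserved; /tmp/ttyS0 matches the -serial option in the qemu command.
socat -u UNIX-CONNECT:/tmp/ttyS0 - | tee /tmp/guest-boot.log &

# After the guest has booted, pull out the trace (soft lockups print
# "Call Trace:" followed by the stack frames):
grep -A 20 "Call Trace" /tmp/guest-boot.log
```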
Expected results:
No call trace during the guest boot process.
Additional info:
# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 160
On-line CPU(s) list: 0-159
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 8
NUMA node(s): 8
Vendor ID: GenuineIntel
CPU family: 6
Model: 47
Stepping: 2
CPU MHz: 1066.000
BogoMIPS: 4799.97
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-9,80-89
NUMA node1 CPU(s): 10-19,90-99
NUMA node2 CPU(s): 20-29,100-109
NUMA node3 CPU(s): 30-39,110-119
NUMA node4 CPU(s): 40-49,120-129
NUMA node5 CPU(s): 50-59,130-139
NUMA node6 CPU(s): 60-69,140-149
NUMA node7 CPU(s): 70-79,150-159
My host has 160 physical CPUs and 1 TB of memory; the guest boots successfully without any call trace when started with 160 vCPUs + 900G memory.
The guest also hits a call trace when booted with 160 vCPUs + 10G memory, but those logs differ from the 160 vCPU + 2G case; I will provide them later as well.
The two call traces are the same; the messages only differ depending on whether the lockup lasts less than 120s (the 10G guest) or at least 120s (the 2G guest).
I think I have seen this bug before, but I cannot find it. I don't think this configuration is useful anyway: each CPU would have access to only about 10 MB in the 2G guest, which is less than the CPU cache size.
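The per-CPU memory figure above follows from simple arithmetic. A quick check (raw quotient only; the ~10 MB figure in the comment presumably also accounts for kernel reservations):

```shell
#!/bin/sh
# Raw guest memory available per vCPU: 2 GiB split across 160 vCPUs.
mem_mib=2048
vcpus=160
per_vcpu=$((mem_mib / vcpus))
echo "${per_vcpu} MiB per vCPU"   # 12 MiB raw, before kernel overhead
```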
(In reply to Ademar Reis from comment #6)
> WONTFIX for RHEL6, as it's not a real-world scenario. If you can reproduce
> it with RHEL7, please open a RHEL7 bug.
Hi, Sibiao
As Ademar said, please open a RHEL7 bug if this also exists in RHEL7. Thanks.
(In reply to Ademar Reis from comment #6)
> WONTFIX for RHEL6, as it's not a real-world scenario. If you can reproduce
> it with RHEL7, please open a RHEL7 bug.
I did not hit this issue in RHEL7 with 160 vCPUs + 1G memory: the guest boots up successfully without any call trace.
host info:
160 physical CPUs & 1TB memory
# uname -r && rpm -q qemu-kvm
3.10.0-121.el7.x86_64
qemu-kvm-1.5.3-60.el7.x86_64
guest info:
160 vCPU & 1GB memory
# uname -r
3.10.0-121.el7.x86_64
Best Regards,
sluo