Bug 1040735

Summary: [svvp][qemu-kvm] Guest hangs when running the Disk Stress (Logo) job if guest memory is much larger than host memory
Product: Red Hat Enterprise Linux 6
Reporter: Mike Cao <bcao>
Component: qemu-kvm
Assignee: Vadim Rozenfeld <vrozenfe>
Status: CLOSED CURRENTRELEASE
QA Contact: Virtualization Bugs <virt-bugs>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 6.6
CC: bcao, chayang, juzhang, michen, mkenneth, qzhang, rbalakri, rpacheco, virt-maint
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-11-05 07:19:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Mike Cao 2013-12-12 02:14:54 UTC
Description of problem:
I want to start a guest with -smp 64 and 1 TB of memory for the SVVP test, but we do not have hosts that large. Ronen suggested: "Since the 64/1011 refer to the guest size, you can try to run such a guest on the largest machine that you do have. The host will need a large swap partition, preferably a fast one (maybe several SSDs will do a good job)." The following tests were run under this scenario.
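
A rough host-side sizing check before attempting this (a sketch I am adding, not part of the original report; the variable names are mine): the guest's memory must fit into host RAM plus swap, with some headroom for qemu's own overhead.

   guest_gb=48                                    # assumed guest size, from step 3 below
   host_gb=$(free -g | awk '/^Mem:/ {print $2}')  # total host RAM in GB
   swap_gb=$(free -g | awk '/^Swap:/ {print $2}') # total swap in GB
   echo "need >= $((guest_gb - host_gb)) GB of swap, have ${swap_gb} GB"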

Version-Release number of selected component (if applicable):
qemu-kvm-0.12.1.2-2.415.el6.x86_64
seabios-0.6.1.2-28.el6.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Find two 60 GB SSDs on the host, assemble them into a RAID0 array /dev/md0, and turn /dev/md0 into a swap partition (a verification sketch follows the steps below):
   # swapoff -a
   # modprobe raid456
   # mdadm --create --verbose /dev/md0 --level=stripe --raid-devices=2 /dev/sdb1 /dev/sdc1 -c 4096
   # mkswap /dev/md0
   # swapon /dev/md0
2. Check host memory and swap:
   # free -g
             total       used       free     shared    buffers     cached
Mem:             7          7          0          0          0          0
-/+ buffers/cache:          7          0
Swap:          111         40         71
3. Start a VM with 48 GB of memory (Windows Server 2012 R2):
/usr/libexec/qemu-kvm -M rhel6.5.0 -cpu Penryn -enable-kvm -m 48G \
    -smp 4,cores=4 -name bcao_svvp -uuid 970a2c71-2366-4069-80e5-17ac5f634648 \
    -rtc base=localtime,clock=host,driftfix=slew \
    -drive file=win2k12-R2-Raw,if=none,media=disk,serial=aaabbbccc,werror=stop,rerror=stop,cache=none,format=raw,id=drive-disk0 \
    -device ide-drive,drive=drive-disk0,id=disk0,bootindex=1 \
    -drive file=/usr/share/virtio-win/virtio-win.iso,media=cdrom,if=none,id=bb \
    -device ide-drive,id=bb1,drive=bb \
    -netdev tap,vhost=on,id=netdev0 \
    -device e1000,netdev=netdev0,id=nic1,mac=00:52:11:22:37:49 \
    -vnc :1 -vga cirrus -usb -device usb-tablet,id=tablet0 \
    -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 \
    -monitor stdio
4. Run the Disk Stress (Logo) job.
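
Before launching the guest in step 3, the array-backed swap from step 1 can be confirmed; a quick check (my sketch, using the device names from step 1):
   # mdadm --detail /dev/md0    # array should be clean/active with both SSD members
   # cat /proc/mdstat           # raid0 stripe over sdb1 and sdc1
   # swapon -s                  # /dev/md0 should be listed as active swap
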
Actual results:
The qemu-kvm process hangs with:
(qemu) *** glibc detected *** /usr/libexec/qemu-kvm: corrupted double-linked list: 0x00007ff198b65000 ***
*** glibc detected *** /usr/libexec/qemu-kvm: corrupted double-linked list: 0x00007ff198b65000 ***
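
One way to see where the hung process is stuck (my suggestion, not a step from the original report) is to attach gdb and dump all thread backtraces:
   # gdb -p "$(pidof qemu-kvm)" -batch -ex "thread apply all bt"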

Expected results:
No hang occurs.

Additional info:
I think heavy load on the block device can also reproduce it.

Comment 2 Ademar Reis 2014-04-16 19:13:23 UTC
Possibly related: Bug 1040002

Comment 4 Vadim Rozenfeld 2015-01-16 05:27:08 UTC
Hi Mike,
Is this still an issue in RHEL 6.7?

Thanks,
Vadim.

Comment 5 Mike Cao 2015-02-15 07:46:29 UTC
Sorry for the late response. I still have not gotten the testing environment from xigao.

Setting needinfo on xigao until she can provide the testing environment.

Comment 6 Mike Cao 2015-02-16 02:20:06 UTC
Retested this issue on kernel 2.6.32-530.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.452.el6.x86_64
virtio-win-1.7.2-2.el6.noarch


Steps are the same as in comment #0.

Actual Results:
Cannot hit the original issue. The job passes and qemu-kvm does not quit.

Comment 8 Vadim Rozenfeld 2015-11-05 07:19:20 UTC
Closing this bug based on comment #6.