Bug 804493 - Guest hangs and generates a vmcore after resume from S4 with attached virtio scsi disk
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.3
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Paolo Bonzini
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 720669 761491 804019 832177 846704 912287
 
Reported: 2012-03-19 05:15 UTC by Qunfang Zhang
Modified: 2014-06-05 22:02 UTC
CC: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-06-05 22:02:56 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
Guest vmcore log (29.56 KB, text/plain)
2012-03-19 05:16 UTC, Qunfang Zhang
vmcore-dmesg.txt of guest (31.89 KB, text/plain)
2013-08-13 07:46 UTC, langfang

Description Qunfang Zhang 2012-03-19 05:15:36 UTC
Description of problem:
Boot a RHEL 6.3 guest with a virtio scsi disk and attach another passthrough iSCSI disk. Do some read/write operations on the secondary disk, then suspend the guest to disk. After qemu-kvm quits, resume the guest with the same command line: the guest hangs, then restarts and generates a vmcore file.

Version-Release number of selected component (if applicable):
Host:
kernel-2.6.32-251.el6.x86_64
qemu-kvm-0.12.1.2-2.246.el6.x86_64
Guest:
kernel-2.6.32-251.el6.x86_64

How reproducible:
1/5

Steps to Reproduce:
1. Boot a guest with virtio scsi disk and a passthrough scsi disk:
/usr/libexec/qemu-kvm -M rhel6.3.0 -cpu Conroe -enable-kvm -m 4G -smp 2,sockets=1,cores=2,threads=1 -name rhel6.3 -uuid 4c84db67-faf8-4498-9829-19a3d6431d9d -rtc base=localtime,driftfix=slew -drive file=/home/rhel6.3-64.raw,if=none,format=raw,id=scsi0 -device virtio-scsi-pci,id=scsi0 -device scsi-hd,drive=scsi0 -netdev tap,id=hostnet0,script=/etc/qemu-ifup -device e1000,netdev=hostnet0,id=net0,mac=00:1a:2a:42:10:66,bus=pci.0 -usb -device usb-tablet,id=input0 -boot c -monitor stdio -qmp tcp:0:4444,server,nowait -chardev socket,id=charserial0,path=/tmp/qzhang-isa,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -spice port=5930,disable-ticketing -vga qxl -global qxl-vga.vram_size=33554432 -drive file=/dev/disk/by-path/ip-10.66.11.239:3260-iscsi-michen-lun-1-part1,if=none,id=iscsi -device scsi-block,drive=iscsi

2. Mount the secondary disk inside the guest and do some dd operations.

3. Suspend the guest to disk:
# pm-hibernate

4. Resume guest with the same command line.
  
Actual results:
The guest hangs for a while, then restarts automatically and generates a vmcore file.

Expected results:
The guest resumes successfully after several attempts.

Additional info:
bt log for the vmcore
crash> bt
PID: 2797   TASK: ffff8801193dd500  CPU: 1   COMMAND: "dbus-daemon-lau"
 #0 [ffff880118ddb940] machine_kexec at ffffffff810327ab
 #1 [ffff880118ddb9a0] crash_kexec at ffffffff810b8ef2
 #2 [ffff880118ddba70] oops_end at ffffffff814fb480
 #3 [ffff880118ddbaa0] die at ffffffff8100f26b
 #4 [ffff880118ddbad0] do_general_protection at ffffffff814fb012
 #5 [ffff880118ddbb00] general_protection at ffffffff814fa7e5
    [exception RIP: filp_close+49]
    RIP: ffffffff811765e1  RSP: ffff880118ddbbb8  RFLAGS: 00010206
    RAX: 420affffffffffff  RBX: ffff88011a02a080  RCX: ffff880118069d90
    RDX: fffffffffffdffff  RSI: ffff880118069d00  RDI: ffff88011a02a080
    RBP: ffff880118ddbbd8   R8: 0000000000000000   R9: 0000000000000000
    R10: 000000000000011a  R11: 0000000000000002  R12: ffff880118069d00
    R13: ffff880118069d80  R14: ffff880118069d80  R15: ffff880118069d08
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #6 [ffff880118ddbbe0] sys_close at ffffffff811766e5
 #7 [ffff880118ddbc10] setup_new_exec at ffffffff8117f97e
 #8 [ffff880118ddbc70] load_elf_binary at ffffffff811cf66b
 #9 [ffff880118ddbe40] search_binary_handler at ffffffff81180adb
#10 [ffff880118ddbeb0] do_execve at ffffffff81181c69
#11 [ffff880118ddbf20] sys_execve at ffffffff810095ea
#12 [ffff880118ddbf50] stub_execve at ffffffff8100b54a
    RIP: 00007f7af5ae0be7  RSP: 00007fff651ee578  RFLAGS: 00000206
    RAX: 000000000000003b  RBX: 00007f7af7890ec0  RCX: ffffffffffffffff
    RDX: 00007f7af789a540  RSI: 00007f7af788ff70  RDI: 00007f7af7894a00
    RBP: 00007fff651ee5c0   R8: 00007f7af6c5f7c0   R9: 0000000000000aec
    R10: 00007fff651ee300  R11: 0000000000000206  R12: 0000000000000020
    R13: 00007fff651ee5d0  R14: 00007fff651ee770  R15: 00007f7af788ff70
    ORIG_RAX: 000000000000003b  CS: 0033  SS: 002b

Comment 1 Qunfang Zhang 2012-03-19 05:16:56 UTC
Created attachment 571009 [details]
Guest vmcore log

Comment 4 RHEL Program Management 2012-07-10 06:27:19 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 5 RHEL Program Management 2012-07-10 23:35:04 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Comment 9 Paolo Bonzini 2013-05-21 12:58:00 UTC
Adding conditional dev-ack Reproducer.  If no reproducer is found, I'll close it as INSUFFICIENT_DATA.

Comment 10 Karen Noel 2013-08-09 13:29:32 UTC
Can this be reproduced?

Comment 11 Qunfang Zhang 2013-08-13 06:06:25 UTC
flang will help test this because she is executing the ACPI test run this week. Thanks, flang.

Comment 12 langfang 2013-08-13 07:45:24 UTC
Hit the same problem on the latest version; only tried once and hit it.

Host:
# uname -r 
2.6.32-410.el6.x86_64
# rpm -q qemu-kvm
qemu-kvm-0.12.1.2-2.387.el6.x86_64

Guest:
2.6.32-410.el6.x86_64


Steps:
1. Boot a guest with a virtio scsi disk and a passthrough scsi disk:
 /usr/libexec/qemu-kvm -M rhel6.3.0 -cpu Conroe -enable-kvm -m 4G -smp 2,sockets=1,cores=2,threads=1 -name rhel6.3 -uuid 4c84db67-faf8-4498-9829-19a3d6431d9d -rtc base=localtime,driftfix=slew -drive file=/home/rhel64-new.raw,if=none,format=raw,id=scsi0 -device virtio-scsi-pci,id=scsi0 -device scsi-hd,drive=scsi0 -netdev tap,id=hostnet0,script=/etc/qemu-ifup -device e1000,netdev=hostnet0,id=net0,mac=00:1a:2a:42:10:66,bus=pci.0 -usb -device usb-tablet,id=input0 -boot c -monitor stdio -qmp tcp:0:4444,server,nowait -chardev socket,id=charserial0,path=/tmp/qzhang-isa,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -spice port=5930,disable-ticketing -vga qxl -global qxl-vga.vram_size=33554432 -drive file=/dev/disk/by-path/ip-10.66.33.252\:3260-iscsi-langfang-lun-1,if=none,id=iscsi -device scsi-block,drive=iscsi


2. Mount the secondary disk inside the guest and do some dd operations:
#mount /dev/sdb /mnt
#cd /mnt
#dd if=/dev/zero of=lang.txt bs=50M count=40

3. Suspend the guest to disk:
#pm-hibernate

4. Resume guest with the same command line.
  
Actual results:
Waited about 10 minutes for the guest to resume; after resuming, the guest rebooted automatically and generated a vmcore file.
...
<4>Call Trace:
<4> [<ffffffff81181985>] sys_close+0xa5/0x100
<4> [<ffffffff8118c8ce>] setup_new_exec+0x20e/0x2e0
<4> [<ffffffff811dec3e>] load_elf_binary+0x3ce/0x1ab0
<4> [<ffffffff81141fd2>] ? follow_page+0x412/0x500
<4> [<ffffffff81147210>] ? __get_user_pages+0x110/0x430
<4> [<ffffffff811dcf7e>] ? load_misc_binary+0x9e/0x3f0
<4> [<ffffffff811475c9>] ? get_user_pages+0x49/0x50
<4> [<ffffffff8118da47>] search_binary_handler+0x137/0x370
<4> [<ffffffff8118dfb7>] do_execve+0x217/0x2c0
<4> [<ffffffff810095ea>] sys_execve+0x4a/0x80
<4> [<ffffffff8100b4ca>] stub_execve+0x6a/0xc0
<4>Code: ec 20 48 89 5d e8 4c 89 65 f0 4c 89 6d f8 0f 1f 44 00 00 48 8b 47 30 48 89 fb 49 89 f4 48 85 c0 74 4d 48 8b 47 20 48 85 c0 74 3f <48> 8b 40 68 48 85 c0 74 36 ff d0 41 89 c5 4c 89 e6 48 89 df e8 
<1>RIP  [<ffffffff81181881>] filp_close+0x31/0x90
<4> RSP <ffff88011cf8bbc8>

...

Comment 13 langfang 2013-08-13 07:46:55 UTC
Created attachment 786024 [details]
vmcore-dmesg.txt of guest

Comment 14 Ademar Reis 2014-06-05 22:02:56 UTC
S3/S4 support is tech-preview in RHEL6 and it'll be promoted to fully supported only in RHEL7.

Therefore we're closing all S3/S4 related bugs in RHEL6. New bugs will be considered only if they're regressions or break some important use-case or certification.

Please reopen with a justification if you believe this bug should not be closed. We'll consider such requests on a case-by-case basis, following a best-effort approach.

RHEL7 is being tested more extensively, and an effort from QE is underway to certify that this particular bug is not present there.

Thank you.

