Bug 846533 - [scsi] Guest BSOD during S4 while running crystal benchmark
Summary: [scsi] Guest BSOD during S4 while running crystal benchmark
Keywords:
Status: CLOSED DUPLICATE of bug 846519
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: seabios
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Paolo Bonzini
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 896495
 
Reported: 2012-08-08 05:06 UTC by Mike Cao
Modified: 2015-11-23 03:36 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-03-12 07:09:12 UTC
Target Upstream Version:
Embargoed:



Description Mike Cao 2012-08-08 05:06:41 UTC
Description of problem:


Version-Release number of selected component (if applicable):
2.6.32-294.el6.x86_64
qemu-kvm-0.12.1.2-2.302.el6.x86_64
seabios-0.6.1.2-19.el6.x86_64

How reproducible:
2/2

Steps to Reproduce:
1. Start a VM with virtio-scsi (see the trimmed sketch after these steps), e.g.:
/usr/libexec/qemu-kvm -boot dc -m 4G -smp 2 -cpu Westmere -usb -device usb-tablet -netdev tap,sndbuf=0,id=hostnet2,script=/etc/qemu-ifup,downscript=no -device e1000,netdev=hostnet2,mac=00:52:13:20:F5:22,bus=pci.0,addr=0x6 -uuid 7976cd92-6557-493d-86a3-7e2055a2d4cd -no-kvm-pit-reinjection -monitor stdio -rtc base=localtime,clock=host,driftfix=slew -device virtio-scsi-pci,id=bus1 -drive file=/home/win2k8-64.qcow2,if=none,media=disk,format=qcow2,rerror=stop,werror=stop,cache=writeback,aio=native,id=scsi-disk0 -device scsi-disk,drive=scsi-disk0,id=disk,bus=bus1.0,serial=miketest -vnc :3 -vga cirrus -fda /home/virtio-win.vfd -bios /usr/share/seabios/bios-pm.bin -drive file=/hotadd.qcow2,if=none,werror=stop,readonly=on,cache=none,id=drive-hotadd -device virtio-scsi-pci,id=scsi-hotadd -device scsi-hd,drive=drive-hotadd,id=hotadd,bus=scsi-hotadd.0 -drive file=/home/hotadd2.qcow2,if=none,werror=stop,cache=none,id=drive-hotadd2,readonly=on -device virtio-scsi-pci,id=scsi-hotadd2 -device scsi-hd,drive=drive-hotadd2,id=hotadd2,bus=scsi-hotadd2.0
2. Run the crystal benchmark in the guest.
3. During step 2, trigger S4 (suspend-to-disk) in the guest.
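
For readability, here is a trimmed sketch of the virtio-scsi-relevant portion of the full command in step 1, with the unrelated NIC, USB, and hotadd devices elided (same paths and IDs as above; a sketch, not a verified minimal reproducer):

/usr/libexec/qemu-kvm -m 4G -smp 2 \
    -device virtio-scsi-pci,id=bus1 \
    -drive file=/home/win2k8-64.qcow2,if=none,format=qcow2,rerror=stop,werror=stop,cache=writeback,aio=native,id=scsi-disk0 \
    -device scsi-disk,drive=scsi-disk0,id=disk,bus=bus1.0,serial=miketest \
    -bios /usr/share/seabios/bios-pm.bin

One common way to trigger S4 from inside a Windows guest is to run "shutdown /h" in an elevated command prompt.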
  
Actual results:
Guest BSOD

Expected results:
No BSOD occurs.

Additional info:
I did not report this against the virtio-win component because the dump showed that storport.sys caused the BSOD, and storport.sys does not belong to virtio-win.

Comment 1 Mike Cao 2012-08-08 05:07:58 UTC
0: kd> !analyze -v
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

DRIVER_POWER_STATE_FAILURE (9f)
A driver has failed to complete a power IRP within a specific time (usually 10 minutes).
Arguments:
Arg1: 0000000000000004, The power transition timed out waiting to synchronize with the Pnp
	subsystem.
Arg2: 0000000000000258, Timeout in seconds.
Arg3: fffffa80036b5040, The thread currently holding on to the Pnp lock.
Arg4: 0000000000000000, nt!TRIAGE_9F_PNP on Win7
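
(Note: Arg2 0x258 hex = 600 decimal seconds, i.e. the 10-minute power-IRP timeout described above.)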

Debugging Details:
------------------

Implicit thread is now fffffa80`036b5040

DRVPOWERSTATE_SUBCODE:  4

FAULTING_THREAD:  fffffa80036b5040

DEFAULT_BUCKET_ID:  VISTA_DRIVER_FAULT

BUGCHECK_STR:  0x9F

PROCESS_NAME:  System

CURRENT_IRQL:  2

LOCK_ADDRESS:  fffff8000185ec40 -- (!locks fffff8000185ec40)

Resource @ nt!PiEngineLock (0xfffff8000185ec40)    Exclusively owned
    Contention Count = 5
    NumberOfExclusiveWaiters = 2
     Threads: fffffa80036b5040-01<*> 
     Threads Waiting On Exclusive Access:
              fffffa80036b4040       fffffa80036b4bb0       

1 total locks, 1 locks currently held

PNP_TRIAGE: 
	Lock address  : 0xfffff8000185ec40
	Thread Count  : 1
	Thread address: 0xfffffa80036b5040
	Thread wait   : 0x553f

LAST_CONTROL_TRANSFER:  from fffff800016c06fa to fffff800016c0c7f

STACK_TEXT:  
fffffa60`017f5080 fffff800`016c06fa : fffffa60`017f52f0 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KiSwapContext+0x7f
fffffa60`017f51c0 fffff800`016b535b : fffffa60`00ced110 fffffa60`00ce70f7 fffffa80`0477e600 fffffa80`00000008 : nt!KiSwapThread+0x13a
fffffa60`017f5230 fffffa60`00cd7f51 : fffffa80`00000000 fffffa80`00000000 00000000`00000000 00001f80`72536100 : nt!KeWaitForSingleObject+0x2cb
fffffa60`017f52c0 fffffa60`00cd9770 : fffffa80`04680b20 00000000`00000002 00000000`00000001 00000000`00000010 : storport!RaSendIrpSynchronous+0x71
fffffa60`017f5320 fffffa60`00cddca5 : 00000000`00000000 fffffa60`017f56f0 fffffa60`017f56f0 fffffa80`04680b20 : storport!RaidBusEnumeratorIssueSynchronousRequest+0xa0
fffffa60`017f5420 fffffa60`00cdddcb : fffffa80`04680b20 00000000`00000000 fffffa60`017f56f0 00000000`00000000 : storport!RaidBusEnumeratorIssueReportLuns+0x65
fffffa60`017f5470 fffffa60`00ce8661 : fffffa80`039e61b0 00000000`00000000 fffffa60`017f5590 00000000`00000000 : storport!RaidBusEnumeratorGetLunListFromTarget+0x9b
fffffa60`017f54e0 fffffa60`00ce8794 : fffffa60`005ec7f0 fffffa80`039e56f0 00000000`00000000 00000000`00000000 : storport!RaidBusEnumeratorGetLunList+0x61
fffffa60`017f5560 fffffa60`00ce888d : fffffa80`039a31b0 fffffa60`00ced110 fffffa80`03bccc60 fffffa80`039a31b0 : storport!RaidAdapterEnumerateBus+0x94
fffffa60`017f56d0 fffffa60`00d2717e : 00000000`00000001 00000000`c00000bb fffffa60`00fd2110 fffffa80`03bccc60 : storport!RaidAdapterRescanBus+0x7d
fffffa60`017f5790 fffffa60`00d2734e : fffffa80`039a31b0 00000000`00000000 fffffa80`03bccc60 00000000`00000007 : storport!RaidAdapterQueryDeviceRelationsIrp+0xae
fffffa60`017f57d0 fffffa60`00d275d5 : fffffa80`03bccc60 fffff800`0176ea60 00000000`00000001 fffffa80`03bccc60 : storport!RaidAdapterPnpIrp+0xee
fffffa60`017f5830 fffff800`019f7211 : fffffa80`03bccc60 fffffa80`0476e5e0 fffffa80`039a3060 fffffa80`039e56f0 : storport!RaDriverPnpIrp+0x95
fffffa60`017f5870 fffff800`019f9970 : fffffa80`0366ede0 fffff800`0176ea60 fffffa80`03972060 fffff800`018b8ee1 : nt!PnpAsynchronousCall+0xd1
fffffa60`017f58b0 fffff800`01a40d80 : fffffa80`03973520 fffffa80`03973520 00000000`00000002 00000000`00000000 : nt!PnpQueryDeviceRelations+0xd0
fffffa60`017f5960 fffff800`01a9979c : fffffa80`03973520 fffffa80`0366ede0 fffffa80`0366ede0 00000000`00000002 : nt!PipEnumerateDevice+0x120
fffffa60`017f59c0 fffff800`01a99d3a : 00000000`00000000 fffff800`017f5800 fffffa60`005efcc0 00000000`00000001 : nt!PipProcessDevNodeTree+0x21c
fffffa60`017f5c30 fffff800`01791e3d : fffff801`00000003 fffffa80`04c587e0 00000000`00000000 00000000`32706e50 : nt!PiProcessReenumeration+0x8a
fffffa60`017f5c80 fffff800`016c58c3 : fffff800`01791c10 fffffa80`036b5001 fffff800`017f58f8 00000000`00000000 : nt!PnpDeviceActionWorker+0x22d
fffffa60`017f5cf0 fffff800`018c8f37 : fffff800`0185c5c0 01c85a83`a48bffa9 fffffa80`036b5040 00000000`00000080 : nt!ExpWorkerThread+0xfb
fffffa60`017f5d50 fffff800`016fb616 : fffffa60`005ec180 fffffa80`036b5040 fffffa60`005f5d40 fffffa80`036b6818 : nt!PspSystemThreadStartup+0x57
fffffa60`017f5d80 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KxStartSystemThread+0x16


STACK_COMMAND:  .thread 0xfffffa80036b5040 ; kb

FOLLOWUP_IP: 
storport!RaSendIrpSynchronous+71
fffffa60`00cd7f51 33c0            xor     eax,eax

SYMBOL_STACK_INDEX:  3

SYMBOL_NAME:  storport!RaSendIrpSynchronous+71

FOLLOWUP_NAME:  MachineOwner

MODULE_NAME: storport

IMAGE_NAME:  storport.sys

DEBUG_FLR_IMAGE_TIMESTAMP:  49e02bf5

FAILURE_BUCKET_ID:  X64_0x9F_4_storport!RaSendIrpSynchronous+71

BUCKET_ID:  X64_0x9F_4_storport!RaSendIrpSynchronous+71

Followup: MachineOwner
---------

Comment 4 Mike Cao 2012-08-08 05:51:08 UTC
Tried with cache=none; does not hit this issue.
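
For clarity, the difference from the reproducer in comment 0 is only the cache mode on the system drive; the other drive options are unchanged (elided here with "..."):

-drive file=/home/win2k8-64.qcow2,...,cache=writeback,...,id=scsi-disk0   (BSOD reproduced, comment 0)
-drive file=/home/win2k8-64.qcow2,...,cache=none,...,id=scsi-disk0        (no BSOD, this comment)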

Comment 7 Gal Hammer 2013-03-11 12:08:02 UTC
Hi Mike,

Could this be a duplicate of bz#846519? Does the new SeaBIOS work for this one as well?

Thanks, Gal.

Comment 8 dawu 2013-03-12 06:34:37 UTC
(In reply to comment #7)
> Hi Mike,
> 
> Could this be a duplicate of bz#846519? Does the new SeaBIOS work
> for this one as well?
> 
> Thanks, Gal.

Hi Gal,
Yes, it should be related to bz#846912. Tried with the new SeaBIOS from https://bugzilla.redhat.com/show_bug.cgi?id=912561, and this issue is gone.

Thanks for your reminder.

Best Regards,
Dawn
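
For reference, the SeaBIOS build actually in use can be confirmed on the host before retesting with a couple of standard RPM queries (package and path as in comment 0):

rpm -q seabios
rpm -qf /usr/share/seabios/bios-pm.bin

The second query reports which seabios package owns the bios-pm.bin image passed via -bios in the reproducer.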

Comment 9 Mike Cao 2013-03-12 07:09:12 UTC

*** This bug has been marked as a duplicate of bug 912561 ***

Comment 10 Paolo Bonzini 2013-03-12 12:11:54 UTC
Closing as a duplicate of the right bug.

*** This bug has been marked as a duplicate of bug 846519 ***

