Bug 1050893 - [virtio-win][vioscsi] BSOD (0xd1) occurs during S4 when guest is under load
Summary: [virtio-win][vioscsi] BSOD (0xd1) occurs during S4 when guest is under load
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: virtio-win
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Assignee: Vadim Rozenfeld
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: Virt-S3/S4-7.0
 
Reported: 2014-01-09 10:03 UTC by Mike Cao
Modified: 2015-11-23 03:37 UTC
CC: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-05-28 01:45:14 UTC
Target Upstream Version:
Embargoed:



Description Mike Cao 2014-01-09 10:03:23 UTC
Description of problem:


Version-Release number of selected component (if applicable):
virtio-win-prewhql-74


How reproducible:
Once (1 time)

Steps to Reproduce:
1. Start a VM with virtio-scsi-pci and 9 disks attached.
2. Run the crystal benchmark in the guest.
3. Do S4 (hibernate).

Actual results:
BSOD occurs


Expected results:
No BSOD occurs.


Additional info:
CLI:
/usr/libexec/qemu-kvm -M rhel6.5.0 -cpu host -enable-kvm -m 2G -smp 2,cores=2 \
  -name bcao_win-7-32-netkvm -uuid 884e673a-1b4a-4385-a522-b3cc35ef4e18 \
  -rtc base=localtime,clock=host,driftfix=slew \
  -device virtio-scsi-pci,id=scsi1 \
  -drive file=win2k8-R2.qcow2,if=none,media=disk,serial=aaabbbccc,werror=stop,rerror=stop,cache=none,format=qcow2,id=drive-disk0 \
  -device ide-drive,bootindex=1,drive=drive-disk0 \
  -drive file=test1.raw,if=none,media=disk,serial=aaabbbccc,werror=stop,rerror=stop,cache=none,format=raw,id=drive-disk1 \
  -device scsi-disk,drive=drive-disk1,id=disk1,bus=scsi1.0 \
  -drive file=test2.raw,if=none,media=disk,serial=aaabbbccc,werror=stop,rerror=stop,cache=none,format=raw,id=drive-disk2 \
  -device scsi-disk,drive=drive-disk2,id=disk2,bus=scsi1.0 \
  -drive file=test3.raw,if=none,media=disk,serial=aaabbbccc,werror=stop,rerror=stop,cache=none,format=raw,id=drive-disk3 \
  -device scsi-disk,drive=drive-disk3,id=disk3,bus=scsi1.0 \
  -drive file=test4.raw,if=none,media=disk,serial=aaabbbccc,werror=stop,rerror=stop,cache=writethrough,format=raw,id=drive-disk4 \
  -device scsi-disk,drive=drive-disk4,id=disk4,bus=scsi1.0 \
  -drive file=test5.raw,if=none,media=disk,serial=aaabbbccc,werror=stop,rerror=stop,cache=writethrough,format=raw,id=drive-disk5 \
  -device scsi-disk,drive=drive-disk5,id=disk5,bus=scsi1.0 \
  -drive file=test6.raw,if=none,media=disk,serial=aaabbbccc,werror=stop,rerror=stop,cache=writethrough,format=raw,id=drive-disk6 \
  -device scsi-disk,drive=drive-disk6,id=disk6,bus=scsi1.0 \
  -drive file=test7.raw,if=none,media=disk,serial=aaabbbccc,werror=stop,rerror=stop,cache=writeback,format=raw,id=drive-disk7 \
  -device scsi-disk,drive=drive-disk7,id=disk7,bus=scsi1.0 \
  -drive file=test8.raw,if=none,media=disk,serial=aaabbbccc,werror=stop,rerror=stop,cache=writeback,format=raw,id=drive-disk8 \
  -device scsi-disk,drive=drive-disk8,id=disk8,bus=scsi1.0 \
  -drive file=test9.raw,if=none,media=disk,serial=aaabbbccc,werror=stop,rerror=stop,cache=writeback,format=raw,id=drive-disk9 \
  -device scsi-disk,drive=drive-disk9,id=disk9,bus=scsi1.0 \
  -vnc :2 -vga cirrus -monitor stdio -usb -device usb-tablet \
  -global PIIX4_PM.disable_s4=0

Comment 1 Mike Cao 2014-01-09 10:04:12 UTC
1: kd> !analyze -v
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

DRIVER_IRQL_NOT_LESS_OR_EQUAL (d1)
An attempt was made to access a pageable (or completely invalid) address at an
interrupt request level (IRQL) that is too high.  This is usually
caused by drivers using improper addresses.
If kernel debugger is available get stack backtrace.
Arguments:
Arg1: 000000000006a583, memory referenced
Arg2: 0000000000000002, IRQL
Arg3: 0000000000000008, value 0 = read operation, 1 = write operation
Arg4: 000000000006a583, address which referenced memory

Debugging Details:
------------------


READ_ADDRESS:  000000000006a583 

CURRENT_IRQL:  2

FAULTING_IP: 
+0
00000000`0006a583 ??              ???

PROCESS_NAME:  System

DEFAULT_BUCKET_ID:  WIN7_DRIVER_FAULT

BUGCHECK_STR:  0xD1

TRAP_FRAME:  fffff880009cbab0 -- (.trap 0xfffff880009cbab0)
NOTE: The trap frame does not contain all registers.
Some register values may be zeroed or incorrect.
rax=fffff880009cbbb8 rbx=0000000000000000 rcx=fffffa80023cf010
rdx=fffffa80023cf010 rsi=0000000000000000 rdi=0000000000000000
rip=000000000006a583 rsp=fffff880009cbc48 rbp=fffffa8001ab6060
 r8=fffff880009cbbb8  r9=0000000000000020 r10=fffffa8001a99bc0
r11=0000000000000002 r12=0000000000000000 r13=0000000000000000
r14=0000000000000000 r15=0000000000000000
iopl=0         nv up ei ng nz na po nc
00000000`0006a583 ??              ???
Resetting default scope

LAST_CONTROL_TRANSFER:  from fffff800014d0be9 to fffff800014d1640

FAILED_INSTRUCTION_ADDRESS: 
+0
00000000`0006a583 ??              ???

STACK_TEXT:  
fffff880`009cb968 fffff800`014d0be9 : 00000000`0000000a 00000000`0006a583 00000000`00000002 00000000`00000008 : nt!KeBugCheckEx
fffff880`009cb970 fffff800`014cf860 : fffff800`014cd063 00000000`00000009 00000000`00001000 fffffa80`01c60030 : nt!KiBugCheckDispatch+0x69
fffff880`009cbab0 00000000`0006a583 : fffff880`01202553 00000000`00000000 00000000`000000fa 00000000`ffffffed : nt!KiPageFault+0x260
fffff880`009cbc48 fffff880`01202553 : 00000000`00000000 00000000`000000fa 00000000`ffffffed fffffa80`01ab6060 : 0x6a583
fffff880`009cbc50 fffff800`014dcb1c : fffff880`009b8180 00000000`0000002a fffffa80`01ab6128 00000000`00000031 : storport!RaidpAdapterDpcRoutine+0x53
fffff880`009cbc90 fffff800`014c936a : fffff880`009b8180 fffff880`009c2f40 00000000`00000000 fffff880`01202500 : nt!KiRetireDpcList+0x1bc
fffff880`009cbd40 00000000`00000000 : fffff880`009cc000 fffff880`009c6000 fffff880`009cbd00 00000000`00000000 : nt!KiIdleLoop+0x5a


STACK_COMMAND:  kb

FOLLOWUP_IP: 
storport!RaidpAdapterDpcRoutine+53
fffff880`01202553 ff442448        inc     dword ptr [rsp+48h]

SYMBOL_STACK_INDEX:  4

SYMBOL_NAME:  storport!RaidpAdapterDpcRoutine+53

FOLLOWUP_NAME:  MachineOwner

MODULE_NAME: storport

IMAGE_NAME:  storport.sys

DEBUG_FLR_IMAGE_TIMESTAMP:  4ce7a456

FAILURE_BUCKET_ID:  X64_0xD1_CODE_AV_BAD_IP_storport!RaidpAdapterDpcRoutine+53

BUCKET_ID:  X64_0xD1_CODE_AV_BAD_IP_storport!RaidpAdapterDpcRoutine+53

Followup: MachineOwner
---------

Comment 7 Ronen Hod 2014-08-07 08:18:01 UTC
QE, please check with build 88; this might be fixed.

Comment 8 Mike Cao 2014-08-11 13:01:31 UTC
lijin, please help handle this needinfo request.

Comment 9 lijin 2014-08-14 05:19:44 UTC
Cannot reproduce this issue. Tested more than 5 times with both virtio-win-prewhql-74 and virtio-win-prewhql-89; the guest can do S4 and resume successfully each time.

package info:
qemu-kvm-rhev-1.5.3-60.el7ev_0.2.x86_64
kernel-3.10.0-133.el7.x86_64
seabios-1.7.2.2-12.el7.x86_64

Comment 10 Mike Cao 2014-08-14 05:24:46 UTC
(In reply to Ronen Hod from comment #7)
> QE, please check with build-88. might be fixed.


Ronen, since we only hit the original bug once, comment #9 cannot fully prove this issue has been fixed. It would be better to have a developer review the dump file first, paste the dump analysis into the bug comments, and check whether there are potentially unfixed issues, instead of simply closing it.

Comment 14 Vadim Rozenfeld 2015-05-28 01:45:14 UTC
Closing this bug as WONTFIX for the following reasons:
- We are not going to support S3/S4 except for WHQL testing;
- The bug should already be fixed by the fix related to bz#1195920.

