Bug 1013343 - [WHQL][netkvm] BSOD (0A) occurs when running Common Scenario Stress with IO job on win2k8-64 guest
Summary: [WHQL][netkvm] BSOD (0A) occurs when running Common Scenario Stress with IO job on win2k8-64 guest
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: virtio-win
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Dmitry Fleytman
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-09-29 12:56 UTC by Mike Cao
Modified: 2015-11-23 03:37 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-11-06 08:36:44 UTC
Target Upstream Version:
Embargoed:



Description Mike Cao 2013-09-29 12:56:39 UTC
Description of problem:
BSOD (0A) occurs in a win2k8-64 guest while running the WHQL Common Scenario Stress with IO job; the stop code is IRQL_NOT_LESS_OR_EQUAL (see comment 3).

Version-Release number of selected component (if applicable):
2.6.32-420.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.405.el6.x86_64
seabios-0.6.1.2-28.el6.x86_64
spice-server-0.12.4-3.el6.x86_64
virtio-win-prewhql-72

How reproducible:
3/3

Steps to Reproduce:
1. /usr/libexec/qemu-kvm -M rhel6.5.0 -m 6G -smp 4,cores=4 -cpu cpu64-rhel6 \
   -usb -device usb-tablet \
   -drive file=win2k8-64-nic1.raw,if=none,id=drive-ide0-0-0,werror=stop,rerror=stop,cache=none \
   -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
   -netdev tap,sndbuf=0,id=hostnet0,vhost=on,script=/etc/qemu-ifup-private,downscript=no \
   -device virtio-net-pci,netdev=hostnet0,mac=00:42:32:22:23:42,bus=pci.0,addr=0x4,id=virtio-net-pci0,ctrl_guest_offloads=on \
   -netdev tap,sndbuf=0,id=hostnet1,vhost=on,script=/etc/qemu-ifup-private,downscript=no \
   -device virtio-net-pci,netdev=hostnet1,mac=00:22:22:42:23:32,bus=pci.0,addr=0x5,id=virtio-net-pci1,ctrl_guest_offloads=on \
   -netdev tap,sndbuf=0,id=hostnet2,script=/etc/qemu-ifup,downscript=no \
   -device e1000,netdev=hostnet2,mac=00:13:43:21:a4:21,bus=pci.0,addr=0x6 \
   -uuid b1ec9271-1036-435b-9e69-f65f0d9c974a -no-kvm-pit-reinjection \
   -chardev socket,id=111a,path=/tmp/monitor-win2k8-64-72-nic1,server,nowait \
   -mon chardev=111a,mode=readline -vnc :11 -vga cirrus -name win2k8-64-nic1-72-WLK \
   -rtc base=localtime,clock=host,driftfix=slew \
   -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -monitor stdio
2. Run the Common Scenario Stress with IO job.


Actual results:
The guest hits a BSOD (stop 0x0000000A, IRQL_NOT_LESS_OR_EQUAL; see comment 3).

Expected results:
The job passes without a BSOD.


Additional info:

Comment 3 Mike Cao 2013-09-29 13:08:35 UTC
1: kd> !analyze -v
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

IRQL_NOT_LESS_OR_EQUAL (a)
An attempt was made to access a pageable (or completely invalid) address at an
interrupt request level (IRQL) that is too high.  This is usually
caused by drivers using improper addresses.
If a kernel debugger is available get the stack backtrace.
Arguments:
Arg1: 0000000000000010, memory referenced
Arg2: 000000000000000c, IRQL
Arg3: 0000000000000000, bitfield :
	bit 0 : value 0 = read operation, 1 = write operation
	bit 3 : value 0 = not an execute operation, 1 = execute operation (only on chips which support this level of status)
Arg4: fffff800016cfe94, address which referenced memory

Debugging Details:
------------------


READ_ADDRESS:  0000000000000010 

CURRENT_IRQL:  0

FAULTING_IP: 
nt!IopCompleteRequest+b74
fffff800`016cfe94 4c8b4910        mov     r9,qword ptr [rcx+10h]

DEFAULT_BUCKET_ID:  VISTA_DRIVER_FAULT

BUGCHECK_STR:  0xA

PROCESS_NAME:  

IRP_ADDRESS:  fffffa6000865098

TRAP_FRAME:  fffffa60019e5090 -- (.trap 0xfffffa60019e5090)
NOTE: The trap frame does not contain all registers.
Some register values may be zeroed or incorrect.
rax=fffffa60019e5428 rbx=0000000000000000 rcx=0000000000000000
rdx=0000000000000000 rsi=0000000000000000 rdi=0000000000000000
rip=fffff800016cfe94 rsp=fffffa60019e5220 rbp=fffff9800a4f8e88
 r8=fffffa60019e5318  r9=fffffa60019e5310 r10=fffffa60005ecf00
r11=0000000000000002 r12=0000000000000000 r13=0000000000000000
r14=0000000000000000 r15=0000000000000000
iopl=0         nv up ei pl zr na po nc
nt!IopCompleteRequest+0xb74:
fffff800`016cfe94 4c8b4910        mov     r9,qword ptr [rcx+10h] ds:00000000`00000010=????????????????
Resetting default scope

LOCK_ADDRESS:  fffff8000185dca0 -- (!locks fffff8000185dca0)

Resource @ nt!PiEngineLock (0xfffff8000185dca0)    Exclusively owned
    Contention Count = 139
     Threads: fffffa8004f6a720-01<*> 
1 total locks, 1 locks currently held

PNP_TRIAGE: 
	Lock address  : 0xfffff8000185dca0
	Thread Count  : 1
	Thread address: 0xfffffa8004f6a720
	Thread wait   : 0x1e703

LAST_CONTROL_TRANSFER:  from fffff800016b9eee to fffff800016ba150

STACK_TEXT:  
fffffa60`019e4f48 fffff800`016b9eee : 00000000`0000000a 00000000`00000010 00000000`0000000c 00000000`00000000 : nt!KeBugCheckEx
fffffa60`019e4f50 fffff800`016b8dcb : 00000000`00000000 fffff800`0171656f 00000000`00000103 fffff980`0a4f8e10 : nt!KiBugCheckDispatch+0x6e
fffffa60`019e5090 fffff800`016cfe94 : fffff980`00000000 00000000`00000000 fffff980`0a4f8fb8 fffffa80`0758ed20 : nt!KiPageFault+0x20b
fffffa60`019e5220 fffff800`016db8be : fffffa60`00865110 fffffa80`04f6a720 fffffa60`019e5380 fffff800`01abb992 : nt!IopCompleteRequest+0xb74
fffffa60`019e52e0 fffff800`016df303 : fffffa60`019e5400 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KiDeliverApc+0x19e
fffffa60`019e5380 fffffa60`00b57b33 : fffffa60`00b389ab 00000000`00000000 fffffa60`00b3a0a8 fffffa80`000e0000 : nt!KiApcInterrupt+0x103
fffffa60`019e5518 fffffa60`00b389ab : 00000000`00000000 fffffa60`00b3a0a8 fffffa80`000e0000 00000000`000007ff : acpi!memset+0x3
fffffa60`019e5520 fffffa60`00b32cef : fffffa80`05b76d50 fffffa80`4154535f fffffa60`00000000 fffffa80`071c5c00 : acpi!ACPIGet+0x197
fffffa60`019e55c0 fffffa60`00b6783b : 00000000`00000000 fffffa60`019e5688 00000000`0000000f 00000000`00000000 : acpi!ACPIDetectFilterDevices+0xf7
fffffa60`019e5650 fffffa60`00b362e5 : fffffa60`00b67794 fffffa80`071c5c70 fffffa80`05b5d450 fffffa80`05a5a140 : acpi!ACPIBusIrpQueryDeviceRelations+0xa7
fffffa60`019e5680 fffff800`01ac658a : fffff980`0a4c8ea0 fffffa80`05b5d450 fffff980`0a4c8ea0 00000000`00000002 : acpi!ACPIDispatchIrp+0x191
fffffa60`019e5700 fffffa60`00bb043f : fffff980`0a4c8f70 fffff980`0a4c8ea0 00000000`00000002 fffffa80`0656f3f0 : nt!IovCallDriver+0x34a
fffffa60`019e5740 fffffa60`00bae608 : 00000000`00000000 fffffa80`05b5b6f0 fffffa60`019e5800 00000000`000007ff : pci!PciCallDownIrpStack+0x73
fffffa60`019e57a0 fffffa60`00b98aaa : fffff980`0a4c8ea0 fffff980`0a4c8fb8 fffff980`0a4c8ea0 fffff800`01ab1dbd : pci!PciBus_QueryDeviceRelations+0x1bc
fffffa60`019e57f0 fffff800`01ac658a : fffff980`0a4c8ea0 00000000`00000002 fffffa80`05b5b5a0 fffff880`06c7b410 : pci!PciDispatchPnpPower+0xda
fffffa60`019e5830 fffff800`019f4901 : fffff980`0a4c8ea0 fffffa80`071b86f0 fffff980`0a4c8ea0 fffffa80`07706e70 : nt!IovCallDriver+0x34a
fffffa60`019e5870 fffff800`019f7060 : 00000000`00280026 fffff800`0176dc30 fffffa80`05b5d450 00000000`00000003 : nt!PnpAsynchronousCall+0xd1
fffffa60`019e58b0 fffff800`01a3e5a0 : fffffa80`05b5c790 fffffa80`05b5c790 00000000`00000002 fffffa80`05b5c790 : nt!PnpQueryDeviceRelations+0xd0
fffffa60`019e5960 fffff800`01a96e7c : fffffa80`05b5c790 fffffa80`05b5c790 fffffa80`05b5c790 00000000`00000002 : nt!PipEnumerateDevice+0x120
fffffa60`019e59c0 fffff800`01a9741a : 00000000`00000000 fffff800`016bf8a4 00000000`00000000 fffff800`018b8531 : nt!PipProcessDevNodeTree+0x21c
fffffa60`019e5c30 fffff800`0179100d : fffff801`00000003 fffffa80`075c6430 00000000`00000000 fffffa60`32706e50 : nt!PiProcessReenumeration+0x8a
fffffa60`019e5c80 fffff800`016c15cb : fffff800`01790de0 fffffa01`00000001 fffff800`017f48f8 00000000`00000000 : nt!PnpDeviceActionWorker+0x22d
fffffa60`019e5cf0 fffff800`018c6227 : fffff800`0185b620 00000000`00000000 fffffa80`04f6a720 00000000`00000080 : nt!ExpWorkerThread+0xfb
fffffa60`019e5d50 fffff800`016f7456 : fffffa60`005ec180 fffffa80`04f6a720 fffffa60`005f5d40 fffffa80`04f69ca8 : nt!PspSystemThreadStartup+0x57
fffffa60`019e5d80 00000000`00000000 : 00000000`00000000 00000000`00000000 00000000`00000000 00000000`00000000 : nt!KiStartSystemThread+0x16


STACK_COMMAND:  kb

FOLLOWUP_IP: 
pci!PciCallDownIrpStack+73
fffffa60`00bb043f 3d03010000      cmp     eax,103h

SYMBOL_STACK_INDEX:  c

SYMBOL_NAME:  pci!PciCallDownIrpStack+73

FOLLOWUP_NAME:  MachineOwner

MODULE_NAME: pci

IMAGE_NAME:  pci.sys

DEBUG_FLR_IMAGE_TIMESTAMP:  49e024a5

FAILURE_BUCKET_ID:  X64_0xA_VRF_pci!PciCallDownIrpStack+73

BUCKET_ID:  X64_0xA_VRF_pci!PciCallDownIrpStack+73

Followup: MachineOwner
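
A plausible reading of the dump above (an inference from the captured values, not a confirmed root cause): the trap frame shows rcx = 0, so the faulting instruction mov r9, qword ptr [rcx+10h] in nt!IopCompleteRequest is a read at NULL + 0x10, matching Arg1 = 0x10; the VRF marker in the bucket ID indicates Driver Verifier was enabled, and the stack shows the fault firing during IRP completion while PnP re-enumerates the ACPI/PCI buses (QueryDeviceRelations), not in netkvm code itself. A follow-up kd session could start from the addresses already recorded in the dump, for example:

1: kd> .trap 0xfffffa60019e5090 ; $$ switch to the trap frame recorded above
1: kd> r rcx                    ; $$ expect rcx = 0: [rcx+10h] is the NULL+0x10 read
1: kd> !irp fffffa60`00865098   ; $$ dump the IRP from IRP_ADDRESS and its stack locations
1: kd> ln fffff800`016cfe94     ; $$ confirm the address resolves to nt!IopCompleteRequest+0xb74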

Comment 4 Yvugenfi@redhat.com 2013-11-06 08:00:35 UTC
Is this HCK or WLK?

If this is HCK, I suggest closing it.

Comment 5 Mike Cao 2013-11-06 08:36:44 UTC
(In reply to Yan Vugenfirer from comment #4)
> Is this HCK or WLK?
> 
> If this is HCK, I suggest closing it.

HCK

