Bug 1455488 - [Virtio-win][vioser][ovmf] Guest hits BSOD when hot-unplugging virtio-serial-pci.
Summary: [Virtio-win][vioser][ovmf] Guest hits BSOD when hot-unplugging virtio-serial-pci.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: virtio-win
Version: 7.4
Hardware: x86_64
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Amnon Ilan
QA Contact: xiagao
URL:
Whiteboard:
Depends On:
Blocks: 1473046
 
Reported: 2017-05-25 10:19 UTC by xiagao
Modified: 2018-04-10 06:30 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 06:28:08 UTC
Target Upstream Version:
Embargoed:




Links:
- Red Hat Bugzilla 1457920 (priority high, status CLOSED): Hot-unplugging PCI Express virtio devices causes driver reload on Windows Server 2016 (last updated 2021-12-17 17:34:00 UTC)
- Red Hat Product Errata RHBA-2018:0657 (last updated 2018-04-10 06:30:38 UTC)

Internal Links: 1457920

Description xiagao 2017-05-25 10:19:44 UTC
Description of problem:
Guest hit a BSOD when hot-unplugging virtio-serial-pci.

Version-Release number of selected component (if applicable):
kernel-3.10.0-671.el7.x86_64
qemu-kvm-rhev-2.9.0-5.el7.x86_64
virtio-win-prewhql-0.1-137

How reproducible:
9/10 

Steps to Reproduce:
1. Boot a Windows Server 2016 guest with a virtio-serial device in a q35 + OVMF environment.
2. Install the vioser driver.
3. Hot-unplug the virtio-serial port.
4. Hot-unplug virtio-serial-pci (example monitor commands below).
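
For reference, steps 3 and 4 can be driven from the HMP monitor (the QEMU command line under Additional info uses -monitor stdio); the device IDs match the -device options used there:

(qemu) device_del port1             <- step 3: remove the virtserialport "port1"
(qemu) device_del virtio-serial0    <- step 4: remove the virtio-serial-pci controller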


Actual results:
Guest BSOD

Expected results:
no BSOD

Additional info:
1. With a different guest image, did not hit the BSOD.
2. Downgraded to virtio-win-prewhql-0.1-126: tried six times and hit the BSOD once.
3. QEMU command line:
/usr/libexec/qemu-kvm -name 137SRLW10S64TRT -enable-kvm -m 6G -smp 8 \
    -nodefconfig -nodefaults -rtc base=localtime,driftfix=slew -boot order=cd,menu=on \
    -drive file=137SRLW10S64TRT,if=none,id=drive-ide0-0-0,format=raw,serial=mike_cao,cache=none \
    -device ide-drive,bus=ide.0,drive=drive-ide0-0-0,id=ide0-0-0 \
    -drive file=en_windows_server_2016_x64_dvd_9718492.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw \
    -device ide-drive,bus=ide.1,drive=drive-ide0-1-0,id=ide0-1-0 \
    -netdev tap,script=/etc/qemu-ifup,downscript=no,id=hostnet0 \
    -device e1000,netdev=hostnet0,id=net0,mac=00:52:69:6c:2a:84 \
    -usb -device usb-tablet,id=input0 -vnc 0.0.0.0:0 -vga std -monitor stdio \
    -qmp tcp:0:1234,server,nowait -M q35 \
    -device ioh3420,bus=pcie.0,id=root1.0,slot=1 \
    -drive file=137SRLW10S64TRT_ovmf/OVMF_CODE.secboot.fd,if=pflash,format=raw,unit=0,readonly=on \
    -drive file=137SRLW10S64TRT_ovmf/OVMF_VARS.fd,if=pflash,format=raw,unit=1 \
    -drive file=137SRLW10S64TRT_ovmf/UefiShell.iso,if=none,cache=none,snapshot=off,aio=native,media=cdrom,id=cdrom1 \
    -device ide-cd,drive=cdrom1,id=ide-cd1 \
    -device virtio-serial-pci,id=virtio-serial0,max_ports=511,bus=root1.0 \
    -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait \
    -device virtserialport,chardev=channel1,name=com.redhat.rhevm.vdsm1,bus=virtio-serial0.0,id=port1
4. Debug log:
BugCheck 7E, {ffffffffc0000005, fffff8079a1889f5, ffff9a0051a6c9b8, ffff9a0051a6c1e0}

*** ERROR: Module load completed but symbols could not be loaded for vioser.sys
Probably caused by : vioser.sys ( vioser+89f5 )

Followup: MachineOwner
---------

1: kd> !analyze -v
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

SYSTEM_THREAD_EXCEPTION_NOT_HANDLED (7e)
This is a very common bugcheck.  Usually the exception address pinpoints
the driver/function that caused the problem.  Always note this address
as well as the link date of the driver/image that contains this address.
Arguments:
Arg1: ffffffffc0000005, The exception code that was not handled
Arg2: fffff8079a1889f5, The address that the exception occurred at
Arg3: ffff9a0051a6c9b8, Exception Record Address
Arg4: ffff9a0051a6c1e0, Context Record Address

Debugging Details:
------------------


EXCEPTION_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.

FAULTING_IP: 
vioser+89f5
fffff807`9a1889f5 488b1f          mov     rbx,qword ptr [rdi]

EXCEPTION_RECORD:  ffff9a0051a6c9b8 -- (.exr 0xffff9a0051a6c9b8)
ExceptionAddress: fffff8079a1889f5 (vioser+0x00000000000089f5)
   ExceptionCode: c0000005 (Access violation)
  ExceptionFlags: 00000000
NumberParameters: 2
   Parameter[0]: 0000000000000000
   Parameter[1]: 0000000000008000
Attempt to read from address 0000000000008000

CONTEXT:  ffff9a0051a6c1e0 -- (.cxr 0xffff9a0051a6c1e0;r)
rax=fffff8079a18b2d0 rbx=000000000000032a rcx=ffffd7043df68bf0
rdx=0000000000038995 rsi=ffffd7043df68bf0 rdi=0000000000008000
rip=fffff8079a1889f5 rsp=ffff9a0051a6cbf0 rbp=0000000000000000
 r8=0000000000000001  r9=000000000023f8d7 r10=0000000000000100
r11=ffff9a0051a6cc00 r12=fffff8079a182614 r13=0000000000000000
r14=0000000000000400 r15=fffff8079a182974
iopl=0         nv up ei ng nz na po nc
cs=0010  ss=0018  ds=002b  es=002b  fs=0053  gs=002b             efl=00010286
vioser+0x89f5:
fffff807`9a1889f5 488b1f          mov     rbx,qword ptr [rdi] ds:002b:00000000`00008000=????????????????
Last set context:
rax=fffff8079a18b2d0 rbx=000000000000032a rcx=ffffd7043df68bf0
rdx=0000000000038995 rsi=ffffd7043df68bf0 rdi=0000000000008000
rip=fffff8079a1889f5 rsp=ffff9a0051a6cbf0 rbp=0000000000000000
 r8=0000000000000001  r9=000000000023f8d7 r10=0000000000000100
r11=ffff9a0051a6cc00 r12=fffff8079a182614 r13=0000000000000000
r14=0000000000000400 r15=fffff8079a182974
iopl=0         nv up ei ng nz na po nc
cs=0010  ss=0018  ds=002b  es=002b  fs=0053  gs=002b             efl=00010286
vioser+0x89f5:
fffff807`9a1889f5 488b1f          mov     rbx,qword ptr [rdi] ds:002b:00000000`00008000=????????????????
Resetting default scope

DEFAULT_BUCKET_ID:  WIN8_DRIVER_FAULT

PROCESS_NAME:  System

CURRENT_IRQL:  0

ERROR_CODE: (NTSTATUS) 0xc0000005 - The instruction at 0x%08lx referenced memory at 0x%08lx. The memory could not be %s.

EXCEPTION_PARAMETER1:  0000000000000000

EXCEPTION_PARAMETER2:  0000000000008000

READ_ADDRESS: unable to get nt!MmSpecialPoolStart
unable to get nt!MmSpecialPoolEnd
unable to get nt!MmPagedPoolEnd
unable to get nt!MmNonPagedPoolStart
unable to get nt!MmSizeOfNonPagedPoolInBytes
 0000000000008000 

FOLLOWUP_IP: 
vioser+89f5
fffff807`9a1889f5 488b1f          mov     rbx,qword ptr [rdi]

BUGCHECK_STR:  AV

ANALYSIS_VERSION: 6.3.9600.16384 (debuggers(dbg).130821-1623) amd64fre

LOCK_ADDRESS:  fffff803769a68e0 -- (!locks fffff803769a68e0)

Resource @ nt!PiEngineLock (0xfffff803769a68e0)    Exclusively owned
    Contention Count = 10
     Threads: ffffd7043c65c040-01<*> 
1 total locks, 1 locks currently held

PNP_TRIAGE: 
	Lock address  : 0xfffff803769a68e0
	Thread Count  : 1
	Thread address: 0xffffd7043c65c040
	Thread wait   : 0x40f7

LAST_CONTROL_TRANSFER:  from fffff803767e32c5 to fffff803767d3510

STACK_TEXT:  
ffff9a00`51a6cbf0 fffff807`9a1876a1 : 00000000`0000032a 00000000`00004bf0 00000000`00004bd8 ffffd704`3dd6d370 : vioser+0x89f5
ffff9a00`51a6cc20 fffff807`9a185efb : 00000000`80000011 00000000`00000001 00000000`00000329 ffff9a00`51a6cea8 : vioser+0x76a1
ffff9a00`51a6cc50 fffff807`9a1826b9 : ffff8086`a2e5abb0 ffffd704`3dd6d080 000028fb`c2292f78 fffff807`96c92630 : vioser+0x5efb
ffff9a00`51a6ccb0 fffff807`9a18243f : ffff9a00`00000001 ffffee80`03ca2c00 00000000`00000000 fffff807`96c40dd9 : vioser+0x26b9
ffff9a00`51a6cce0 fffff807`96c236a3 : ffffd704`3e8bed08 ffffd704`3e8bed08 02bc02f0`02f00324 ffff9a00`51a6cd90 : vioser+0x243f
ffff9a00`51a6cd10 fffff807`96c2364b : 00000000`00000000 00000000`00000043 00000000`0000000e ffff9a00`51a6ce08 : Wdf01000!FxPnpDeviceD0Entry::InvokeClient+0x23 [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\pnpcallbacks.cpp @ 93]
ffff9a00`51a6cd70 fffff807`96c25ba8 : 00000000`00000000 ffff9a00`51a6cee0 00000000`000001e0 00000000`00000000 : Wdf01000!FxPrePostCallback::InvokeStateful+0x47 [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\cxpnppowercallbacks.cpp @ 454]
ffff9a00`51a6cdb0 fffff807`96c15b51 : ffffd704`3e8be800 00000000`000002c0 00000000`0000030f fffff807`96c94600 : Wdf01000!FxPkgPnp::PowerD0Starting+0x38 [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\powerstatemachine.cpp @ 2215]
ffff9a00`51a6cde0 fffff807`96c145ba : ffffd704`3e8bea00 fffff807`00000000 ffffd704`3e8be9d8 ffffd704`00000001 : Wdf01000!FxPkgPnp::PowerProcessEventInner+0x231 [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\powerstatemachine.cpp @ 1557]
ffff9a00`51a6cf50 fffff807`96c25ea2 : 00000000`00000000 ffffd704`3e8be800 00000000`00000000 ffffd704`3f3367b0 : Wdf01000!FxPkgPnp::PowerProcessEvent+0x16a [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\powerstatemachine.cpp @ 1338]
ffff9a00`51a6cff0 fffff807`96c14947 : ffffd704`3e8be800 ffff9a00`51a6d120 00000000`00000000 00000000`00000500 : Wdf01000!FxPkgPnp::PowerPolStarting+0x52 [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\powerpolicystatemachine.cpp @ 3559]
ffff9a00`51a6d020 fffff807`96c15fa3 : ffffd704`3e8bead8 ffffd704`00000000 fffff807`96c93290 fffff807`00000001 : Wdf01000!FxPkgPnp::PowerPolicyProcessEventInner+0x227 [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\powerpolicystatemachine.cpp @ 3263]
ffff9a00`51a6d190 fffff807`96c1c5a2 : 00000000`00000000 00000000`00000000 ffffd704`3ef05040 ffffd704`3e8be800 : Wdf01000!FxPkgPnp::PowerPolicyProcessEvent+0x173 [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\powerpolicystatemachine.cpp @ 3023]
ffff9a00`51a6d230 fffff807`96c16a79 : ffffd704`3e8be801 00000000`00000102 ffffd704`3e8be800 00000000`00000108 : Wdf01000!FxPkgPnp::PnpEventHardwareAvailable+0xb2 [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\pnpstatemachine.cpp @ 1458]
ffff9a00`51a6d270 fffff807`96c141a8 : ffffd704`3e8be958 ffff9a00`00000000 ffffd704`3e8be930 00000000`00000001 : Wdf01000!FxPkgPnp::PnpProcessEventInner+0x1c9 [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\pnpstatemachine.cpp @ 1150]
ffff9a00`51a6d320 fffff807`96c26e8e : 00000000`00000000 ffff9a00`51a6d429 00000000`00000000 ffffd704`3f0e8a58 : Wdf01000!FxPkgPnp::PnpProcessEvent+0x158 [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\pnpstatemachine.cpp @ 933]
ffff9a00`51a6d3c0 fffff807`96bf3e7f : ffffd704`3e8be800 ffff9a00`51a6d429 00000000`00000000 ffff8086`a2e34dc0 : Wdf01000!FxPkgPnp::_PnpStartDevice+0x1e [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\fxpkgpnp.cpp @ 1845]
ffff9a00`51a6d3f0 fffff807`96bf34f5 : ffff8086`a2e34dc0 ffffd704`3e8be800 ffff8086`a2e34dc0 ffff8086`a2e34f20 : Wdf01000!FxPkgPnp::Dispatch+0xef [d:\rs1\minkernel\wdf\framework\shared\irphandlers\pnp\fxpkgpnp.cpp @ 654]
ffff9a00`51a6d490 fffff807`96b7dc66 : ffff8086`a2e34dc0 fffff803`76da3f6a 00000000`00000000 fffff803`76d94c08 : Wdf01000!FxDevice::DispatchWithLock+0x155 [d:\rs1\minkernel\wdf\framework\shared\core\fxdevice.cpp @ 1430]
ffff9a00`51a6d580 fffff803`76d89d26 : ffff8086`a2e34dc0 ffffd704`3dec1290 00000000`00000000 ffff9a00`51a6d5f8 : VerifierExt!xdv_IRP_MJ_PNP_wrapper+0xc6
ffff9a00`51a6d5d0 fffff803`766f5092 : ffff8086`a2e34dc0 fffff803`76b82939 ffffd704`3ef05040 ffffd704`3f0e89a0 : nt!IovCallDriver+0x252
ffff9a00`51a6d610 fffff803`76da3f6a : ffff8086`a2e34dc0 fffff803`76b82939 ffffd704`3ef05040 fffff803`00000001 : nt!IofCallDriver+0x72
ffff9a00`51a6d650 fffff803`76d89d26 : ffffd704`3ef05190 ffff8086`a2e34dc0 fffff803`76b82939 ffffd704`3ef05040 : nt!ViFilterDispatchPnp+0x1a2
ffff9a00`51a6d690 fffff803`766f5092 : ffff8086`a2e34dc0 ffff9a00`51a6d7f0 ffffd704`3ef05040 ffffd704`3f2e7580 : nt!IovCallDriver+0x252
ffff9a00`51a6d6d0 fffff803`76b82939 : ffffd704`3d7b2060 ffff9a00`51a6d7f0 ffffd704`3ef05040 ffffd704`3d7b2060 : nt!IofCallDriver+0x72
ffff9a00`51a6d710 fffff803`76780c6e : ffffd704`3d7b2060 00000000`00000000 ffffd704`3ef4c3a0 00000000`00000000 : nt!PnpAsynchronousCall+0xe5
ffff9a00`51a6d750 fffff803`766c59f8 : 00000000`00000000 ffffd704`3d7b2060 fffff803`76780ea8 fffff803`76780ea8 : nt!PnpSendIrp+0x92
ffff9a00`51a6d7c0 fffff803`76b82433 : ffffd704`3d7b1290 ffffd704`3ef4c3a0 00000000`00000000 00000000`00000000 : nt!PnpStartDevice+0x88
ffff9a00`51a6d850 fffff803`76b780fb : ffffd704`3d7b1290 ffff9a00`51a6da20 00000000`00000000 ffffd704`3d7b1290 : nt!PnpStartDeviceNode+0xdb
ffff9a00`51a6d8e0 fffff803`76b85c9d : ffffd704`3d7b1290 00000000`00000001 00000000`00000001 ffffd704`3d781a50 : nt!PipProcessStartPhase1+0x53
ffff9a00`51a6d920 fffff803`76b80322 : ffffd704`3e923fb0 ffff9a00`0000000b 00000000`00000000 00000000`00000000 : nt!PipProcessDevNodeTree+0x401
ffff9a00`51a6dba0 fffff803`766c650a : ffffd701`00000003 00000000`00000000 00000000`00000000 fffff803`00000006 : nt!PiProcessReenumeration+0xa6
ffff9a00`51a6dbf0 fffff803`7671ffd9 : ffffd704`3c65c040 fffff803`769a5380 fffff803`76a46280 fffff803`76a46280 : nt!PnpDeviceActionWorker+0x166
ffff9a00`51a6dcc0 fffff803`7668b729 : 6d6d6d6d`6d6d6d6d 00000000`00000080 ffffd704`3c665040 ffffd704`3c65c040 : nt!ExpWorkerThread+0xe9
ffff9a00`51a6dd50 fffff803`767d89d6 : ffff9a00`514eb180 ffffd704`3c65c040 fffff803`7668b6e8 6d6d6d6d`6d6d6d6d : nt!PspSystemThreadStartup+0x41
ffff9a00`51a6dda0 00000000`00000000 : ffff9a00`51a6e000 ffff9a00`51a68000 00000000`00000000 00000000`00000000 : nt!KiStartSystemThread+0x16


SYMBOL_STACK_INDEX:  0

SYMBOL_NAME:  vioser+89f5

FOLLOWUP_NAME:  MachineOwner

MODULE_NAME: vioser

IMAGE_NAME:  vioser.sys

DEBUG_FLR_IMAGE_TIMESTAMP:  59133065

STACK_COMMAND:  .cxr 0xffff9a0051a6c1e0 ; kb

FAILURE_BUCKET_ID:  AV_VRF_vioser+89f5

BUCKET_ID:  AV_VRF_vioser+89f5

ANALYSIS_SOURCE:  KM

FAILURE_ID_HASH_STRING:  km:av_vrf_vioser+89f5

FAILURE_ID_HASH:  {8924d99d-53cd-c187-f351-9eb1e0d70ee1}

Followup: MachineOwner
---------

Comment 3 xiagao 2017-05-26 03:16:44 UTC
Did not hit this bug with the pc machine type and SeaBIOS.

Comment 4 xiagao 2017-05-26 05:03:49 UTC
(In reply to xiagao from comment #3)
> Did not hit this bug with the pc machine type and SeaBIOS.

Also did not hit it with q35 and SeaBIOS.

Comment 10 Ladi Prosek 2017-05-29 07:28:11 UTC
The driver failed to initialize because it couldn't set the MSI vector for the 809th virtqueue (STATUS_DEVICE_BUSY is only set on set_queue_vector failure). The error path has a bug and accesses potentially uninitialized memory, which triggered the BSOD.

The fix is simple, but it's not clear why QEMU would fail to set the vector (we use the same vector for all queues, and QEMU doesn't allocate or do anything fail-worthy there). It's also not clear why the driver would be initializing on *unplug*.
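
For illustration, the class of error-path bug described here usually looks like the following minimal sketch; all names (DEVICE_CTX, InitQueues, SetQueueVector, DeleteQueue) are invented for illustration, and this is not the actual vioser source:

#include <ntddk.h>   /* NTSTATUS, NT_SUCCESS, ULONG, STATUS_SUCCESS (WDK) */

#define MAX_QUEUES 512

typedef struct _VIRTQUEUE { PVOID Ring; /* ... */ } VIRTQUEUE;            /* placeholder */
typedef struct _DEVICE_CTX { VIRTQUEUE Queues[MAX_QUEUES]; } DEVICE_CTX;

NTSTATUS SetQueueVector(VIRTQUEUE *Vq);   /* assumed helper; may fail with STATUS_DEVICE_BUSY */
VOID DeleteQueue(VIRTQUEUE *Vq);          /* assumed helper; expects an initialized queue */

NTSTATUS InitQueues(DEVICE_CTX *Ctx, ULONG Count)
{
    ULONG i, j;

    for (i = 0; i < Count; i++) {
        NTSTATUS status = SetQueueVector(&Ctx->Queues[i]);
        if (!NT_SUCCESS(status)) {
            /* BUG (the pattern described above): unwinding all Count queues
             * touches Queues[i..Count-1], which were never initialized; that
             * uninitialized read is what surfaces as the access violation.
             * The fix is to unwind only what was set up: j < i, not j < Count. */
            for (j = 0; j < Count; j++)
                DeleteQueue(&Ctx->Queues[j]);
            return status;
        }
    }
    return STATUS_SUCCESS;
}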

Comment 11 Ladi Prosek 2017-05-29 07:31:17 UTC
Hi,

(In reply to xiagao from comment #0)
> Description of problem:
> Guest hit a BSOD when hot-unplugging virtio-serial-pci.
> [full version list and steps snipped; see comment #0]

Would it be possible to get access to the VM and host? I haven't been able to reproduce this locally. Thanks!

Comment 12 Ladi Prosek 2017-05-29 11:36:39 UTC
Fix for the BSOD has been committed:
https://github.com/virtio-win/kvm-guest-drivers-windows/commit/09e62053b315ad7e09eaddf0431f76ab694c65da

Comment 17 Ladi Prosek 2017-06-01 08:46:57 UTC
(In reply to Ladi Prosek from comment #10)
> The driver failed to initialize because it couldn't set the MSI vector
> for the 809th virtqueue (STATUS_DEVICE_BUSY is only set on
> set_queue_vector failure). The error path has a bug and accesses
> potentially uninitialized memory, which triggered the BSOD.
> 
> The fix is simple, but it's not clear why QEMU would fail to set the
> vector (we use the same vector for all queues, and QEMU doesn't allocate
> or do anything fail-worthy there). It's also not clear why the driver
> would be initializing on *unplug*.

So the reason virtqueue initialization fails is that the device simply disappears: setting the queue MSI vector to 1 fails because the default value 0 is read back.

To recap:
1. virtio-serial-pci is hot-unplugged and the driver correctly shuts down.
2. For an unknown reason, the device re-appears and Windows loads the driver again.
3. While the driver initializes, the device disappears, which triggers the BSOD.

So far we have fixed the BSOD on the second device removal, but something is still wrong with hot-unplug, likely only when the device is connected via PCI Express.
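
For illustration, a minimal hedged sketch of that write/read-back check: the register offsets follow the legacy virtio-pci layout, and the type and helper names (VIRTIO_DEVICE, ReadReg16, WriteReg16) are invented rather than taken from the vioser source:

#include <ntddk.h>

typedef struct _VIRTIO_DEVICE VIRTIO_DEVICE;   /* opaque device context (assumed) */

#define QUEUE_SELECT      0x0E   /* legacy virtio-pci: queue_select */
#define QUEUE_MSIX_VECTOR 0x16   /* legacy virtio-pci: queue_msix_vector */

USHORT ReadReg16(VIRTIO_DEVICE *Dev, ULONG Reg);             /* assumed I/O helpers */
VOID WriteReg16(VIRTIO_DEVICE *Dev, ULONG Reg, USHORT Val);

NTSTATUS SetQueueMsiVector(VIRTIO_DEVICE *Dev, USHORT Queue, USHORT Vector)
{
    WriteReg16(Dev, QUEUE_SELECT, Queue);          /* select the virtqueue */
    WriteReg16(Dev, QUEUE_MSIX_VECTOR, Vector);    /* e.g. Vector == 1 */

    /* The device must latch the vector, so read it back to verify. If the
     * device has vanished mid-initialization, the write is lost and the
     * default value 0 is read back; report the mismatch as a failure. */
    if (ReadReg16(Dev, QUEUE_MSIX_VECTOR) != Vector)
        return STATUS_DEVICE_BUSY;

    return STATUS_SUCCESS;
}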

Comment 18 Vadim Rozenfeld 2017-06-01 10:25:01 UTC
Should be fixed in build 139:
http://download.eng.bos.redhat.com/brewroot/work/tasks/1396/13321396/virtio-win-prewhql-0.1.zip

Comment 19 Ladi Prosek 2017-06-01 14:30:06 UTC
Opened bug 1457920 to track the driver reload issue.

Comment 20 Peixiu Hou 2017-06-19 09:09:22 UTC
Tested this issue with virtio-win-prewhql-139 using a newly created image: could not reproduce the bug, did not hit the BSOD, passed 10/10.
Also tried to reproduce it with virtio-win-prewhql-137: tried 10 times and did not hit the BSOD.

Steps as in comment #0.

Used version:
kernel-3.10.0-679.el7.x86_64
qemu-kvm-rhev-2.9.0-10.el7.x86_64
seabios-bin-1.10.2-1.el7.noarch

Best Regards~
Peixiu Hou

Comment 21 Ladi Prosek 2017-06-19 09:31:50 UTC
(In reply to Peixiu Hou from comment #20)
> Tested this issue with virtio-win-prewhql-139 using a newly created image:
> could not reproduce the bug, did not hit the BSOD, passed 10/10.
> Also tried to reproduce it with virtio-win-prewhql-137: tried 10 times and
> did not hit the BSOD.
> 
> Steps as in comment #0.
> 
> Used version:
> kernel-3.10.0-679.el7.x86_64
> qemu-kvm-rhev-2.9.0-10.el7.x86_64
> seabios-bin-1.10.2-1.el7.noarch

Thanks, yes, the timing has to be just right to hit this bug. It is possible that it wouldn't reproduce on other hosts or with freshly installed Windows. I would maybe try a different vCPU count: -smp 4, -smp 2, and -smp 1.

Any chance you can use the host and VM where this was originally found?

Comment 22 Peixiu Hou 2017-06-19 10:04:08 UTC
(In reply to Ladi Prosek from comment #21)
> Thanks, yes, the timing has to be just right to hit this bug. It is possible
> that it wouldn't reproduce on other hosts or with freshly installed Windows.
> I would maybe try a different vCPU count: -smp 4, -smp 2, and -smp 1.
> 
> Any chance you can use the host and VM where this was originally found?

Yeah, the tests mentioned in comment #20 were executed on the original host, but the original VM image has been deleted. I can also try different vCPU counts and will update any results here. Thank you so much~

Comment 23 Peixiu Hou 2017-06-26 07:40:37 UTC
(In reply to Peixiu Hou from comment #22)
> Yeah, the tests mentioned in comment #20 were executed on the original host,
> but the original VM image has been deleted. I can also try different vCPU
> counts and will update any results here.

On the original host:
Tried with "-smp 4", did not reproduce this bug, used build 137 and 139, both cannot reproduce it, passed 6/6.
Tried with "-smp 2", did not reproduce this bug, used build 137 and 139, both cannot reproduce it, passed 5/5.
Tried with "-smp 1", did not reproduce this bug, used build 137 and 139, both cannot reproduce it, passed 5/5.

Best Regards~
Peixiu

Comment 24 lijin 2017-08-17 08:17:22 UTC
I'd like to change the status to VERIFIED, since there was no BSOD after many tries.
Feel free to reopen it if anyone hits it again.

Comment 27 errata-xmlrpc 2018-04-10 06:28:08 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0657

