Bug 1711167 - Windows guest hits BSOD after rescan disks in Disk Management.
Summary: Windows guest hits BSOD after rescan disks in Disk Management.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: virtio-win
Version: 8.2
Hardware: Unspecified
OS: Windows
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.2
Assignee: Vadim Rozenfeld
QA Contact: menli@redhat.com
URL:
Whiteboard:
Duplicates: 1783214
Depends On:
Blocks:
 
Reported: 2019-05-17 07:03 UTC by Xueqiang Wei
Modified: 2020-07-21 15:33 UTC
CC List: 15 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-21 15:32:52 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2020:3055 0 None None None 2020-07-21 15:33:27 UTC

Description Xueqiang Wei 2019-05-17 07:03:24 UTC
Description of problem:

Windows guest with a dummy image hits BSOD after rescanning disks in Disk Management.





Version-Release number of selected component (if applicable):

Host:
kernel-4.18.0-85.el8.x86_64
qemu-kvm-4.0.0-0.module+el8.1.0+3169+3c501422

Guest:
windows 2019 with virtio-win-prewhql-0.1-171



How reproducible:
3/3


Steps to Reproduce:

1. Boot the guest with the command line below.
/usr/libexec/qemu-kvm \
    -S  \
    -name 'avocado-vt-vm1'  \
    -sandbox off  \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1 \
    -device pcie-root-port,id=pcie_root_port_0,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device pcie-root-port,id=pcie_root_port_1,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -device pcie-root-port,id=pcie_root_port_2,slot=4,chassis=4,addr=0x4,bus=pcie.0  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/avocado_w2u90exl/monitor-qmpmonitor1-20181127-024837-wdAVx2FL,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/avocado_w2u90exl/monitor-catch_monitor-20181127-024837-wdAVx2FL,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idulvcka  \
    -chardev socket,id=serial_id_serial0,path=/var/tmp/avocado_w2u90exl/serial-serial0-20181127-024837-wdAVx2FL,server,nowait \
    -device isa-serial,chardev=serial_id_serial0  \
    -chardev socket,id=seabioslog_id_20181127-024837-wdAVx2FL,path=/var/tmp/avocado_w2u90exl/seabios-20181127-024837-wdAVx2FL,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20181127-024837-wdAVx2FL,iobase=0x402 \
    -device pcie-root-port,id=pcie.0-root-port-5,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-5,addr=0x0 \
    -object iothread,id=iothread0 \
    -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0-root-port-6,addr=0x0,iothread=iothread0 \
    -blockdev driver=file,cache.direct=on,cache.no-flush=off,filename=/home/nfs_test/win2019-64-virtio-scsi.qcow2,node-name=my_file \
    -blockdev driver=qcow2,node-name=my,file=my_file,cache.direct=on,cache.no-flush=off \
    -device scsi-hd,drive=my,bus=virtio_scsi_pci0.0,write-cache=on \
    -device pcie-root-port,id=pcie.0-root-port-7,slot=7,chassis=7,addr=0x7,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:34:35:36:37:38,id=idyb3F88,vectors=4,netdev=idTAFS0s,bus=pcie.0-root-port-7,addr=0x0  \
    -netdev tap,id=idTAFS0s,vhost=on \
    -m 4G  \
    -smp 12,maxcpus=12,cores=6,threads=1,sockets=2  \
    -cpu 'Opteron_G5',hv_stimer,hv_synic,hv_vpindex,hv_reset,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv-tlbflush,+kvm_pv_unhalt \
    -device pcie-root-port,id=pcie.0-root-port-8,slot=8,chassis=8,addr=0x8,bus=pcie.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci2,bus=pcie.0-root-port-8,addr=0x0 \
    -blockdev driver=raw,file.driver=file,cache.direct=off,cache.no-flush=on,file.filename=/home/kvm_autotest_root/iso/windows/winutils.iso,node-name=drive2,read-only=on \
    -device scsi-cd,drive=drive2,id=data-disk1,bus=virtio_scsi_pci2.0 \
    -blockdev driver=raw,file.driver=file,cache.direct=off,cache.no-flush=on,file.filename=/home/kvm_autotest_root/iso/windows/virtio-win-prewhql-0.1-171.iso,node-name=drive3,read-only=on \
    -device scsi-cd,drive=drive3,id=data-disk2,bus=virtio_scsi_pci2.0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -qmp tcp:0:4444,server,nowait \
    -device pcie-root-port,id=pcie.0-root-port-9,slot=9,chassis=9,addr=0x9,bus=pcie.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci1,bus=pcie.0-root-port-9,addr=0x0 \
    -blockdev node-name=file_none,driver=null-co \
    -blockdev node-name=drive_none,driver=raw,file=file_none \
    -device scsi-hd,id=none,drive=drive_none \

2. Check the dummy image in the QEMU monitor and via QMP.
(qemu) info block
my: /home/nfs_test/win2019-64-virtio-scsi.qcow2 (qcow2)
    Attached to:      /machine/peripheral-anon/device[3]
    Cache mode:       writeback, direct

drive2: /home/kvm_autotest_root/iso/windows/winutils.iso (raw, read-only)
    Attached to:      data-disk1
    Removable device: not locked, tray closed
    Cache mode:       writeback, ignore flushes

drive3: /home/kvm_autotest_root/iso/windows/virtio-win-prewhql-0.1-171.iso (raw, read-only)
    Attached to:      data-disk2
    Removable device: not locked, tray closed
    Cache mode:       writeback, ignore flushes

drive_none: null-co:// (raw)
    Attached to:      none
    Cache mode:       writeback


{"execute": "query-block"}

{"return": [{"io-status": "ok", "device": "", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 32212254720, "filename": "/home/nfs_test/win2019-64-virtio-scsi.qcow2", "cluster-size": 65536, "format": "qcow2", "actual-size": 15222702080, "format-specific": {"type": "qcow2", "data": {"compat": "1.1", "lazy-refcounts": false, "refcount-bits": 16, "corrupt": false}}, "dirty-flag": false}, "iops_wr": 0, "ro": false, "node-name": "my", "backing_file_depth": 0, "drv": "qcow2", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": true, "writeback": true}, "file": "/home/nfs_test/win2019-64-virtio-scsi.qcow2", "encryption_key_missing": false}, "qdev": "/machine/peripheral-anon/device[3]", "type": "unknown"}, {"io-status": "ok", "device": "", "locked": false, "removable": true, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 2931793920, "filename": "/home/kvm_autotest_root/iso/windows/winutils.iso", "format": "raw", "actual-size": 2937556992, "dirty-flag": false}, "iops_wr": 0, "ro": true, "node-name": "drive2", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": true, "direct": false, "writeback": true}, "file": "/home/kvm_autotest_root/iso/windows/winutils.iso", "encryption_key_missing": false}, "qdev": "data-disk1", "tray_open": false, "type": "unknown"}, {"io-status": "ok", "device": "", "locked": false, "removable": true, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 612358144, "filename": "/home/kvm_autotest_root/iso/windows/virtio-win-prewhql-0.1-171.iso", "format": "raw", "actual-size": 613568512, "dirty-flag": false}, "iops_wr": 0, "ro": true, "node-name": "drive3", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": true, "direct": false, "writeback": true}, "file": "/home/kvm_autotest_root/iso/windows/virtio-win-prewhql-0.1-171.iso", "encryption_key_missing": false}, "qdev": "data-disk2", "tray_open": false, "type": "unknown"}, {"io-status": "ok", "device": "", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"virtual-size": 1073741824, "filename": "null-co://", "format": "raw"}, "iops_wr": 0, "ro": false, "node-name": "drive_none", "backing_file_depth": 0, "drv": "raw", "iops": 0, "bps_wr": 0, "write_threshold": 0, "encrypted": false, "bps": 0, "bps_rd": 0, "cache": {"no-flush": false, "direct": false, "writeback": true}, "file": "null-co://", "encryption_key_missing": false}, "qdev": "none", "type": "unknown"}]}

3. In Disk Management, select Disk 1 (the dummy image), then run Initialize Disk followed by Rescan Disks.


Actual results:
Hit BSOD after step 3.

qmp output:
{"timestamp": {"seconds": 1557996513, "microseconds": 175822}, "event": "GUEST_PANICKED", "data": {"action": "pause"}}
{"timestamp": {"seconds": 1557996513, "microseconds": 176018}, "event": "GUEST_PANICKED", "data": {"action": "poweroff"}}
{"timestamp": {"seconds": 1557996513, "microseconds": 176163}, "event": "SHUTDOWN", "data": {"guest": true, "reason": "guest-panic"}}


Expected results:
No BSOD after rescanning disks.


Additional info:

0: kd> !analyze -v
*******************************************************************************
*                                                                             *
*                        Bugcheck Analysis                                    *
*                                                                             *
*******************************************************************************

CRITICAL_PROCESS_DIED (ef)
        A critical system process died
Arguments:
Arg1: ffffd38f7e89d080, Process object or thread object
Arg2: 0000000000000000, If this is 0, a process died. If this is 1, a thread died.
Arg3: 0000000000000000
Arg4: 0000000000000000

Debugging Details:
------------------


DUMP_CLASS: 1

DUMP_QUALIFIER: 401

BUILD_VERSION_STRING:  17763.1.amd64fre.rs5_release.180914-1434

SYSTEM_MANUFACTURER:  Red Hat

SYSTEM_PRODUCT_NAME:  KVM

SYSTEM_VERSION:  RHEL-8.0.0 PC (Q35 + ICH9, 2009)

BIOS_VENDOR:  SeaBIOS

BIOS_VERSION:  1.12.0-1.module+el8.1.0+3164+94495c71

BIOS_DATE:  04/01/2014

DUMP_TYPE:  1

BUGCHECK_P1: ffffd38f7e89d080

BUGCHECK_P2: 0

BUGCHECK_P3: 0

BUGCHECK_P4: 0

PROCESS_NAME:  services.exe

CRITICAL_PROCESS:  services.exe

EXCEPTION_RECORD:  0000000000001000 -- (.exr 0x1000)
Cannot read Exception record @ 0000000000001000

EXCEPTION_CODE: (NTSTATUS) 0x7ad04080 - <Unable to get error code text>

ERROR_CODE: (NTSTATUS) 0x7ad04080 - <Unable to get error code text>

CPU_COUNT: c

CPU_MHZ: a21

CPU_VENDOR:  AuthenticAMD

CPU_FAMILY: 15

CPU_MODEL: 2

CPU_STEPPING: 0

DEFAULT_BUCKET_ID:  WIN8_DRIVER_FAULT

BUGCHECK_STR:  0xEF

CURRENT_IRQL:  0

ANALYSIS_SESSION_HOST:  WIN-3IORRL4PE1F

ANALYSIS_SESSION_TIME:  05-17-2019 14:16:52.0722

ANALYSIS_VERSION: 10.0.16299.15 amd64fre

TRAP_FRAME:  ffff800000000000 -- (.trap 0xffff800000000000)
Unable to read trap frame at ffff8000`00000000

LAST_CONTROL_TRANSFER:  from fffff8052b524c7d to fffff8052ae4c5e0

THREAD_SHA1_HASH_MOD_FUNC:  269a2c84fa2bbf0fee7030073f73f0b9f8c95667

THREAD_SHA1_HASH_MOD_FUNC_OFFSET:  b8f53dce0a03e7dc7a6bed19542bd701f6e8bbc5

THREAD_SHA1_HASH_MOD:  c9afdb6ede27e76a8d20e4c23d412dbf80e4b379

FOLLOWUP_IP:
ntdll!RtlpCallVectoredHandlers+35
00007ff9`b7096a35 4c896c2438      mov     qword ptr [rsp+38h],r13

FAULT_INSTR_CODE:  246c894c

SYMBOL_STACK_INDEX:  a

SYMBOL_NAME:  ntdll!RtlpCallVectoredHandlers+35

FOLLOWUP_NAME:  MachineOwner

MODULE_NAME: ntdll

IMAGE_NAME:  ntdll.dll

DEBUG_FLR_IMAGE_TIMESTAMP:  0

STACK_COMMAND:  .thread ; .cxr ; kb

BUCKET_ID_FUNC_OFFSET:  35

FAILURE_BUCKET_ID:  0xEF_services.exe_VRF_BUGCHECK_CRITICAL_PROCESS_7ad04080_ntdll!RtlpCallVectoredHandlers

BUCKET_ID:  0xEF_services.exe_VRF_BUGCHECK_CRITICAL_PROCESS_7ad04080_ntdll!RtlpCallVectoredHandlers

PRIMARY_PROBLEM_CLASS:  0xEF_services.exe_VRF_BUGCHECK_CRITICAL_PROCESS_7ad04080_ntdll!RtlpCallVectoredHandlers

TARGET_TIME:  2019-05-15T21:12:53.000Z

OSBUILD:  17763

OSSERVICEPACK:  0

SERVICEPACK_NUMBER: 0

OS_REVISION: 0

SUITE_MASK:  400

PRODUCT_TYPE:  3

OSPLATFORM_TYPE:  x64

OSNAME:  Windows 10

OSEDITION:  Windows 10 Server TerminalServer DataCenter SingleUserTS

OS_LOCALE:

USER_LCID:  0

OSBUILD_TIMESTAMP:  unknown_date

BUILDDATESTAMP_STR:  180914-1434

BUILDLAB_STR:  rs5_release

BUILDOSVER_STR:  10.0.17763.1.amd64fre.rs5_release.180914-1434

ANALYSIS_SESSION_ELAPSED_TIME:  b2b

ANALYSIS_SOURCE:  KM

FAILURE_ID_HASH_STRING:  km:0xef_services.exe_vrf_bugcheck_critical_process_7ad04080_ntdll!rtlpcallvectoredhandlers

FAILURE_ID_HASH:  {87c7033a-e40e-94a2-eea8-c8287129776a}

Followup:     MachineOwner

Comment 1 Vadim Rozenfeld 2019-06-11 02:29:01 UTC
Two questions.

Can we try a qcow2 file instead of null-co? And can we try the combination of null-co with scsi-cd?

In my understanding, both cases should work.

Best regards,
Vadim.
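For the retest suggested above, a qcow2 data disk can be created with qemu-img. A minimal sketch; the path and 1G size are assumptions, chosen to match the data disk used in the next comment:

qemu-img create -f qcow2 /home/nfs_test/data.qcow2 1G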

Comment 2 Xueqiang Wei 2019-06-21 09:42:21 UTC
(In reply to Vadim Rozenfeld from comment #1)
> Two questions.
> 
> Can we try qcow2 file instead of null-co?  And can we try combination
> null-co with scsi-cd?
> 
> In my understanding both cases should work.
> 
> Best regards,
> Vadim.


1. Also hit BSOD after rescanning disks with a qcow2 file.

    -device pcie-root-port,id=pcie.0-root-port-9,slot=9,chassis=9,addr=0x9,bus=pcie.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci1,bus=pcie.0-root-port-9,addr=0x0 \
    -blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=/home/nfs_test/data.qcow2,node-name=data_disk \
    -blockdev driver=qcow2,node-name=disk1,file=data_disk \
    -device scsi-hd,drive=disk1,bus=virtio_scsi_pci1.0,id=data_disk \

Memory dump file: http://fileshare.englab.nay.redhat.com/pub/section2/images_backup/bug1711167/


2. With the combination of null-co and scsi-cd, also hit this issue.

    -device pcie-root-port,id=pcie.0-root-port-9,slot=9,chassis=9,addr=0x9,bus=pcie.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci1,bus=pcie.0-root-port-9,addr=0x0 \
    -blockdev node-name=file_none,driver=null-co,read-only=on \
    -blockdev node-name=drive_none,driver=raw,file=file_none,read-only=on \
    -device scsi-cd,id=none,drive=drive_none \

Comment 4 qing.wang 2019-12-23 08:39:29 UTC
*** Bug 1783214 has been marked as a duplicate of this bug. ***

Comment 5 Li Xiaohui 2020-01-07 10:39:43 UTC
Hit this issue when unplugging the balloon device in a win2019 guest on a rhel8.2.0-av host.
It reproduces with "-blockdev" mode, but not with "-drive" mode. Thanks.
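For reference, a hypothetical illustration of the two syntaxes being compared here, reusing the data disk path from comment 2; the exact options of the tested configuration are not shown in this comment, so the cache settings below are assumptions:

# "blockdev mode" (explicit node graph), as used in the reproducers above
    -blockdev driver=file,filename=/home/nfs_test/data.qcow2,node-name=data_file \
    -blockdev driver=qcow2,node-name=disk1,file=data_file \
    -device scsi-hd,drive=disk1,bus=virtio_scsi_pci1.0

# "drive mode" (legacy -drive shorthand for the same disk)
    -drive file=/home/nfs_test/data.qcow2,format=qcow2,if=none,id=disk1,cache=writeback \
    -device scsi-hd,drive=disk1,bus=virtio_scsi_pci1.0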

Comment 6 Yiqian Wei 2020-01-13 03:23:21 UTC
Hit the same problem as comment 3 when hot-plugging a virtio-blk-pci device into a win2019 guest.

host version:
kernel-4.18.0-147.4.1.el8_1.x86_64
qemu-kvm-4.1.0-21.module+el8.1.1+5388+fd51bfbc.x86_64
seabios-1.12.0-5.module+el8.1.1+5309+6d656f05.x86_64
virtio-win-1.9.10-3.el8.noarch
guest:Win2019 (pc + seabios)
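For reference, a minimal QMP sketch of the virtio-blk-pci hot-plug described above; the node name "hp_node" and device id "hp_blk" are illustrative, and null-co is used only as a placeholder backend:

{"execute": "qmp_capabilities"}
{"execute": "blockdev-add", "arguments": {"driver": "null-co", "node-name": "hp_node"}}
{"execute": "device_add", "arguments": {"driver": "virtio-blk-pci", "id": "hp_blk", "drive": "hp_node"}}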

Comment 7 menli@redhat.com 2020-01-15 06:10:24 UTC
Hit the same problem as comment 1 on a win2012R2 guest.

host version:
kernel-4.18.0-167.el8.x86_64
qemu-kvm-4.2.0-5.module+el8.2.0+5389+367d9739.x86_64
seabios-1.12.0-5.module+el8.2.0+4793+b09dd2fb.x86_64

guest:Win2012R2 (q35 + seabios)

It is easy to hit this issue; please take a look ASAP. Many thanks.

Comment 9 menli@redhat.com 2020-01-20 07:11:32 UTC
Yes, it also reproduces on the pc machine type (win2012R2 guest).

host version:
kernel-4.18.0-167.el8.x86_64
qemu-kvm-4.2.0-5.module+el8.2.0+5389+367d9739.x86_64
seabios-1.12.0-5.module+el8.2.0+4793+b09dd2fb.x86_64

CLI:
 
/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm3' \
    -machine pc \
    -nodefaults \
    -device VGA,bus=pci.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0  \
    -blockdev node-name=file_image1,driver=file,cache.direct=on,cache.no-flush=off,filename=os.qcow2,aio=threads \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,bus=virtio_scsi_pci0.0 \
    -device virtio-net-pci,mac=9a:36:83:b6:3d:05,id=idJVpmsF,netdev=id23ZUK6,bus=pci.0  \
    -netdev tap,id=id23ZUK6,vhost=on \
    -m 14336  \
    -smp 2,maxcpus=4 \
    -cpu 'Skylake-Server' \
    -drive id=drive_cd1,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/ISO/Win2012R2/en_windows_server_2012_r2_with_update_x64_dvd_6052708.iso \
    -device ide-cd,id=cd2,drive=drive_cd1,bus=ide.0,unit=0 \
    -cdrom /home/kvm_autotest_root/iso/windows/virtio-win-prewhql-0.1-176.iso\
    -device piix3-usb-uhci,id=usb -device usb-tablet,id=input0 \
    -vnc :1  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -qmp tcp:0:1231,server,nowait \
    -monitor stdio \

Comment 10 Vadim Rozenfeld 2020-01-20 07:30:57 UTC
(In reply to menli from comment #9)
> yes,also reproduce on pc machine(win2012R2 guest)
> 
> host version:
> kernel-4.18.0-167.el8.x86_64
> qemu-kvm-4.2.0-5.module+el8.2.0+5389+367d9739.x86_64
> seabios-1.12.0-5.module+el8.2.0+4793+b09dd2fb.x86_64
> 
> CML:
>  
> /usr/libexec/qemu-kvm \
>     -name 'avocado-vt-vm3' \
>     -machine pc \
>     -nodefaults \
>     -device VGA,bus=pci.0 \
>     -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0  \
>     -blockdev
> node-name=file_image1,driver=file,cache.direct=on,cache.no-flush=off,
> filename=os.qcow2,aio=threads \
>     -blockdev
> node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,
> file=file_image1 \
>     -device scsi-hd,id=image1,drive=drive_image1,bus=virtio_scsi_pci0.0 \
>     -device
> virtio-net-pci,mac=9a:36:83:b6:3d:05,id=idJVpmsF,netdev=id23ZUK6,bus=pci.0  \
>     -netdev tap,id=id23ZUK6,vhost=on \
>     -m 14336  \
>     -smp 2,maxcpus=4 \
>     -cpu 'Skylake-Server' \
>     -drive
> id=drive_cd1,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/
> home/kvm_autotest_root/iso/ISO/Win2012R2/
> en_windows_server_2012_r2_with_update_x64_dvd_6052708.iso \
>     -device ide-cd,id=cd2,drive=drive_cd1,bus=ide.0,unit=0 \
>     -cdrom
> /home/kvm_autotest_root/iso/windows/virtio-win-prewhql-0.1-176.iso\
>     -device piix3-usb-uhci,id=usb -device usb-tablet,id=input0 \
>     -vnc :1  \
>     -rtc base=localtime,clock=host,driftfix=slew  \
>     -boot order=cdn,once=c,menu=off,strict=off \
>     -enable-kvm \
>     -qmp tcp:0:1231,server,nowait \
>     -monitor stdio \

Great. 
Can you post the crash dump from the pc-type VM as well?
Thank you,
Vadim.

Comment 11 menli@redhat.com 2020-01-20 10:18:31 UTC
Posted the crash dump to the following link; please access bz1711167 to download it. Thanks.

http://fileshare.englab.nay.redhat.com/pub/section2/coredump/

Comment 12 menli@redhat.com 2020-01-21 01:57:45 UTC
(In reply to menli from comment #11)
> post crash dump to following link, please access to bz1711167 to download
> it,thanks.
> 
> http://fileshare.englab.nay.redhat.com/pub/section2/coredump/bz1711167/

Comment 13 xiagao 2020-02-04 03:38:52 UTC
I also hit this issue after hot-plugging/unplugging a balloon device on a win8-32 guest with the pc machine type; there is an automatic rescan operation in Device Manager after the hotplug/unplug.
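For reference, a minimal QMP sketch of the balloon hot-plug/unplug path described above; the device id "balloon0" is an illustrative name, not taken from the report:

{"execute": "qmp_capabilities"}
{"execute": "device_add", "arguments": {"driver": "virtio-balloon-pci", "id": "balloon0"}}
{"execute": "device_del", "arguments": {"id": "balloon0"}}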

Comment 14 Ademar Reis 2020-02-05 22:58:04 UTC
QEMU has recently been split into sub-components, and as a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review the sub-component and change it if necessary the next time you review this BZ. Thanks

Comment 16 xiagao 2020-02-14 04:02:53 UTC
When testing block_resize and rescan in Disk Management, I found that the BSOD happens with a virtio-scsi device but does not happen with virtio-blk.
@qingwang, could you confirm whether this is the case?

Comment 18 Peixiu Hou 2020-04-27 07:29:41 UTC
Hi Vadim,

With the new virtio-win-prewhql-181 build, tested with a virtio_blk disk, this issue reproduces on the rescan operation,
but with virtio-win-prewhql-180, tested with a virtio_blk disk, it cannot be reproduced on rescan.

Tested CLI for both tests above (only 1 virtio_blk system disk attached):

-blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/win10-64-virtio.qcow2,cache.direct=on,cache.no-flush=off \
-blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
-device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,serial=TARGET_DISK0,bus=pci.0,addr=0x4 \

Based on the above results, there is a regression in virtio-win-prewhql-181; should we file a new bug for viostor?

Also, with virtio-win-prewhql-181, I tested without serial=TARGET_DISK0 on virtio-blk-pci (-device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pci.0,addr=0x4) and cannot reproduce this issue.


Thanks a lot~
Peixiu

Comment 19 Vadim Rozenfeld 2020-04-27 10:41:27 UTC
(In reply to Peixiu Hou from comment #18)
> Hi Vadim,
> 
> On new virtio-win-prewhql-181 version, tested with virtio_blk disk,
> reproduced this issue when do rescan operation,
> but on virtio-win-prewhql-180 version, tested with virtio_blk disk, cannot
> reproduce issue when do rescan operation.
> 
> Tested cli for upper both tests(only attach 1 virtio_blk system disk):
> 
> -blockdev
> node-name=file_image1,driver=file,aio=threads,filename=/home/
> kvm_autotest_root/images/win10-64-virtio.qcow2,cache.direct=on,cache.no-
> flush=off \
> -blockdev
> node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,
> file=file_image1 \
> -device
> virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,
> serial=TARGET_DISK0,bus=pci.0,addr=0x4 \
> 
> As above test results, there have a regression issue on
> virtio-win-prewhql-181, if we need to file a new bug for viostor?
> 
> And, on version virtio-win-prewhql-181, I also tested without
> serial=TARGET_DISK0 on virtio-blk-pci(-device
> virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,
> bus=pci.0,addr=0x4), cannot reproduce this issue.
> 
> 
> Thanks a lot~
> Peixiu

Yes, since it is a regression, please open a new bug assigned to me.

Best,
Vadim.

Comment 20 Peixiu Hou 2020-04-28 04:29:25 UTC
(In reply to Vadim Rozenfeld from comment #19)
> (In reply to Peixiu Hou from comment #18)
> > Hi Vadim,
> > 
> > On new virtio-win-prewhql-181 version, tested with virtio_blk disk,
> > reproduced this issue when do rescan operation,
> > but on virtio-win-prewhql-180 version, tested with virtio_blk disk, cannot
> > reproduce issue when do rescan operation.
> > 
> > Tested cli for upper both tests(only attach 1 virtio_blk system disk):
> > 
> > -blockdev
> > node-name=file_image1,driver=file,aio=threads,filename=/home/
> > kvm_autotest_root/images/win10-64-virtio.qcow2,cache.direct=on,cache.no-
> > flush=off \
> > -blockdev
> > node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,
> > file=file_image1 \
> > -device
> > virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,
> > serial=TARGET_DISK0,bus=pci.0,addr=0x4 \
> > 
> > As above test results, there have a regression issue on
> > virtio-win-prewhql-181, if we need to file a new bug for viostor?
> > 
> > And, on version virtio-win-prewhql-181, I also tested without
> > serial=TARGET_DISK0 on virtio-blk-pci(-device
> > virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,
> > bus=pci.0,addr=0x4), cannot reproduce this issue.
> > 
> > 
> > Thanks a lot~
> > Peixiu
> 
> Yes, since it is a regression, please open a new bug assigned to me.
> 

Thanks, filed a new bug https://bugzilla.redhat.com/show_bug.cgi?id=1828658 and assigned it to you~

Best Regards~
Peixiu

> Best,
> Vadim.

Comment 21 Yumei Huang 2020-05-14 05:27:28 UTC
Hit the same issue on 8.2.1-av with a win2019 guest after hot-plugging a balloon device under the pc machine type.

Comment 22 Vadim Rozenfeld 2020-05-14 05:46:11 UTC
Please update drivers to the latest build 203 available at
https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=28400644

and give it another try.

Thanks,
Vadim.

Comment 23 Yumei Huang 2020-05-14 05:55:28 UTC
(In reply to Vadim Rozenfeld from comment #22)
> Please update drivers to the latest build 203 available at
> https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=28400644

The link is not available now, would you please provide another one?

> 
> and give it another try.
> 
> Thanks,
> Vadim.

Comment 24 lijin 2020-05-14 06:11:26 UTC
(In reply to Yumei Huang from comment #23)
> (In reply to Vadim Rozenfeld from comment #22)
> > Please update drivers to the latest build 203 available at
> > https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=28400644
> 
> The link is not available now, would you please provide another one?

latest 184 build: https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=1191255

Needinfo menli as well; she hit a similar issue with the rng device.

Comment 25 qing.wang 2020-05-14 07:16:35 UTC
Tested on:
4.18.0-193.2.1.el8_2.x86_64
qemu-kvm-core-4.2.0-21.module+el8.2.1+6586+8b7713b9.x86_64
virtio-win-prewhql-0.1-184

Test steps:

1. Create an image:
qemu-img create -f qcow2 /home/kvm_autotest_root/images/stg1.qcow2 1G

2. Boot the VM:
/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1  \
    -device pvpanic,ioport=0x505,id=idZcGD6F  \
    -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-scsi-pci,id=scsi0,bus=pcie.0-root-port-4,addr=0x0 \
    \
    -blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=/home/kvm_autotest_root/images/win2019-64-virtio-scsi.qcow2,node-name=host_disk2 \
    -blockdev driver=qcow2,node-name=disk_2,file=host_disk2 \
    -device scsi-hd,drive=disk_2,bus=scsi0.0,id=host_disk_2 \
    \
    -blockdev node-name=data_image1,driver=file,cache.direct=on,cache.no-flush=off,filename=/home/kvm_autotest_root/images/stg1.qcow2,aio=threads \
    -blockdev node-name=data1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=data_image1 \
    -device scsi-hd,id=disk1,drive=data1,bus=scsi0.0 \
    -device pcie-root-port,id=pcie.0-root-port-5,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:55:56:57:58:59,id=id18Xcuo,netdev=idGRsMas,bus=pcie.0-root-port-5,addr=0x0  \
    -netdev tap,id=idGRsMas,vhost=on \
    -m 13312  \
    -smp 24,maxcpus=24,cores=12,threads=1,sockets=2  \
    -cpu 'Skylake-Server',hv_stimer,hv_synic,hv_vpindex,hv_reset,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv-tlbflush,+kvm_pv_unhalt \
    -drive id=drive_cd1,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/ISO/Win2019/en_windows_server_2019_updated_march_2019_x64_dvd_2ae967ab.iso \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=2,bus=ide.0,unit=0 \
    -drive id=drive_virtio,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/windows/virtio-win-prewhql-0.1-176.iso \
    -device ide-cd,id=virtio,drive=drive_virtio,bootindex=3,bus=ide.1,unit=0 \
    -drive id=drive_winutils,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/windows/winutils.iso \
    -device ide-cd,id=winutils,drive=drive_winutils,bus=ide.2,unit=0\
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :5  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -monitor stdio \
    -qmp tcp:0:5955,server,nowait \

3. Select the disk, online it in Disk Management, then rescan disks.


4. Hot-unplug the disk, then rescan disks in Disk Management:

{'execute':'qmp_capabilities'}

{"execute":"device_del","arguments":{"id":"disk1"}}


5. Hot-plug the disk back, then:

{'execute':'device_add','arguments':{'driver':'scsi-hd','id':'disk1','drive':'data1','bus':'scsi0.0'}}

Did not hit the BSOD issue.

Comment 26 menli@redhat.com 2020-05-14 07:32:00 UTC
Tested with virtio-win-prewhql-0.1-184 for hot-plugging an rng device with blockdev+scsi; did not hit the BSOD issue.

Comment 27 lijin 2020-05-14 08:15:42 UTC
Changing component to virtio-win according to comment#25 and comment#26.

Comment 31 Yumei Huang 2020-05-14 09:38:11 UTC
(In reply to lijin from comment #24)
> (In reply to Yumei Huang from comment #23)
> > (In reply to Vadim Rozenfeld from comment #22)
> > > Please update drivers to the latest build 203 available at
> > > https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=28400644
> > 
> > The link is not available now, would you please provide another one?
> 
> latest 184 build:
> https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=1191255

Thanks, the issue is gone with this build.

> 
> Needinfo menli as well she hit similar issue with rng device.

Comment 32 lijin 2020-05-14 09:54:35 UTC
Change status to verified

Comment 36 errata-xmlrpc 2020-07-21 15:32:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3055

