Bug 1670673 - Windows guest with virtio-scsi system disk cannot be booted (BSOD) after disable->enable of the vioscsi driver
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Vadim Rozenfeld
QA Contact: qing.wang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-30 07:02 UTC by Peixiu Hou
Modified: 2021-12-17 17:24 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-03 09:12:13 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links:
Red Hat Issue Tracker RHELPLAN-26555 (last updated 2021-12-17 17:24:35 UTC)

Description Peixiu Hou 2019-01-30 07:02:22 UTC
Description of problem:
On the rhel8 slow train, after disabling and re-enabling the vioscsi driver, a Windows guest with a virtio-scsi system disk cannot be booted (BSOD).

Version-Release number of selected component (if applicable):
kernel-4.18.0-62.el8.x86_64
qemu-kvm-2.12.0-57.module+el8+2683+02b3b955.x86_64
seabios-bin-1.11.1-3.module+el8+2529+a9686a4d.noarch
virtio-win-1.9.7-0.el8

How reproducible:
100%

Steps to Reproduce:
1. Boot a guest up with system disk as ide-hd and a data disk as scsi-hd.
============================================================================
MALLOC_PERTURB_=1  /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pci.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pci.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pci.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pci.0 \
    -blockdev node-name=file_image1,driver=file,cache.direct=on,cache.no-flush=off,filename=/home/kvm_autotest_root/images/win2019-64-virtio-scsi.qcow2,aio=threads \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device ide-hd,id=image1,drive=drive_image1,write-cache=on,bus=ide.0,unit=0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x3 \
    -blockdev node-name=file_stg,driver=file,cache.direct=on,cache.no-flush=off,filename=/home/kvm_autotest_root/images/storage.qcow2,aio=threads \
    -blockdev node-name=drive_stg,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_stg \
    -device scsi-hd,id=stg,drive=drive_stg,write-cache=on \
    -device virtio-net-pci,mac=9a:88:89:8a:8b:8c,id=idzBrfhs,vectors=4,netdev=id1ASQXN,bus=pci.0,addr=0x4  \
    -netdev tap,id=id1ASQXN,vhost=on \
    -m 14336  \
    -smp 24,maxcpus=24,cores=12,threads=1,sockets=2  \
    -cpu 'Skylake-Server',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
    -blockdev node-name=file_cd1,driver=file,read-only=on,cache.direct=on,cache.no-flush=off,filename=/home/kvm_autotest_root/iso/windows/virtio-win-1.9.7-3.el8.iso,aio=threads \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device ide-cd,id=cd1,drive=drive_cd1,write-cache=on \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
==============================================================================
2. Install the vioscsi driver from virtio-win-1.9.7-3.el8.iso (vioscsi build 162).
3. Disable the vioscsi driver in the guest (a scripted example of steps 3 and 5 follows the step list).
4. Reboot the guest vm.
5. Enable the vioscsi driver.
6. Shutdown the guest vm.
7. Boot the guest vm up with system disk as scsi-hd:
==============================================================================
MALLOC_PERTURB_=1  /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pci.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pci.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pci.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pci.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x3 \
    -blockdev node-name=file_image1,driver=file,cache.direct=on,cache.no-flush=off,filename=/home/kvm_autotest_root/images/win2019-64-virtio-scsi.qcow2,aio=threads \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,bootindex=0,write-cache=on \
    -device virtio-net-pci,mac=9a:88:89:8a:8b:8c,id=idzBrfhs,vectors=4,netdev=id1ASQXN,bus=pci.0,addr=0x4  \
    -netdev tap,id=id1ASQXN,vhost=on \
    -m 14336  \
    -smp 24,maxcpus=24,cores=12,threads=1,sockets=2  \
    -cpu 'Skylake-Server',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
    -blockdev node-name=file_cd1,driver=file,read-only=on,cache.direct=on,cache.no-flush=off,filename=/home/kvm_autotest_root/iso/windows/virtio-win-prewhql-0.1-162.iso,aio=threads \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device ide-cd,id=cd1,drive=drive_cd1,write-cache=on \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
===============================================================================
8. Check that the guest VM boots and works normally.
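
For reference, steps 3 and 5 (disabling and re-enabling the vioscsi controller inside the guest) are normally done through Device Manager, but they can also be scripted from an elevated PowerShell prompt in the guest. A minimal sketch, assuming the controller's friendly name contains "VirtIO SCSI" (adjust the filter to whatever Device Manager actually shows):

    # locate the virtio-scsi controller (the name filter is an assumption)
    $dev = Get-PnpDevice -Class SCSIAdapter | Where-Object { $_.FriendlyName -like '*VirtIO SCSI*' }

    # step 3: disable it, then reboot the guest
    Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false

    # step 5: re-enable it before shutting the guest down
    Enable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false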

Actual results:
BSOD (stop code: INACCESSIBLE_BOOT_DEVICE)


Expected results:
The guest boots normally.

Additional info:
1. Tested with pc + blockdev format on rhel8 slow train, reproduced this issue.
2. Tested with q35 + blockdev format on rhel8 slow train, reproduced this issue.
3. Tested with pc + drive format on rhel8 slow train, reproduced this issue.
4. Tested with q35 + drive format on rhel8 slow train, reproduced this issue.
5. Tested with q35 + drive format on rhel8 fast train, not reproduced this issue.
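
For clarity, "blockdev format" and "drive format" above refer to the two ways of attaching the same qcow2 system disk on the qemu-kvm command line; roughly (options and paths as in the reproducer, the -drive lines being the legacy equivalent):

    # "blockdev format" - as used in the command lines above
    -blockdev node-name=file_image1,driver=file,cache.direct=on,cache.no-flush=off,filename=/home/kvm_autotest_root/images/win2019-64-virtio-scsi.qcow2,aio=threads \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \

    # "drive format" - legacy syntax expressing roughly the same configuration
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/win2019-64-virtio-scsi.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1 \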

Comment 1 Rick Barry 2019-01-30 16:54:57 UTC
ITR is set to backlog ('---'), but priority/severity is unspecified. If this should be fixed in 8.0.0.0, please set ITR to 8.0.0.0, set priority to 'high' at least and request exception/blocker.

Comment 4 Vadim Rozenfeld 2019-02-13 22:25:41 UTC
(In reply to Peixiu Hou from comment #0)

It looks as if PCI resources were relocated in some cases.
Can you please post the "info pci" output after step 5 (enable the vioscsi driver) and after step 8 (check that the guest VM boots and works normally), for these cases:
> 4. Tested with q35 + drive format on rhel8 slow train, reproduced this issue.
> 5. Tested with q35 + drive format on rhel8 fast train, not reproduced this issue.

Thanks,
Vadim.
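
If grabbing the HMP output is inconvenient, the same information can be pulled over QMP from the host. A sketch only, assuming the guest is started with an extra QMP socket (the -qmp option below is not part of the original reproducer):

    # added to the qemu-kvm command line (assumption):
    #   -qmp unix:/tmp/avocado-vt-vm1.qmp,server,nowait
    # then, on the host:
    ( printf '{"execute":"qmp_capabilities"}\n'; sleep 1;
      printf '{"execute":"query-pci"}\n'; sleep 1 ) | socat - UNIX-CONNECT:/tmp/avocado-vt-vm1.qmp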

Comment 5 Peixiu Hou 2019-02-14 09:19:37 UTC
Hi all,

RHEL8 slow train host used versions:
kernel-4.18.0-62.el8.x86_64
qemu-kvm-2.12.0-57.module+el8+2683+02b3b955.x86_64
virtio-win-1.9.7-3.el8.iso

RHEL8 fast train host used versions:
kernel-4.18.0-62.el8.x86_64
qemu-kvm-3.1.0-7.module+el8+2715+f4b84bed.x86_64
virtio-win-1.9.7-3.el8.iso

RHEL7 host used versions:
kernel-3.10.0-931.el7.x86_64
qemu-kvm-rhev-2.12.0-10.el7.x86_64
virtio-win-1.9.7-3.el8.iso

1. Tested with q35 + drive format + win2019 guest on rhel8 slow train: reproduced this issue.
After step 5 (enable the vioscsi driver), the "info pci" output is:
===============================================================================================================
(qemu) info pci
  Bus  0, device   0, function 0:
    Host bridge: PCI device 8086:29c0
      id ""
  Bus  0, device   1, function 0:
    VGA controller: PCI device 1234:1111
      BAR0: 32 bit prefetchable memory at 0xfc000000 [0xfcffffff].
      BAR2: 32 bit memory at 0xfea10000 [0xfea10fff].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0000fffe].
      id ""
  Bus  0, device   2, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 1.
      subordinate bus 1.
      IO range [0xf000, 0x0fff]
      memory range [0xfe800000, 0xfe9fffff]
      prefetchable memory range [0xfda00000, 0xfdbfffff]
      BAR0: 32 bit memory at 0xfea11000 [0xfea11fff].
      id "pcie_root_port_0"
  Bus  0, device   3, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 2.
      subordinate bus 2.
      IO range [0xf000, 0x0fff]
      memory range [0xfe600000, 0xfe7fffff]
      prefetchable memory range [0xfd800000, 0xfd9fffff]
      BAR0: 32 bit memory at 0xfea12000 [0xfea12fff].
      id "pcie_root_port_1"
  Bus  0, device   4, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 3.
      subordinate bus 3.
      IO range [0xf000, 0x0fff]
      memory range [0xfe400000, 0xfe5fffff]
      prefetchable memory range [0xfd600000, 0xfd7fffff]
      BAR0: 32 bit memory at 0xfea13000 [0xfea13fff].
      id "pcie_root_port_2"
  Bus  0, device   5, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 4.
      subordinate bus 4.
      IO range [0xf000, 0x0fff]
      memory range [0xfe200000, 0xfe3fffff]
      prefetchable memory range [0xfd400000, 0xfd5fffff]
      BAR0: 32 bit memory at 0xfea14000 [0xfea14fff].
      id "pcie.0-root-port-5"
  Bus  4, device   0, function 0:
    USB controller: PCI device 1b36:000d
      IRQ 0.
      BAR0: 64 bit memory at 0xfe200000 [0xfe203fff].
      id "usb1"
  Bus  0, device   6, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 5.
      subordinate bus 5.
      IO range [0xf000, 0x0fff]
      memory range [0xfe000000, 0xfe1fffff]
      prefetchable memory range [0xfd200000, 0xfd3fffff]
      BAR0: 32 bit memory at 0xfea15000 [0xfea15fff].
      id "pcie.0-root-port-6"
  Bus  5, device   0, function 0:
    SCSI controller: PCI device 1af4:1048
      IRQ 0.
      BAR1: 32 bit memory at 0xfe000000 [0xfe000fff].
      BAR4: 64 bit prefetchable memory at 0xfd200000 [0xfd203fff].
      id "virtio_scsi_pci0"
  Bus  0, device   7, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 6.
      subordinate bus 6.
      IO range [0xf000, 0x0fff]
      memory range [0xfde00000, 0xfdffffff]
      prefetchable memory range [0xfd000000, 0xfd1fffff]
      BAR0: 32 bit memory at 0xfea16000 [0xfea16fff].
      id "pcie.0-root-port-7"
  Bus  6, device   0, function 0:
    Ethernet controller: PCI device 1af4:1041
      IRQ 0.
      BAR1: 32 bit memory at 0xfde40000 [0xfde40fff].
      BAR4: 64 bit prefetchable memory at 0xfd000000 [0xfd003fff].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0003fffe].
      id "idPO8uVS"
  Bus  0, device  31, function 0:
    ISA bridge: PCI device 8086:2918
      id ""
  Bus  0, device  31, function 2:
    SATA controller: PCI device 8086:2922
      IRQ 0.
      BAR4: I/O at 0xffffffffffffffff [0x001e].
      BAR5: 32 bit memory at 0xfea17000 [0xfea17fff].
      id ""
  Bus  0, device  31, function 3:
    SMBus: PCI device 8086:2930
      IRQ 10.
      BAR4: I/O at 0x0700 [0x073f].
      id ""
============================================================================================================
After step 8 (check that the guest VM boots and works normally), the "info pci" output is:
============================================================================================================
(qemu) info pci
  Bus  0, device   0, function 0:
    Host bridge: PCI device 8086:29c0
      id ""
  Bus  0, device   1, function 0:
    VGA controller: PCI device 1234:1111
      BAR0: 32 bit prefetchable memory at 0xfc000000 [0xfcffffff].
      BAR2: 32 bit memory at 0xfea10000 [0xfea10fff].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0000fffe].
      id ""
  Bus  0, device   2, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 1.
      subordinate bus 1.
      IO range [0xf000, 0x0fff]
      memory range [0xfe800000, 0xfe9fffff]
      prefetchable memory range [0xfda00000, 0xfdbfffff]
      BAR0: 32 bit memory at 0xfea11000 [0xfea11fff].
      id "pcie_root_port_0"
  Bus  0, device   3, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 2.
      subordinate bus 2.
      IO range [0xf000, 0x0fff]
      memory range [0xfe600000, 0xfe7fffff]
      prefetchable memory range [0xfd800000, 0xfd9fffff]
      BAR0: 32 bit memory at 0xfea12000 [0xfea12fff].
      id "pcie_root_port_1"
  Bus  0, device   4, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 3.
      subordinate bus 3.
      IO range [0xf000, 0x0fff]
      memory range [0xfe400000, 0xfe5fffff]
      prefetchable memory range [0xfd600000, 0xfd7fffff]
      BAR0: 32 bit memory at 0xfea13000 [0xfea13fff].
      id "pcie_root_port_2"
  Bus  0, device   5, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 4.
      subordinate bus 4.
      IO range [0xf000, 0x0fff]
      memory range [0xfe200000, 0xfe3fffff]
      prefetchable memory range [0xfd400000, 0xfd5fffff]
      BAR0: 32 bit memory at 0xfea14000 [0xfea14fff].
      id "pcie.0-root-port-5"
  Bus  4, device   0, function 0:
    USB controller: PCI device 1b36:000d
      IRQ 10.
      BAR0: 64 bit memory at 0xfe200000 [0xfe203fff].
      id "usb1"
  Bus  0, device   6, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 5.
      subordinate bus 5.
      IO range [0xf000, 0x0fff]
      memory range [0xfe000000, 0xfe1fffff]
      prefetchable memory range [0xfd200000, 0xfd3fffff]
      BAR0: 32 bit memory at 0xfea15000 [0xfea15fff].
      id "pcie.0-root-port-6"
  Bus  5, device   0, function 0:
    SCSI controller: PCI device 1af4:1048
      IRQ 11.
      BAR1: 32 bit memory at 0xfe000000 [0xfe000fff].
      BAR4: 64 bit prefetchable memory at 0xfd200000 [0xfd203fff].
      id "virtio_scsi_pci0"
  Bus  0, device   7, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 6.
      subordinate bus 6.
      IO range [0xf000, 0x0fff]
      memory range [0xfde00000, 0xfdffffff]
      prefetchable memory range [0xfd000000, 0xfd1fffff]
      BAR0: 32 bit memory at 0xfea16000 [0xfea16fff].
      id "pcie.0-root-port-7"
  Bus  6, device   0, function 0:
    Ethernet controller: PCI device 1af4:1041
      IRQ 11.
      BAR1: 32 bit memory at 0xfde40000 [0xfde40fff].
      BAR4: 64 bit prefetchable memory at 0xfd000000 [0xfd003fff].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0003fffe].
      id "idPO8uVS"
  Bus  0, device  31, function 0:
    ISA bridge: PCI device 8086:2918
      id ""
  Bus  0, device  31, function 2:
    SATA controller: PCI device 8086:2922
      IRQ 0.
      BAR4: I/O at 0xffffffffffffffff [0x001e].
      BAR5: 32 bit memory at 0xfea17000 [0xfea17fff].
      id ""
  Bus  0, device  31, function 3:
    SMBus: PCI device 8086:2930
      IRQ 10.
      BAR4: I/O at 0x0700 [0x073f].
      id ""
=========================================================================================================

2. Tested with q35 + drive format + win2019 guest on rhel8 fast train: also reproduced this issue this time; tried 3 times, all reproduced.

The "info pci" results are same with on rhel8 fast train, after step5, IRQ for scsi controller is 0. 
===================================================================  
Bus  5, device   0, function 0:
    SCSI controller: PCI device 1af4:1048
      IRQ 0.
      BAR1: 32 bit memory at 0xfe000000 [0xfe000fff].
      BAR4: 64 bit prefetchable memory at 0xfd200000 [0xfd203fff].
      id "virtio_scsi_pci0"
===================================================================
But after step 8, the IRQ for the SCSI controller changes to 11.
===================================================================
Bus  5, device   0, function 0:
    SCSI controller: PCI device 1af4:1048
      IRQ 11.
      BAR1: 32 bit memory at 0xfe000000 [0xfe000fff].
      BAR4: 64 bit prefetchable memory at 0xfd200000 [0xfd203fff].
      id "virtio_scsi_pci0"
===================================================================

3. Tested with q35 + drive format + win2019 guest on a rhel7 host: also reproduced this issue.
4. Tested with q35 + drive format + win2016 guest on rhel8 slow train: also reproduced this issue.
5. Tested with q35 + drive format + win10-64 guest on rhel8 slow train: not reproduced; tried 3 times, no BSOD, and after step 8 the guest works normally.
The "info pci" result after step 5 (enable the vioscsi driver):
===========================================================================================
(qemu) info pci
  Bus  0, device   0, function 0:
    Host bridge: PCI device 8086:29c0
      PCI subsystem 1af4:1100
      id ""
  Bus  0, device   1, function 0:
    VGA controller: PCI device 1234:1111
      PCI subsystem 1af4:1100
      BAR0: 32 bit prefetchable memory at 0xfc000000 [0xfcffffff].
      BAR2: 32 bit memory at 0xfea10000 [0xfea10fff].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0000fffe].
      id ""
  Bus  0, device   2, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 1.
      subordinate bus 1.
      IO range [0xf000, 0x0fff]
      memory range [0xfe800000, 0xfe9fffff]
      prefetchable memory range [0xfda00000, 0xfdbfffff]
      BAR0: 32 bit memory at 0xfea11000 [0xfea11fff].
      id "pcie_root_port_0"
  Bus  0, device   3, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 2.
      subordinate bus 2.
      IO range [0xf000, 0x0fff]
      memory range [0xfe600000, 0xfe7fffff]
      prefetchable memory range [0xfd800000, 0xfd9fffff]
      BAR0: 32 bit memory at 0xfea12000 [0xfea12fff].
      id "pcie_root_port_1"
  Bus  0, device   4, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 3.
      subordinate bus 3.
      IO range [0xf000, 0x0fff]
      memory range [0xfe400000, 0xfe5fffff]
      prefetchable memory range [0xfd600000, 0xfd7fffff]
      BAR0: 32 bit memory at 0xfea13000 [0xfea13fff].
      id "pcie_root_port_2"
  Bus  0, device   5, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 4.
      subordinate bus 4.
      IO range [0xf000, 0x0fff]
      memory range [0xfe200000, 0xfe3fffff]
      prefetchable memory range [0xfd400000, 0xfd5fffff]
      BAR0: 32 bit memory at 0xfea14000 [0xfea14fff].
      id "pcie.0-root-port-5"
  Bus  4, device   0, function 0:
    USB controller: PCI device 1b36:000d
      PCI subsystem 1af4:1100
      IRQ 0.
      BAR0: 64 bit memory at 0xfe200000 [0xfe203fff].
      id "usb1"
  Bus  0, device   6, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 5.
      subordinate bus 5.
      IO range [0xf000, 0x0fff]
      memory range [0xfe000000, 0xfe1fffff]
      prefetchable memory range [0xfd200000, 0xfd3fffff]
      BAR0: 32 bit memory at 0xfea15000 [0xfea15fff].
      id "pcie.0-root-port-6"
  Bus  5, device   0, function 0:
    SCSI controller: PCI device 1af4:1048
      PCI subsystem 1af4:1100
      IRQ 0.
      BAR1: 32 bit memory at 0xfe000000 [0xfe000fff].
      BAR4: 64 bit prefetchable memory at 0xfd200000 [0xfd203fff].
      id "virtio_scsi_pci0"
  Bus  0, device   7, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 6.
      subordinate bus 6.
      IO range [0xf000, 0x0fff]
      memory range [0xfde00000, 0xfdffffff]
      prefetchable memory range [0xfd000000, 0xfd1fffff]
      BAR0: 32 bit memory at 0xfea16000 [0xfea16fff].
      id "pcie.0-root-port-7"
  Bus  6, device   0, function 0:
    Ethernet controller: PCI device 1af4:1041
      PCI subsystem 1af4:1100
      IRQ 0.
      BAR1: 32 bit memory at 0xfde40000 [0xfde40fff].
      BAR4: 64 bit prefetchable memory at 0xfd000000 [0xfd003fff].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0003fffe].
      id "idPO8uVS"
  Bus  0, device  31, function 0:
    ISA bridge: PCI device 8086:2918
      PCI subsystem 1af4:1100
      id ""
  Bus  0, device  31, function 2:
    SATA controller: PCI device 8086:2922
      PCI subsystem 1af4:1100
      IRQ 0.
      BAR4: I/O at 0xffffffffffffffff [0x001e].
      BAR5: 32 bit memory at 0xfea17000 [0xfea17fff].
      id ""
  Bus  0, device  31, function 3:
    SMBus: PCI device 8086:2930
      PCI subsystem 1af4:1100
      IRQ 10.
      BAR4: I/O at 0x0700 [0x073f].
      id ""
===========================================================================================
The "info pci" result after step 8(Check the guest vm can be booted in and works normally):
===========================================================================================
(qemu) info pci
  Bus  0, device   0, function 0:
    Host bridge: PCI device 8086:29c0
      id ""
  Bus  0, device   1, function 0:
    VGA controller: PCI device 1234:1111
      BAR0: 32 bit prefetchable memory at 0xfc000000 [0xfcffffff].
      BAR2: 32 bit memory at 0xfea10000 [0xfea10fff].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0000fffe].
      id ""
  Bus  0, device   2, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 1.
      subordinate bus 1.
      IO range [0xf000, 0x0fff]
      memory range [0xfe800000, 0xfe9fffff]
      prefetchable memory range [0xfda00000, 0xfdbfffff]
      BAR0: 32 bit memory at 0xfea11000 [0xfea11fff].
      id "pcie_root_port_0"
  Bus  0, device   3, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 2.
      subordinate bus 2.
      IO range [0xf000, 0x0fff]
      memory range [0xfe600000, 0xfe7fffff]
      prefetchable memory range [0xfd800000, 0xfd9fffff]
      BAR0: 32 bit memory at 0xfea12000 [0xfea12fff].
      id "pcie_root_port_1"
  Bus  0, device   4, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 3.
      subordinate bus 3.
      IO range [0xf000, 0x0fff]
      memory range [0xfe400000, 0xfe5fffff]
      prefetchable memory range [0xfd600000, 0xfd7fffff]
      BAR0: 32 bit memory at 0xfea13000 [0xfea13fff].
      id "pcie_root_port_2"
  Bus  0, device   5, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 4.
      subordinate bus 4.
      IO range [0xf000, 0x0fff]
      memory range [0xfe200000, 0xfe3fffff]
      prefetchable memory range [0xfd400000, 0xfd5fffff]
      BAR0: 32 bit memory at 0xfea14000 [0xfea14fff].
      id "pcie.0-root-port-5"
  Bus  4, device   0, function 0:
    USB controller: PCI device 1b36:000d
      IRQ 0.
      BAR0: 64 bit memory at 0xfe200000 [0xfe203fff].
      id "usb1"
  Bus  0, device   6, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 5.
      subordinate bus 5.
      IO range [0xf000, 0x0fff]
      memory range [0xfe000000, 0xfe1fffff]
      prefetchable memory range [0xfd200000, 0xfd3fffff]
      BAR0: 32 bit memory at 0xfea15000 [0xfea15fff].
      id "pcie.0-root-port-6"
  Bus  5, device   0, function 0:
    SCSI controller: PCI device 1af4:1048
      IRQ 0.
      BAR1: 32 bit memory at 0xfe000000 [0xfe000fff].
      BAR4: 64 bit prefetchable memory at 0xfd200000 [0xfd203fff].
      id "virtio_scsi_pci0"
  Bus  0, device   7, function 0:
    PCI bridge: PCI device 1b36:000c
      IRQ 0.
      BUS 0.
      secondary bus 6.
      subordinate bus 6.
      IO range [0xf000, 0x0fff]
      memory range [0xfde00000, 0xfdffffff]
      prefetchable memory range [0xfd000000, 0xfd1fffff]
      BAR0: 32 bit memory at 0xfea16000 [0xfea16fff].
      id "pcie.0-root-port-7"
  Bus  6, device   0, function 0:
    Ethernet controller: PCI device 1af4:1041
      IRQ 0.
      BAR1: 32 bit memory at 0xfde40000 [0xfde40fff].
      BAR4: 64 bit prefetchable memory at 0xfd000000 [0xfd003fff].
      BAR6: 32 bit memory at 0xffffffffffffffff [0x0003fffe].
      id "idPO8uVS"
  Bus  0, device  31, function 0:
    ISA bridge: PCI device 8086:2918
      id ""
  Bus  0, device  31, function 2:
    SATA controller: PCI device 8086:2922
      IRQ 0.
      BAR4: I/O at 0xc040 [0xc05f].
      BAR5: 32 bit memory at 0xfea17000 [0xfea17fff].
      id ""
  Bus  0, device  31, function 3:
    SMBus: PCI device 8086:2930
      IRQ 10.
      BAR4: I/O at 0x0700 [0x073f].
      id ""
===========================================================================================
6. Tested with q35 + drive format + win2012-r2 guest on rhel8 slow train: not reproduced; tried 2 times, no BSOD, and after step 8 the guest works normally.


Best Regards~
Peixiu

Comment 6 Vadim Rozenfeld 2019-02-14 11:21:14 UTC
Thanks a lot.
This is quite helpful. At least we can see that, for some reason, there is a fallback from MSI to legacy IRQ interrupt mode. Could you please give it another try with a smaller number of vCPUs - four or even two?

Another question is whether viostor has the same problem, because viostor and vioscsi are almost the same in terms of interrupt handling.

Best regards,
Vadim.
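
For reference, the reduced-vCPU run requested above only needs the -smp line of the reproducer changed; a minimal sketch (the exact core/socket split is illustrative):

    # original:   -smp 24,maxcpus=24,cores=12,threads=1,sockets=2
    # 2-vCPU run: -smp 2,maxcpus=2,cores=1,threads=1,sockets=2
    # 4-vCPU run: -smp 4,maxcpus=4,cores=2,threads=1,sockets=2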

Comment 8 Peixiu Hou 2019-02-15 07:33:25 UTC
(In reply to Vadim Rozenfeld from comment #6)
Hi Vadim,

I tried to reproduce this issue with newly installed images and got the following results:

1. Tested on a newly installed win2019 image (image installed with -blockdev mode), other steps as in comment #0 (with -blockdev mode): reproduced this issue.
QEMU command-line options used for the win2019 image installation:
    -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0-root-port-6,addr=0x0 \
    -blockdev node-name=file_image1,driver=file,cache.direct=on,cache.no-flush=off,filename=/home/kvm_autotest_root/images/win2019-64-virtio-scsi.qcow2,aio=threads \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \


2. Tested on a newly installed win2019 image (image installed with -drive mode), other steps as in comment #0 (with -blockdev replaced by -drive mode): cannot reproduce this issue.
QEMU command-line options used for the win2019 image installation:
    -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0-root-port-6,addr=0x0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/win2019-64-virtio-scsi-drive.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1 \

3. Tested on a newly installed win10-64 image (image installed with -blockdev mode), other steps as in comment #0 (with -blockdev mode): reproduced this issue.

4. Tested on a newly installed win10-64 image (image installed with -drive mode), other steps as in comment #0 (with -blockdev replaced by -drive mode): cannot reproduce this issue.

5. Tested on a win2019 image installed with -blockdev mode, but with -blockdev replaced by -drive mode for the other steps from comment #0: reproduced this issue. I'm sorry about some of the -drive results provided earlier; I do reproduce the issue with an image that was installed via -blockdev, so the -drive run also failed.

6. Tested on a newly installed win10-64 image (image installed with -blockdev mode), other steps as in comment #0 (with -blockdev mode and vcpu=2): reproduced this issue.

7. Tested on a newly installed win2019 image (image installed with -blockdev mode + virtio-blk-pci), other steps as in comment #0 (with -blockdev mode + virtio-blk-pci): also reproduced this issue.
QEMU command-line options used for the win2019 image installation:
    -blockdev node-name=file_image1,driver=file,cache.direct=on,cache.no-flush=off,filename=/home/kvm_autotest_root/images/win2019-64-virtio.qcow2,aio=threads \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie.0-root-port-6,addr=0x0 \

Used versions:
kernel-4.18.0-62.el8.x86_64
qemu-kvm-2.12.0-57.module+el8+2683+02b3b955.x86_64
virtio-win-1.9.7-3.el8.iso
seabios-bin-1.11.1-3.module+el8+2529+a9686a4d.noarch

Best Regards~
Peixiu

Comment 10 Vadim Rozenfeld 2019-02-17 23:46:04 UTC
(In reply to Peixiu Hou from comment #8)

Thanks a lot.
Could you please try booting into Safe mode?
Vadim.

Comment 11 Peixiu Hou 2019-02-18 04:44:11 UTC
(In reply to Vadim Rozenfeld from comment #10)

Hi Vadim,

I tried booting into Safe Mode at step 8 (with blockdev + win2019): the guest booted into Safe Mode successfully. I then disabled the Safe Mode boot setting inside the guest and rebooted, and the guest booted into the OS normally.

Best Regards~
Peixiu

Comment 12 Vadim Rozenfeld 2019-02-18 04:59:13 UTC
(In reply to Peixiu Hou from comment #11)

Thanks a lot.

Then it is not a bug but expected behaviour.
IIRC, recent Windows OSes disable third-party boot-time storage drivers if they are not in use. The only way to bring them back is to boot into Safe Mode. This probably should be documented somewhere. I will try to find this info on the MSDN site, and then we need to create some notes to document this behaviour.

All the best,
Vadim.
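
For anyone hitting this, one way to get the guest into Safe Mode without catching the boot menu is bcdedit, run from an elevated prompt inside the guest while it still boots in the old (ide/sata) configuration; a sketch, assuming the default boot entry is the one in use:

    rem boot into Safe Mode (minimal) on the next restart
    bcdedit /set {default} safeboot minimal

    rem once the virtio-scsi boot works again, remove the flag and reboot
    bcdedit /deletevalue {default} safeboot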

Comment 13 Peixiu Hou 2019-02-18 07:23:55 UTC
(In reply to Vadim Rozenfeld from comment #12)


Hi Vadim,

I am confused here: with the -blockdev format we hit this issue, but with the -drive format we do not. If the above were the reason, the -drive format should reproduce the issue as well.
Also, we used to test with the -drive format before; many virtio-win-scsi automation tests cover the steps of this issue, and we never hit it.

Thanks~
Peixiu

Comment 14 Vadim Rozenfeld 2019-02-19 01:57:44 UTC
(In reply to Peixiu Hou from comment #13)

It can be related to the device model.
There is some information on the MSDN site here:
https://support.microsoft.com/en-gb/help/2795397/inaccessible-boot-device-error-message-after-you-install-a-third-party

To be honest, I don't know the exact MS policy on disabling a boot-time device driver versus keeping it active.
There is still a chance that there is a bug in the driver which triggers an error when Windows tries to fast boot after changing the storage type (from sata/ide to virtio), and that this error makes Windows disable the virtio driver. I will try to investigate this scenario a bit later.

Best,
Vadim.
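
Related to the above: a commonly cited workaround for INACCESSIBLE_BOOT_DEVICE when moving a Windows system disk to virtio is to force the already-installed miniport back to boot-start before switching the disk over. A sketch only, not verified here, and the vioscsi service name and registry paths are assumptions; run from an elevated prompt inside the guest:

    rem mark the vioscsi miniport as a boot-start driver
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\vioscsi" /v Start /t REG_DWORD /d 0 /f

    rem drop the per-device StartOverride that can keep it disabled (the key may not exist)
    reg delete "HKLM\SYSTEM\CurrentControlSet\Services\vioscsi\StartOverride" /f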

Comment 19 Ademar Reis 2020-02-05 22:54:08 UTC
QEMU has been recently split into sub-components and as a one-time operation to avoid breakage of tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks

Comment 20 Vadim Rozenfeld 2020-03-03 09:12:13 UTC
Based on my previous comment https://bugzilla.redhat.com/show_bug.cgi?id=1670673#c14
I'm going to close this issue as WONTFIX. 
The BSOD in this situation is expected behaviour and can be fixed by booting into Safe Mode,
as described in the MSDN article https://support.microsoft.com/en-gb/help/2795397/inaccessible-boot-device-error-message-after-you-install-a-third-party

