Bug 1667330 - [virtio-win][viostor] Guest sometimes cannot shut down after unplugging a virtio-blk-pci device on a Windows guest
Summary: [virtio-win][viostor] Guest sometimes cannot shut down after unplugging a virtio-blk-pci device on a Windows guest
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virtio-win
Version: 9.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Vadim Rozenfeld
QA Contact: menli@redhat.com
URL:
Whiteboard:
Duplicates: 1683599
Depends On: 1682882
Blocks: 1744438 1897024
 
Reported: 2019-01-18 07:43 UTC by Peixiu Hou
Modified: 2023-07-11 09:19 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-07-11 07:58:58 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
long time in shutting down page (14.65 KB, image/png)
2019-01-18 07:44 UTC, Peixiu Hou
Download the windows system event log from win2012-r2 guest (2.07 MB, application/octet-stream)
2019-01-18 07:55 UTC, Peixiu Hou

Description Peixiu Hou 2019-01-18 07:43:21 UTC
Description of problem:
After unplugging a virtio-blk-pci device and then shutting down the Windows VM, the VM cannot shut down: the "Shutting down" screen shows the spinning status icon indefinitely (still spinning after 17 hours), with no BSOD or any other status change.

It is not 100% reproducible: with the automation test it was hit 1 time in 10 runs; with manual testing it was hit about 1 time in 5 runs.

Version-Release number of selected component (if applicable):
kernel-4.18.0-58.el8.x86_64
qemu-kvm-2.12.0-57.module+el8+2683+02b3b955.x86_64
virtio-win-prewhql-163
seabios-bin-1.11.1-3.module+el8+2529+a9686a4d.noarch

How reproducible:
10%

Steps to Reproduce:
1.Boot the guest up:
------------------------------------------------------------------------------
MALLOC_PERTURB_=1  /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1 \
    -device pcie-root-port,id=pcie_root_port_0,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device pcie-root-port,id=pcie_root_port_1,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -device pcie-root-port,id=pcie_root_port_2,slot=4,chassis=4,addr=0x4,bus=pcie.0  \
    -device pvpanic,ioport=0x505,id=idiQ69rd  \
    -device pcie-root-port,id=pcie.0-root-port-5,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-5,addr=0x0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/win2012-64r2-virtio.qcow2 \
    -device pcie-root-port,id=pcie.0-root-port-6,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,bus=pcie.0-root-port-6,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-7,slot=7,chassis=7,addr=0x7,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:f9:fa:fb:fc:fd,id=idMMmR7j,vectors=4,netdev=idhlyOBa,bus=pcie.0-root-port-7,addr=0x0  \
    -netdev tap,id=idhlyOBa,vhost=on \
    -m 14336  \
    -smp 24,maxcpus=24,cores=12,threads=1,sockets=2  \
    -cpu 'Skylake-Server',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
    -drive id=drive_cd1,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/windows/winutils.iso \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=1,bus=ide.0,unit=0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio \
    -qmp tcp:0:4446,server,nowait \
-------------------------------------------------------------------------------
2. hot-plug a drive:
(qemu)drive_add auto id=drive_stg0,if=none,snapshot=off,aio=threads,cache=none,format=raw,file=/home/storage0.raw
OK

3. hot-plug a virtio-blk-pci device:
#telnet host_ip 4446
{"execute": "qmp_capabilities"}
{"execute":"device_add","arguments":{"driver":"virtio-blk-pci","id":"stg0","drive":"drive_stg0","bus":"pcie_root_port_0"}}

4. Format disk:
C:\>echo list disk > disk && echo exit >> disk && diskpart /s disk
C:\>echo list disk > disk &&echo select disk 1 >> disk && echo detail disk >> disk && echo exit>> disk && diskpart /s disk&& del /f disk
C:\>echo list disk > disk &&echo select disk 1 >> disk && echo create partition primary >> disk && echo exit>> disk && diskpart /s disk&& del /f disk
C:\>echo list disk > disk &&echo select disk 1 >> disk && echo list partition >> disk && echo select partition 1 >> disk && echo assign letter=I >> disk && echo format fs=ntfs quick >> disk  && echo exit>> disk && diskpart /s disk&& del /f disk

5. Do iozone test on new added disk:
C:\>D:\Iozone\iozone.exe -azR -r 64k -n 125M -g 512M -M -i 0 -i 1 -b I:\iozone_test -f I:\testfile

6. After iozone test finished, unplug the disk:
{'execute': 'device_del', 'arguments': {'id': 'stg0'}, 'id': '4EnspY6c'}

7. Shutdown the guest vm.
C:\>shutdown -s 
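
For reference, steps 6 and 7 can also be driven from the host over the QMP socket opened with "-qmp tcp:0:4446,server,nowait", which makes the hang easy to detect in a loop. The script below is only a sketch, not part of the original test: the host address, the 300-second timeout, and the use of QMP system_powerdown in place of running "shutdown -s" inside the guest are assumptions.

#!/usr/bin/env python3
# Sketch: unplug the hot-plugged data disk, request an ACPI power-down, and
# report whether the guest ever finishes shutting down. Stdlib only.
import json
import socket

HOST, PORT = "127.0.0.1", 4446          # matches -qmp tcp:0:4446,server,nowait

sock = socket.create_connection((HOST, PORT))
stream = sock.makefile("rw", encoding="utf-8")

def qmp(cmd, args=None):
    # Send one command and read until its return/error arrives, printing any
    # asynchronous events (e.g. DEVICE_DELETED) that show up in between.
    req = {"execute": cmd}
    if args:
        req["arguments"] = args
    stream.write(json.dumps(req) + "\n")
    stream.flush()
    while True:
        msg = json.loads(stream.readline())
        if "return" in msg or "error" in msg:
            return msg
        print("event:", msg.get("event"))

json.loads(stream.readline())           # QMP greeting
qmp("qmp_capabilities")
qmp("device_del", {"id": "stg0"})       # step 6: unplug the virtio-blk disk
qmp("system_powerdown")                 # step 7 stand-in: ACPI power button

sock.settimeout(300)                    # a healthy guest powers off well within this
try:
    while True:
        msg = json.loads(stream.readline())
        print("event:", msg.get("event"))
        if msg.get("event") == "SHUTDOWN":
            print("guest shut down normally")
            break
except socket.timeout:
    print("no SHUTDOWN event -- guest appears stuck on the shutting-down screen")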

Actual results:
The "Shutting down" screen shows the spinning status icon indefinitely (still spinning after 17 hours), with no BSOD or any other status change.

Expected results:
The VM shuts down normally.

Additional info:
1. Also hit on win10-64, win2019, win7-64 guests.
2. Also reproduced with virtio-win-prewhql-160, so this is not a regression.
3. Tried the test on a rhel8.0 guest 10 times and did not hit this issue.
4. Also reproduced on a rhel7.6 host with a win2012-r2 guest.

Comment 1 Peixiu Hou 2019-01-18 07:44:15 UTC
Created attachment 1521428 [details]
long time in shutting down page

Comment 2 Peixiu Hou 2019-01-18 07:55:18 UTC
Created attachment 1521430 [details]
Download the windows system event log from win2012-r2 guest

Comment 4 lchai 2019-03-04 09:32:23 UTC
*** Bug 1683599 has been marked as a duplicate of this bug. ***

Comment 7 Vadim Rozenfeld 2019-04-10 02:04:42 UTC
Can we check if the problem is reproducible with the latest drivers from build 170?
http://download.eng.bos.redhat.com/brewroot/packages/virtio-win-prewhql/0.1/170/win/virtio-win-prewhql-0.1.zip

Thanks,
Vadim.

Comment 8 Peixiu Hou 2019-04-11 03:04:42 UTC
(In reply to Vadim Rozenfeld from comment #7)
> Can we check if the problem is reproducible with the latest sdrivers from
> build 170?
> http://download.eng.bos.redhat.com/brewroot/packages/virtio-win-prewhql/0.1/
> 170/win/virtio-win-prewhql-0.1.zip
> 
> Thanks,
> Vadim.

Hi Vadim,

Tried the test with virtio-win-prewhql-170 on a win2012-r2 guest; it can also be reproduced there. I ran it 100 times with automation and hit this issue 40 times.

Best Regards~
Peixiu

Comment 10 menli@redhat.com 2020-05-09 09:12:07 UTC
Hit a similar issue for case 'migration_with_block.with_dataplane_on2off.send_shell.shutdown_vm' in the auto test: the VM failed to shut down. Just tracking it here.

Host:
qemu-kvm-4.2.0-19.module+el8.2.0+6296+6b821950.x86_64
kernel-4.18.0-193.el8.x86_64
seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64

Guest:
windows 2016(q35) with virtio-win-prewhql-181.iso

Comment 11 menli@redhat.com 2020-05-09 10:14:53 UTC
(In reply to menli from comment #10)
> hit a similar issue for case
> 'migration_with_block.with_dataplane_on2off.send_shell.shutdown_vm' in auto
> test,failed to shutdown vm,just track it here.
> 
> Host:
> qemu-kvm-4.2.0-19.module+el8.2.0+6296+6b821950.x86_64
> kernel-4.18.0-193.el8.x86_649bf
> seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64
> 
> Guest:
> windows 2016(q35) with virtio-win-prewhql-181.iso

1. Create a 40G data disk:
qemu-img create -f qcow2 /home/kvm_autotest_root/images/storage0.qcow2 40G

2. Boot guest with dataplane on src host

MALLOC_PERTURB_=1  /usr/libexec/qemu-kvm \
    -S  \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35 \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 14336  \
    -smp 24,maxcpus=24,cores=12,threads=1,dies=1,sockets=2  \
    -cpu 'Skylake-Server',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
    -chardev socket,server,nowait,path=/var/tmp/avocado_3s1051dg/monitor-qmpmonitor1-20200429-015214-HL3oizxq,id=qmp_id_qmpmonitor1  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,server,nowait,path=/var/tmp/avocado_3s1051dg/monitor-catch_monitor-20200429-015214-HL3oizxq,id=qmp_id_catch_monitor  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=id3UJFmZ \
    -chardev socket,server,nowait,path=/var/tmp/avocado_3s1051dg/serial-serial0-20200429-015214-HL3oizxq,id=chardev_serial0 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20200429-015214-HL3oizxq,path=/var/tmp/avocado_3s1051dg/seabios-20200429-015214-HL3oizxq,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20200429-015214-HL3oizxq,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/win2016-64-virtio.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,iothread=iothread0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_stg0,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/storage0.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg0,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_stg0 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-blk-pci,id=stg0,drive=drive_stg0,bootindex=1,write-cache=on,iothread=iothread1,bus=pcie-root-port-3,addr=0x0 \
    -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x1.0x4,bus=pcie.0,chassis=5 \
    -device virtio-net-pci,mac=9a:49:63:b8:8b:03,id=id4BeT93,netdev=id3m5Beo,bus=pcie-root-port-4,addr=0x0  \
    -netdev tap,id=id3m5Beo,vhost=on,vhostfd=21,fd=15 \
    -blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=2,write-cache=on,bus=ide.0,unit=0  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=6

3. Boot guest without dataplane on dst host.

MALLOC_PERTURB_=1  /usr/libexec/qemu-kvm \
    -S  \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35 \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 14336  \
    -smp 24,maxcpus=24,cores=12,threads=1,dies=1,sockets=2  \
    -cpu 'Skylake-Server',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
    -chardev socket,server,nowait,path=/var/tmp/avocado_3s1051dg/monitor-qmpmonitor1-20200429-015247-jtpIzwSr,id=qmp_id_qmpmonitor1  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,server,nowait,path=/var/tmp/avocado_3s1051dg/monitor-catch_monitor-20200429-015247-jtpIzwSr,id=qmp_id_catch_monitor  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idZ7GSeT \
    -chardev socket,server,nowait,path=/var/tmp/avocado_3s1051dg/serial-serial0-20200429-015247-jtpIzwSr,id=chardev_serial0 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20200429-015247-jtpIzwSr,path=/var/tmp/avocado_3s1051dg/seabios-20200429-015247-jtpIzwSr,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20200429-015247-jtpIzwSr,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/win2016-64-virtio.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_stg0,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/storage0.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_stg0,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_stg0 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-blk-pci,id=stg0,drive=drive_stg0,bootindex=1,write-cache=on,bus=pcie-root-port-3,addr=0x0 \
    -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x1.0x4,bus=pcie.0,chassis=5 \
    -device virtio-net-pci,mac=9a:49:63:b8:8b:03,id=idZTC1OV,netdev=idD0Xgvl,bus=pcie-root-port-4,addr=0x0  \
    -netdev tap,id=idD0Xgvl,vhost=on,vhostfd=42,fd=27 \
    -blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=2,write-cache=on,bus=ide.0,unit=0  \
    -vnc :1  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=6 \
    -incoming tcp:0:5846

4. Do the migration (the QMP commands are sketched below).

5. Format the data disk in diskpart.

6. Run iozone on the data disk:
D:\Iozone\iozone.exe -az -b C:\E_stress_test -g 1g -y 32k -i 0 -i 1 -I -f E:\iozone_test

7. Shut down the guest:
shutdown -t 0 -s


After step 7, the guest fails to shut down.
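
Step 4 above does not spell out the migration command, so here is a minimal sketch of how it can be driven over the source VM's QMP unix socket. The socket path is the qmpmonitor1 path from the source command line; the destination-host placeholder and the 2-second polling interval are assumptions, and the real auto test may do this differently.

#!/usr/bin/env python3
# Sketch of step 4: start the migration and poll query-migrate until it ends.
import json
import socket
import time

QMP_PATH = "/var/tmp/avocado_3s1051dg/monitor-qmpmonitor1-20200429-015214-HL3oizxq"
DST_URI = "tcp:DST_HOST_IP:5846"        # must match "-incoming tcp:0:5846" on the dst host

sock = socket.socket(socket.AF_UNIX)
sock.connect(QMP_PATH)
stream = sock.makefile("rw", encoding="utf-8")

def qmp(cmd, args=None):
    req = {"execute": cmd}
    if args:
        req["arguments"] = args
    stream.write(json.dumps(req) + "\n")
    stream.flush()
    while True:
        msg = json.loads(stream.readline())
        if "return" in msg or "error" in msg:
            return msg                  # asynchronous events are simply skipped

json.loads(stream.readline())           # QMP greeting
qmp("qmp_capabilities")
qmp("migrate", {"uri": DST_URI})
while True:
    status = qmp("query-migrate")["return"].get("status", "setup")
    print("migration status:", status)
    if status in ("completed", "failed", "cancelled"):
        break
    time.sleep(2)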

Comment 12 Vadim Rozenfeld 2020-05-10 11:54:17 UTC
Can you check whether there is an Event ID 129, or any other storage/disk-related event, logged in the System event log while the system is shutting down?

Thank you,
Vadim.
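
In case it helps, one way to pull exactly those entries out of the guest is to query the System log for Event ID 129 (the storage "reset to device" warning). The snippet below is only a sketch and assumes Python is available inside the guest; the same wevtutil query can be run directly from cmd.exe instead.

# Sketch: list the most recent Event ID 129 entries from the Windows System
# event log. Run inside the guest.
import subprocess

result = subprocess.run(
    ["wevtutil", "qe", "System", "/q:*[System[(EventID=129)]]",
     "/f:text", "/c:20", "/rd:true"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip() or "no Event ID 129 entries in the System log")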

Comment 13 menli@redhat.com 2020-05-11 13:13:21 UTC
(In reply to Vadim Rozenfeld from comment #12)
> Can you check if there is event 129 or any other 
> related events in the system events log file triggered by system
> during shutting down the system?
> 
> Thank you,
> Vadim.

Hi Vadim,

I checked the System event log and only found error Event ID 10016; there were no storage/disk-related events.


Thanks

Menghuan

Comment 15 Vadim Rozenfeld 2020-05-26 07:15:36 UTC
(In reply to menli from comment #14)
> hit the similar issue when do reboot operation after unplug a disk and hit
> BSOD.(hit this issue quite often this time)
> 
> Host:
> qemu-kvm-4.2.0-21.module+el8.2.1+6586+8b7713b9.x86_64
> kernel-4.18.0-193.el8.x86_64
> seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64
> 
> Guest:
> Win8.1-32-q35
> virtio-win-prewhql-184
> 
> The completed dump log is saved:
> 
> http://fileshare.englab.nay.redhat.com/pub/section2/coredump/bz1667330/
> Memory-win8.1.dmp
> 
> Check the system events log,found Warning Event ID 157 ,219 and error Event
> ID 1001,hope it will be useful for this issue.
> 
> 
> 
> Thanks
> Menghuan

Can you please upload the event log file as well?
Thanks,
Vadim.

Comment 17 qing.wang 2020-07-27 08:44:21 UTC
Reboot sometimes fails after hot-plug/unplug.
4.18.0-226.el8.x86_64
qemu-kvm-core-4.2.0-30.module+el8.3.0+7298+c26a06b8.x86_64
seabios-1.13.0-1.module+el8.3.0+6423+e4cb6418.x86_64

Test steps:
1. /usr/libexec/qemu-kvm \
    -S  \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35 \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 15360  \
    -smp 12,maxcpus=12,cores=6,threads=1,dies=1,sockets=2  \
    -cpu 'Opteron_G5',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0xfff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
    -chardev socket,path=/var/tmp/avocado_pm9mpl_2/monitor-qmpmonitor1-20200722-011514-fXJXRHcC,id=qmp_id_qmpmonitor1,server,nowait  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,path=/var/tmp/avocado_pm9mpl_2/monitor-catch_monitor-20200722-011514-fXJXRHcC,id=qmp_id_catch_monitor,server,nowait  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idhEF81t \
    -chardev socket,path=/var/tmp/avocado_pm9mpl_2/serial-serial0-20200722-011514-fXJXRHcC,id=chardev_serial0,server,nowait \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20200722-011514-fXJXRHcC,path=/var/tmp/avocado_pm9mpl_2/seabios-20200722-011514-fXJXRHcC,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20200722-011514-fXJXRHcC,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/win2019-64-virtio.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-2,addr=0x0,iothread=iothread0 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:d9:c2:3c:da:ae,id=idr2hpK1,netdev=idoofjVJ,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=idoofjVJ,vhost=on,vhostfd=21,fd=15 \
    -blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=1,write-cache=on,bus=ide.0,unit=0  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5

2. Hot-plug the disk:

{"execute": "blockdev-add", "arguments": {"node-name": "file_stg0", "driver": "file", "aio": "threads", "filename": "/home/kvm_autotest_root/images/storage0.qcow2", "cache": {"direct": true, "no-flush": false}}, "id": "t5gzWAv0"}
{"execute": "blockdev-add", "arguments": {"node-name": "drive_stg0", "driver": "qcow2", "cache": {"direct": true, "no-flush": false}, "file": "file_stg0"}, "id": "y5hD3thX"}
{"execute": "device_add", "arguments": {"driver": "virtio-blk-pci", "id": "stg0", "drive": "drive_stg0", "write-cache": "on", "bus": "pcie_extra_root_port_0", "addr": "0x0", "iothread": "iothread1"}, "id": "7qjgpmDO"}

3. Reboot the guest.

4. Hot-unplug the disk:
{"execute": "device_del", "arguments": {"id": "stg0"}, "id": "UlINKVie"}
{"execute": "blockdev-del", "arguments": {"node-name": "drive_stg0"}, "id": "5toNSNyg"}
{"execute": "blockdev-del", "arguments": {"node-name": "file_stg0"}, "id": "xWonev5b"}

5. Reboot the guest:
shutdown /r /f /t 0

The guest hangs at step 5.
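
One observation about the sequence above (not something the original test does): device_del only requests the unplug, and QEMU reports completion asynchronously with a DEVICE_DELETED event, while step 4 deletes the blockdev nodes immediately afterwards. A sketch of a stricter variant that waits for that event before removing the backends is below, in case it helps narrow down whether timing plays a role; the unix-socket path is the qmpmonitor1 path from the command line above.

#!/usr/bin/env python3
# Sketch: unplug stg0, wait for the asynchronous DEVICE_DELETED event, and only
# then remove the blockdev nodes.
import json
import socket

QMP_PATH = "/var/tmp/avocado_pm9mpl_2/monitor-qmpmonitor1-20200722-011514-fXJXRHcC"

sock = socket.socket(socket.AF_UNIX)
sock.connect(QMP_PATH)
stream = sock.makefile("rw", encoding="utf-8")
events = []                              # asynchronous events seen so far

def qmp(cmd, args=None):
    req = {"execute": cmd}
    if args:
        req["arguments"] = args
    stream.write(json.dumps(req) + "\n")
    stream.flush()
    while True:
        msg = json.loads(stream.readline())
        if "return" in msg or "error" in msg:
            return msg
        events.append(msg)               # stash events that arrive in between

def wait_device_deleted(dev_id):
    # Block until QEMU confirms the guest has released the device.
    def matches(e):
        return (e.get("event") == "DEVICE_DELETED"
                and e.get("data", {}).get("device") == dev_id)
    while not any(matches(e) for e in events):
        events.append(json.loads(stream.readline()))

json.loads(stream.readline())            # QMP greeting
qmp("qmp_capabilities")
qmp("device_del", {"id": "stg0"})
wait_device_deleted("stg0")
qmp("blockdev-del", {"node-name": "drive_stg0"})
qmp("blockdev-del", {"node-name": "file_stg0"})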

Reproduced by automation:
python ConfigTest.py --testcase=block_hotplug.block_virtio.fmt_qcow2.default.with_plug.with_reboot.one_pci.q35 --iothread_scheme=roundrobin --nr_iothreads=2 --platform=x86_64 --guestname=Win2019 --driveformat=virtio_blk --nicmodel=virtio_net --imageformat=qcow2 --machines=q35 --customsparams="qemu_force_use_drive_expression = no\nimage_aio=threads\ncd_format=ide"

Comment 19 menli@redhat.com 2020-12-11 11:58:20 UTC
Hit the same issue as in comment 11.

host:
qemu-kvm-5.1.0-15.module+el8.3.1+8772+a3fdeccd.x86_64
kernel-4.18.0-240.el8.x86_64
seabios-1.14.0-1.module+el8.3.0+7638+07cf13d2.x86_64

guest:
win8.1 32 q35, win8.1-32 pc
virtio-win-prewhql-0.1-191.iso

Comment 21 menli@redhat.com 2021-02-07 03:27:39 UTC
Hit the same issue as in comment 11.

host:
kernel-4.18.0-278.el8.dt3.x86_64
qemu-kvm-5.2.0-4.module+el8.4.0+9676+589043b9.x86_64
virtio-win-prewhql-193
seabios-bin-1.14.0-1.module+el8.4.0+8855+a9e237a9.noarch

guest:
win10 20H2

Comment 22 menli@redhat.com 2021-03-01 01:31:39 UTC
hit the same issue on win10-64

RHEL-8.4.0-20210217.d.2
kernel-4.18.0-287.el8.dt3.x86_64
qemu-kvm-5.2.0-7.module+el8.4.0+9943+d64b3717.x86_64
seabios-1.14.0-1.module+el8.4.0+8855+a9e237a9.x86_64
virtio-win-prewhql-195


auto case: block_hotplug.block_virtio.fmt_raw.default.with_plug.with_shutdown.after_unplug.one_pci

Comment 23 menli@redhat.com 2021-06-21 01:25:58 UTC
Hit the same issue on win2016 (q35) as in comment 11.

RHEL-8.5.0-20210609.n.3
qemu-kvm-4.2.0-51.module+el8.5.0+11141+9dff516f.x86_64
seabios-bin-1.13.0-2.module+el8.3.0+7353+9de0a3cc.noarch
kernel-4.18.0-310.el8.x86_64
virtio-win-prewhql-202


auto case: migration_with_block.default.with_dataplane_on2off.send_shell.shutdown_vm

Comment 34 yimsong 2021-08-03 01:42:45 UTC
Hit the same issue on win10-32 when running virtio-win-prewhql-205 viostor testing on rhel850-av.

    kernel-4.18.0-324.el8.x86_64
    qemu-kvm-6.0.0-25.module+el8.5.0+11890+8e7c3f51.x86_64
    virtio-win-prewhql-205
    seabios-bin-1.14.0-1.module+el8.4.0+8855+a9e237a9.noarch
    RHEL-8.5.0-20210727.n.0

auto case:
    block_hotplug.block_virtio.fmt_raw.with_plug.with_shutdown.after_unplug.one_pci

Comment 38 menli@redhat.com 2022-05-06 07:35:36 UTC
Hit a similar issue for case 'migration_with_block.with_dataplane_on2off.send_shell.shutdown_vm' in the auto test: the VM failed to shut down.

Host:
qemu-kvm-docs-6.2.0-11.el9_0.2.x86_64
kernel-5.14.0-70.7.1.el9_0.x86_64
seabios-bin-1.15.0-1.el9.noarch

Guest:
win10 21h2_32 with virtio-win-prewhql-219.iso

Comment 39 menli@redhat.com 2022-09-06 06:54:47 UTC
Hi Vadim,

Actually, I haven't hit this issue recently, but could we extend the "Stale Date" first and keep watching it for some time?


Thanks 
Menghuan

Comment 40 Vadim Rozenfeld 2023-07-11 07:58:58 UTC
Closing this case based on https://bugzilla.redhat.com/show_bug.cgi?id=1667330#c39.
Please feel free to reopen it in case you hit any similar issues.

Comment 41 Peixiu Hou 2023-07-11 09:19:59 UTC
Changing the status to CLOSED CURRENTRELEASE: the issue could be reproduced up until 2022-05-06, but recently we cannot reproduce it~

Any questions, please let me know, thanks~

BR~
Peixiu

