This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you are a Red Hat customer, please continue to file support cases via the Red Hat customer portal; if not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry; the e-mail creates a ServiceNow ticket with Red Hat.

Migrated Bugzilla bugs are moved to status "CLOSED" with resolution "MIGRATED" and "MigratedToJIRA" set in "Keywords". The link to the successor Jira issue is found under "Links", has a small "two-footprint" icon next to it, and directs you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link is available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 1833187 - [virtio-win][viostor+vioscsi] Failed hotunplug in guest
Summary: [virtio-win][viostor+vioscsi] Failed hotunplug in guest
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virtio-win
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 9.2
Assignee: Vadim Rozenfeld
QA Contact: menli@redhat.com
URL:
Whiteboard:
Duplicates: 1752465 (view as bug list)
Depends On:
Blocks: 1744438 1771318
 
Reported: 2020-05-08 01:56 UTC by menli@redhat.com
Modified: 2023-08-24 05:11 UTC (History)
CC: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-07-27 02:53:05 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
bug (28.31 KB, image/png), 2020-05-08 01:56 UTC, menli@redhat.com
Logs for job block_hotplug.block_scsi.fmt_qcow2.with_plug.with_repetition.one_pci.q35 (512 bytes, application/zip), 2021-06-09 05:32 UTC, Peixiu Hou


Links
System ID: Red Hat Issue Tracker, RHEL-869; Last Updated: 2023-08-24 05:11:17 UTC

Description menli@redhat.com 2020-05-08 01:56:58 UTC
Created attachment 1686369 [details]
bug

Description of problem:
The data disk is still present in Disk Management after hot-unplug.


Version-Release number of selected component (if applicable):

qemu-kvm-4.2.0-19.module+el8.2.0+6296+6b821950.x86_64
kernel-4.18.0-193.el8.x86_64
virtio-win-prewhql-181
seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64

How reproducible:
1/5

Steps to Reproduce:
1. Boot the guest with the command line below.
 /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35 \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 14336  \
    -smp 24,maxcpus=24,cores=12,threads=1,dies=1,sockets=2  \
    -cpu 'Skylake-Server',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/win2016-64-virtio.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,iothread=iothread0,bus=pcie-root-port-2,addr=0x0 \
    -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x1.0x4,bus=pcie.0,chassis=5 \
    -device virtio-net-pci,mac=9a:49:63:b8:8b:03,id=id4BeT93,netdev=id3m5Beo,bus=pcie-root-port-4,addr=0x0  \
    -netdev tap,id=id3m5Beo  \
    -blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=2,write-cache=on,bus=ide.0,unit=0  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -qmp tcp:0:1231,server,nowait \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4


2. Hot-plug a blk device:
{"execute": "blockdev-add", "arguments": {"node-name": "file_stg0", "driver": "file", "aio": "threads", "filename": "/home/kvm_autotest_root/images/storage0.raw", "cache": {"direct": true, "no-flush": false}}, "id": "ZODSKhzq"}
{"execute": "blockdev-add", "arguments": {"node-name": "drive_stg0", "driver": "raw", "cache": {"direct": true, "no-flush": false}, "file": "file_stg0"}, "id": "PDBqM4ab"}
{"execute": "device_add", "arguments": {"driver": "virtio-blk-pci", "id": "stg0", "drive": "drive_stg0", "write-cache": "on", "bus": "pcie-root-port-3", "addr": "0x0"}, "id": "AfUx0F55"}


3. Format the disk in Disk Management.

4. Hot-unplug the blk device:
{'execute': 'device_del', 'arguments': {'id': 'stg0'}, 'id': 'w4Tj7zKN'}
{'execute': 'blockdev-del', 'arguments': {'node-name': 'drive_stg0'}, 'id': 'inkmmsdp'}
{'execute': 'blockdev-del', 'arguments': {'node-name': 'file_stg0'}, 'id': 'WqOdPb0n'}

5. Repeat steps 2 and 4 (see the sketch after this list).

6. Send the command 'wmic logicaldisk get drivetype,name,description & wmic diskdrive list brief /format:list' to check the disk info.
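For reference, steps 2 and 4 can be driven over the QMP socket exposed on tcp port 1231 in the command line above. A minimal sketch in Python, assuming the node/device names from step 2; qmp_connect and qmp_cmd are illustrative helpers, not part of the test framework:

import json
import socket

def qmp_connect(host="127.0.0.1", port=1231):
    """Open a QMP session and negotiate capabilities."""
    sock = socket.create_connection((host, port))
    stream = sock.makefile("rw")
    json.loads(stream.readline())                      # server greeting
    stream.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
    stream.flush()
    json.loads(stream.readline())                      # {"return": {}}
    return stream

def qmp_cmd(stream, execute, arguments=None):
    """Send one QMP command and return its reply, skipping async events."""
    msg = {"execute": execute}
    if arguments is not None:
        msg["arguments"] = arguments
    stream.write(json.dumps(msg) + "\n")
    stream.flush()
    while True:
        reply = json.loads(stream.readline())
        if "event" not in reply:
            return reply

stream = qmp_connect()
# Step 4: device_del only *requests* the unplug; the guest completes it
# asynchronously, so the blockdev-del commands should only be sent after
# the DEVICE_DELETED event for "stg0" has arrived.
qmp_cmd(stream, "device_del", {"id": "stg0"})

Note that QEMU only emits DEVICE_DELETED once the guest has actually released the device; issuing blockdev-del before that event can fail with an "in use" error, which is one reason the repetition in step 5 is sensitive to unplug completion.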



Actual results:
After step 6, the disk still displays in Disk Management, but its info cannot be retrieved.

Expected results:
The data disk should no longer appear in Disk Management after hot-unplug.

Additional info:

Comment 1 qing.wang 2020-05-08 07:48:05 UTC
Hit the same issue on

qemu-kvm-common-4.2.0-19.module+el8.3.0+6478+69f490bb.x86_64

This issue can be reproduced by automation:

python ConfigTest.py --testcase=block_hotplug.block_virtio.fmt_raw.default.with_plug.with_repetition.one_pci.q35 --guestname=Win2019  --driveformat=virtio_blk  --machine=q35 --clone=no --customsparams="qemu_force_use_drive_expression = no"

Comment 2 menli@redhat.com 2020-05-08 08:25:17 UTC
Additional info:

Tested with build virtio-win-prewhql-0.1-181; the drive format can also hit this issue.
After hot-unplugging the device, QMP returns the correct "DEVICE_DELETED" message.

Comment 3 menli@redhat.com 2020-06-03 03:10:17 UTC
Hit a similar issue on win10-32-q35 (quite often this time).

build info:
qemu-kvm-4.2.0-22.module+el8.2.1+6758+cb8d64c2.x86_64
kernel-4.18.0-193.el8.x86_64
virtio-win-prewhql-184
seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64

Preparation:
qemu-img create -f raw /home/kvm_autotest_root/images/storage0.raw  10g

1. Start a guest:
MALLOC_PERTURB_=1  /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35 \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 30720  \
    -smp 32,maxcpus=32,cores=16,threads=1,dies=1,sockets=2  \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/win10-32-virtio.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-2,addr=0x0 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:1c:f4:cf:73:f9,id=idF3Sf2y,netdev=idFsUWtI,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=idFsUWtI \
    -blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=1,write-cache=on,bus=ide.0,unit=0  \
    -vnc :13  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -qmp tcp:0:1231,server,nowait \
    -monitor stdio \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x3,chassis=5

2. Hot-plug a disk:

{"execute": "blockdev-add", "arguments": {"node-name": "file_stg0", "driver": "file", "aio": "threads", "filename": "/home/kvm_autotest_root/images/storage0.raw", "cache": {"direct": true, "no-flush": false}}, "id": "Eya5CNPR"}
{"return": {}, "id": "Eya5CNPR"}
{"execute": "blockdev-add", "arguments": {"node-name": "drive_stg0", "driver": "raw", "cache": {"direct": true, "no-flush": false}, "file": "file_stg0"}, "id": "XhqojPvi"}
{"return": {}, "id": "XhqojPvi"}
{"execute": "device_add", "arguments": {"driver": "virtio-blk-pci", "id": "stg0", "drive": "drive_stg0", "write-cache": "on", "bus": "pcie_extra_root_port_0", "addr": "0x0"}, "id": "tX8tKy8x"}
{"return": {}, "id": "tX8tKy8x"}

3. Reboot the guest:
{"execute": "system_reset", "id": "8h8OX7y9"}

4. Hot-unplug the disk:
{"execute": "device_del", "arguments": {"id": "stg0"}, "id": "SAizT9xt"}


Actual result:
After step 4, two different results are hit:
1. The normal delete events return as follows, but the viostor controller still shows in Device Manager and Disk Management appears stuck:
{"timestamp": {"seconds": 1591152793, "microseconds": 625814}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/stg0/virtio-backend"}}
{"timestamp": {"seconds": 1591152793, "microseconds": 683652}, "event": "DEVICE_DELETED", "data": {"device": "stg0", "path": "/machine/peripheral/stg0"}}

2. No "DEVICE_DELETED" message is returned, and the disk still shows in the guest (a polling sketch for telling these apart follows).
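To tell the two outcomes apart in automation, one can poll the QMP socket for the event with a deadline. A self-contained sketch, assuming a raw socket on which qmp_capabilities has already been negotiated and device_del has been issued (the 30-second timeout is an arbitrary choice, not a value from the test framework):

import json
import select
import time

def wait_device_deleted(sock, dev_id, timeout=30.0):
    """Return True if DEVICE_DELETED for dev_id arrives in time (result 1);
    False if the unplug never completes (result 2)."""
    deadline = time.monotonic() + timeout
    buf = b""
    while time.monotonic() < deadline:
        ready, _, _ = select.select([sock], [], [], 1.0)
        if not ready:
            continue
        chunk = sock.recv(4096)
        if not chunk:
            break                          # QMP connection closed
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            msg = json.loads(line)
            # Only the second event carries "device"; the first one
            # (for the virtio-backend) has only "path".
            if (msg.get("event") == "DEVICE_DELETED"
                    and msg.get("data", {}).get("device") == dev_id):
                return True
    return False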


Thanks

Menghuan

Comment 5 qing.wang 2020-07-27 07:49:43 UTC
Hit the same issue on
4.18.0-226.el8.x86_64
qemu-kvm-core-4.2.0-30.module+el8.3.0+7298+c26a06b8.x86_64
seabios-1.13.0-1.module+el8.3.0+6423+e4cb6418.x86_64

Reproduced by automation:
python ConfigTest.py --testcase=block_hotplug.block_virtio.fmt_qcow2.default.with_plug.with_repetition.one_pci.q35 --iothread_scheme=roundrobin --nr_iothreads=2 --platform=x86_64 --guestname=Win2019 --driveformat=virtio_blk --nicmodel=virtio_net --imageformat=qcow2 --machines=q35 --customsparams="qemu_force_use_drive_expression = no\nimage_aio=threads\ncd_format=ide"

Comment 6 Li Xiaohui 2020-07-29 08:19:58 UTC
I hit a similar issue on rhel8.3.0-av hosts with migration:
after hot-unplugging the scsi and blk vdisks and then migrating, qemu on the dst host quit with this error:
(qemu) qemu-kvm: get_pci_config_device: Bad config data: i=0x6e read: 40 device: 0 cmask: ff wmask: 0 w1cmask:19
qemu-kvm: Failed to load PCIDevice:config
qemu-kvm: Failed to load pcie-root-port:parent_obj.parent_obj.parent_obj
qemu-kvm: error while loading state for instance 0x0 of device '0000:00:02.4/pcie-root-port'
qemu-kvm: load of migration failed: Invalid argument


I tried again without migration:
hot-unplug the scsi and blk vdisks, then check the disks on the src host via wmic commands as in comment 0; wmic couldn't return the whole result and hangs in cmd:
> wmic logicaldisk get drivetype,name,description & wmic diskdrive list brief /format:list
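Because the query itself can hang like this, an automated guest-side check needs a timeout around the command. A sketch using Python's subprocess, run inside the guest (the 60-second limit is an assumption, not a value from the test framework):

import subprocess

def query_disks(timeout=60):
    """Run the wmic disk query; TimeoutExpired reproduces the hang above."""
    cmd = "wmic logicaldisk get drivetype,name,description"
    try:
        result = subprocess.run(cmd, shell=True, capture_output=True,
                                text=True, timeout=timeout)
        return result.stdout
    except subprocess.TimeoutExpired:
        # wmic hung: the unplugged disk is most likely still registered.
        return None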


So from the above results, I think the devices on the src host don't match the dst host, and that is why migration failed.
The underlying issue isn't related to migration itself.

Comment 8 menli@redhat.com 2020-12-11 12:28:12 UTC
Hit the same issue on win2012-q35, win10 ovmf, and win8.1-32 q35.

qemu-kvm-5.1.0-15.module+el8.3.1+8772+a3fdeccd.x86_64
kernel-4.18.0-240.el8.x86_64
seabios-1.14.0-1.module+el8.3.0+7638+07cf13d2.x86_64
virtio-win-prewhql-0.1-191.iso

Comment 9 qing.wang 2020-12-17 02:15:48 UTC
Hit same issue on win2019
Red Hat Enterprise Linux release 8.4 Beta (Ootpa)
4.18.0-252.el8.dt4.x86_64
qemu-kvm-common-5.2.0-0.module+el8.4.0+8855+a9e237a9.x86_64
virtio-win-prewhql-0.1-191.iso

reproduced on automation:
python ConfigTest.py --testcase=block_hotplug.block_virtio.fmt_qcow2.default.with_plug.with_reboot.one_pci.q35,block_hotplug.block_virtio.fmt_raw.default.with_plug.with_shutdown.after_unplug.multi_pci.q35 --iothread_scheme=roundrobin --nr_iothreads=2 --platform=x86_64 --guestname=Win2019 --driveformat=virtio_blk --nicmodel=virtio_net --imageformat=qcow2 --machines=q35 --customsparams="vm_mem_limit = 12G\nimage_aio=threads\ncd_format=ide"

http://fileshare.englab.nay.redhat.com/pub/section2/images_backup/qlogs/%5b8.4-AV%5d-3-Q35+Seabios+Win2019+Qcow2+Virtio_blk+Local+aio_threads+qemu-5.2/test-results/020-type_specific.io-github-autotest-qemu.block_hotplug.block_virtio.fmt_qcow2.default.with_plug.with_reboot.one_pci/

Comment 10 qing.wang 2021-02-07 02:40:43 UTC
Hit same issue on win2019 
{'kvm_version': '4.18.0-268.el8.x86_64', 'qemu_version': 'qemu-kvm-core-5.2.0-2.module+el8.4.0+9186+ec44380f.x86_64'}

reproduced on automation:
python ConfigTest.py --testcase=block_hotplug.block_virtio.fmt_qcow2.default.with_plug.with_repetition.one_pci.q35  --iothread_scheme=roundrobin --nr_iothreads=2 --platform=x86_64 --guestname=Win2019 --driveformat=virtio_blk --nicmodel=virtio_net --imageformat=qcow2 --machines=q35  --firmware=ovmf --customsparams="vm_mem_limit = 12G\nimage_aio=native\ncd_format=ide"

http://fileshare.englab.nay.redhat.com/pub/section2/images_backup/qlogs/%5bQinwang%5d%5bW53%5d%5bVirtio_blk%5d%5b2019%5d-%5b8.4-AV%5d-11-Q35+Ovmf+Win2019+Qcow2+Virtio_blk+Local+aio_native+qemu-5.2/test-results/014-Host_RHEL.m8.u4.product_av.ovmf.qcow2.virtio_blk.up.virtio_net.Guest.Win2019.x86_64.io-github-autotest-qemu.block_hotplug.block_virtio.fmt_qcow2.with_plug.with_repetition.one_pci.q35/

Comment 11 menli@redhat.com 2021-03-01 01:25:54 UTC
Hit the same issue on win10-32/64.

RHEL-8.4.0-20210217.d.2
kernel-4.18.0-287.el8.dt3.x86_64
qemu-kvm-5.2.0-7.module+el8.4.0+9943+d64b3717.x86_64
seabios-1.14.0-1.module+el8.4.0+8855+a9e237a9.x86_64
virtio-win-prewhql-195

auto case: block_hotplug.block_virtio.fmt_raw.default.with_plug.with_repetition.one_pci
           block_hotplug.block_virtio.fmt_raw.default.with_plug.with_reboot.one_pci

Comment 12 Amnon Ilan 2021-03-13 12:16:30 UTC
*** Bug 1752465 has been marked as a duplicate of this bug. ***

Comment 14 Peixiu Hou 2021-06-09 05:32:23 UTC
Created attachment 1789496 [details]
Logs for job block_hotplug.block_scsi.fmt_qcow2.with_plug.with_repetition.one_pci.q35

Hit a similar issue with vioscsi on a Win2022 guest.

Used version:
kernel-4.18.0-310.el8.x86_64
qemu-kvm-6.0.0-18.module+el8.5.0+11243+5269aaa1.x86_64
seabios-bin-1.14.0-1.module+el8.4.0+8855+a9e237a9.noarch
virtio-win-prewhql-199

auto case: block_hotplug.block_scsi.fmt_qcow2.with_plug.with_repetition.one_pci.q35

The auto case logs are attached as log.zip.

Thanks~
Peixiu

Comment 15 yimsong 2021-08-02 16:14:23 UTC
Hit the same issue on win10-32 when running virtio-win-prewhql-205 viostor testing on rhel850-av.

    kernel-4.18.0-324.el8.x86_64
    qemu-kvm-6.0.0-25.module+el8.5.0+11890+8e7c3f51.x86_64
    virtio-win-prewhql-205
    seabios-bin-1.14.0-1.module+el8.4.0+8855+a9e237a9.noarch
    RHEL-8.5.0-20210727.n.0

auto case: block_hotplug.block_virtio.fmt_raw.with_plug.with_repetition.one_pci
           block_hotplug.block_virtio.fmt_raw.with_plug.with_reboot.one_pci
           block_hotplug.block_virtio.fmt_raw.with_plug.with_shutdown.after_unplug.one_pci

Comment 18 menli@redhat.com 2021-10-14 01:17:16 UTC
This bug will be closed automatically on 2021-11-08. We use this bug to track hotplug-related issues; could we reset the stale date?

Thanks
Menghuan

Comment 19 menli@redhat.com 2021-10-20 11:04:17 UTC
(In reply to menli from comment #18)
> This bug will be closed automatically on 2021-11-08. We use this bug to
> track hotplug-related issues; could we reset the stale date?
> 
> Thanks
> Menghuan

Let me update the stale date first, since this BZ is important and we need to keep it.

Comment 20 qing.wang 2021-11-22 10:55:17 UTC
Hit a similar issue on

Red Hat Enterprise Linux release 8.4 (Ootpa)
4.18.0-305.el8.x86_64
qemu-kvm-5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64
seabios-bin-1.13.0-2.module+el8.3.0+7353+9de0a3cc.noarch
edk2-ovmf-20200602gitca407c7246bf-4.el8.noarch
virtio-win-prewhql-0.1-214.iso


Reproduced with both seabios and ovmf:

python ConfigTest.py --testcase=block_hotplug.block_virtio.fmt_qcow2.default.with_plug.with_reboot.one_pci.q35  --platform=x86_64 --guestname=Win2016 --driveformat=virtio_scsi --nicmodel=virtio_net --imageformat=qcow2 --machines=q35  --customsparams="vm_mem_limit = 12G\nimage_aio=threads\ncd_format=ide"  --clone=no



python ConfigTest.py --testcase=block_hotplug.block_virtio.fmt_qcow2.default.with_plug.with_reboot.one_pci.q35  --platform=x86_64 --guestname=Win10,Win2019 --driveformat=virtio_scsi --nicmodel=virtio_net --imageformat=qcow2 --machines=q35  --customsparams="vm_mem_limit = 12G\nimage_aio=threads\ncd_format=ide" --firmware=ovmf  --clone=no 


Steps:
1. Boot the VM.
2. Hot-plug a disk.
3. Initialize the disk and format it (a diskpart sketch follows this list).
4. Reboot.
5. Hot-unplug the disk.
6. Shut down.
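For step 3, the initialize-and-format is typically done through diskpart. A minimal sketch driving it from Python inside the guest (must run elevated); "disk 1" and NTFS are assumptions here, since the real test selects whatever the hotplugged disk enumerates as:

import subprocess

# Hypothetical diskpart script: bring the new disk online, clean it,
# create one partition, quick-format it as NTFS, and assign a letter.
DISKPART_SCRIPT = """\
select disk 1
attributes disk clear readonly
online disk noerr
clean
create partition primary
format fs=ntfs quick
assign
"""

subprocess.run(["diskpart"], input=DISKPART_SCRIPT, text=True, check=True)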

Those cases show different phenomena: some failed on unplug (step 5), some failed on shutdown (step 6), but I think they are due to the same reason.

Comment 21 menli@redhat.com 2021-12-15 06:05:50 UTC
Hit the same issue on win8.1-32 (pc), win11 (ovmf), and win2022 (ovmf).

Packages:
qemu-kvm-6.1.0-7.el9.x86_64
kernel-5.14.0-24.el9.x86_64
seabios-bin-1.14.0-7.el9.noarch
edk2-ovmf-20210527gite1999b264f1f-7.el9.noarch
virtio-win-prewhql-0.1-215.iso

Comment 22 qing.wang 2022-01-13 10:12:17 UTC
Hit the same issue on

Red Hat Enterprise Linux release 9.0 Beta (Plow)
5.14.0-30.el9.x86_64
qemu-kvm-6.2.0-1.el9.x86_64
seabios-1.15.0-1.el9.x86_64
edk2-ovmf-20210527gite1999b264f1f-7.el9.noarch
virtio-win-prewhql-0.1-215.iso


python ConfigTest.py --testcase=block_hotplug.block_virtio.fmt_qcow2.default.with_plug.with_repetition.one_pci.i440fx --iothread_scheme=roundrobin --nr_iothreads=2 --platform=x86_64 --guestname=Win2019 --driveformat=virtio_scsi --nicmodel=virtio_net --imageformat=qcow2 --machines=i440fx --customsparams="vm_mem_limit = 12G\nimage_aio=threads\ncd_format=ide"

Comment 24 menli@redhat.com 2022-05-06 07:40:10 UTC
Hit the same issue on win10 21H2 (both 32- and 64-bit).

Host:
qemu-kvm-6.2.0-11.el9_0.2.x86_64
kernel-5.14.0-70.7.1.el9_0.x86_64
seabios-bin-1.15.0-1.el9.noarch
virtio-win-prewhql-219.iso

Comment 29 menli@redhat.com 2022-08-10 09:49:37 UTC
One thing to clarify: for Windows hotplug/unplug issues, we use this bug to track similar issues that are not caused by the 'wmic' command. In fact, 'wmic' works well in our testing, although it will be deprecated.

The root cause is a failure in 'hotplug' or 'unplug'; most cases are related to 'unplug'.
This may result in different behavior, e.g. comment 0 and comment 3.



Thanks
Menghuan

Comment 30 menli@redhat.com 2022-08-23 09:21:30 UTC
Hit the same issue as comment 3.


Packages
kernel-5.14.0-145.el9.x86_64
qemu-kvm-7.0.0-9.el9.x86_64
seabios-bin-1.16.0-4.el9.noarch
RHEL-9.1.0-20220814.1
virtio-win-prewhql-224

Comment 31 menli@redhat.com 2022-09-06 06:37:09 UTC
Hi Vadim,

Based on the last comment, we can still hit this issue, so could you please help extend the 'Stale Date'?


Thanks
Menghuan

Comment 32 menli@redhat.com 2022-11-29 06:59:52 UTC
Hit the same issue as comment 3.

packages:
kernel-5.14.0-162.6.1.el9_1.x86_64
qemu-kvm-7.0.0-13.el9.x86_64
edk2-ovmf-20220526git16779ede2d36-3.el9.noarch
seabios-bin-1.16.0-4.el9.noarch
virtio-win-prewhql-229.iso

Comment 33 menli@redhat.com 2023-06-30 07:05:20 UTC
Hit the same issue as comment 3 on win10 (22H2) + q35.

packages:
    qemu-kvm-8.0.0-5.el9.x86_64
    kernel-5.14.0-331.el9.x86_64
    seabios-bin-1.16.1-1.el9.noarch
    virtio-win-prewhql-237

