Bug 1669931 - Hot unplug of virtio-serial device sometimes fails on win2019 and win10 guests
Summary: Hot unplug of virtio-serial device sometimes fails on win2019 and win10 guests
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Julia Suvorova
QA Contact: liunana
URL:
Whiteboard:
Depends On:
Blocks: 1744438
 
Reported: 2019-01-28 04:33 UTC by xiagao
Modified: 2023-03-14 19:52 UTC (History)
CC List: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-02 07:27:21 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
qmpmonitor1-win8-64.log (322.13 KB, text/plain), 2021-02-09 06:47 UTC, dehanmeng
qmpmonitor1-win2019-64.log (322.89 KB, text/plain), 2021-02-09 06:49 UTC, dehanmeng
Win2019_qemu_command_line.sh (3.25 KB, text/plain), 2021-02-09 06:50 UTC, dehanmeng
Win8-64_qemu-_command_line.sh (3.24 KB, text/plain), 2021-02-09 06:51 UTC, dehanmeng

Description xiagao 2019-01-28 04:33:51 UTC
Description of problem:
Hot unplugging the virtio-serial device via a QMP command sometimes fails.

Version-Release number of selected component (if applicable):
qemu-kvm-3.1.0-4.module+el8+2676+33bd6e2b.x86_64
kernel-4.18.0-62.el8.x86_64
virtio-win-prewhql-0.1-163

How reproducible:
1/10

Steps to Reproduce:
1. Boot the guest (q35 machine type) with a virtio-serial device, with the driver installed in the guest:
-device virtio-serial-pci,id=virtio-serial1,max_ports=31,bus=pcie-root-port-4  \
-chardev socket,id=channel1,host=127.0.0.1,port=2222,server,nowait  \
-device virtserialport,bus=virtio-serial1.0,chardev=channel1,name=com.redhat.rhevm.vdsm1,id=port1 \

2. Hot unplug the serial port and device via QMP:

{ 'execute': 'device_del', 'arguments': {'id': 'port1' }}
{"timestamp": {"seconds": 1548644403, "microseconds": 427927}, "event": "DEVICE_DELETED", "data": {"device": "port1", "path": "/machine/peripheral/port1"}}
{"return": {}}

{"execute":"device_del","arguments":{"id":"virtio-serial1"}}
{"return": {}}
{"timestamp": {"seconds": 1548644412, "microseconds": 505019}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-serial1/virtio-backend"}}
{"timestamp": {"seconds": 1548644412, "microseconds": 556297}, "event": "DEVICE_DELETED", "data": {"device": "virtio-serial1", "path": "/machine/peripheral/virtio-serial1"}}

3. Hot plug the serial device and port again:
{"execute":"device_add","arguments":{"driver":"virtio-serial-pci","id":"virtio-serial1","max_ports":"31","bus":"pcie-root-port-4"}}
{"return": {}}

{"execute":"device_add","arguments":{"driver":"virtserialport","name":"com.redhat.rhevm.vdsm1","chardev":"channel1","bus":"virtio-serial1.0","id":"port1","nr":"1"}}
{"return": {}}

4. Repeat steps 2-3 several times (I tried ten times); a sketch automating this loop is included after the additional info below.

Actual results:
Sometimes the virtio-serial device cannot be hot unplugged.
No "DEVICE_DELETED" QMP event is received.
Checking the guest and "info qtree", the virtio-serial device still exists.


Expected results:
Hot unplugging the virtio-serial device succeeds.

Additional info:
1. Not hit with a win8.1 guest.
2. Can also reproduce with qemu-kvm-2.12.0-57.module+el8+2683+02b3b955.x86_64
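
For reference, a minimal Python sketch that automates steps 2-4 over the QMP monitor and flags iterations where no "DEVICE_DELETED" event arrives for the controller. It assumes the QMP monitor is exposed on a TCP socket at 127.0.0.1:1234 (as in the qemu command line in comment 10); the Qmp helper class and its method names are only illustrative, not part of the test framework.

#!/usr/bin/env python3
# Sketch: repeat the unplug/replug loop and report when the controller's
# DEVICE_DELETED event never arrives.
import json, select, socket, time

class Qmp:
    def __init__(self, host="127.0.0.1", port=1234):
        self.sock = socket.create_connection((host, port))
        self.buf = b""
        self.read_msg(10)                             # greeting banner
        self.cmd({"execute": "qmp_capabilities"})

    def cmd(self, obj):
        self.sock.sendall(json.dumps(obj).encode() + b"\n")

    def read_msg(self, timeout):
        # Return one QMP message, or None if nothing arrives within timeout.
        deadline = time.time() + timeout
        while b"\n" not in self.buf:
            remaining = deadline - time.time()
            if remaining <= 0 or not select.select([self.sock], [], [], remaining)[0]:
                return None
            self.buf += self.sock.recv(4096)
        line, self.buf = self.buf.split(b"\n", 1)
        return json.loads(line)

    def wait_deleted(self, dev_id, timeout=60):
        # Consume QMP messages until DEVICE_DELETED for dev_id, or time out.
        deadline = time.time() + timeout
        while time.time() < deadline:
            msg = self.read_msg(deadline - time.time())
            if msg and msg.get("event") == "DEVICE_DELETED" \
                   and msg.get("data", {}).get("device") == dev_id:
                return True
        return False

qmp = Qmp()
for i in range(10):
    qmp.cmd({"execute": "device_del", "arguments": {"id": "port1"}})
    qmp.wait_deleted("port1")
    qmp.cmd({"execute": "device_del", "arguments": {"id": "virtio-serial1"}})
    if not qmp.wait_deleted("virtio-serial1"):
        print("iteration %d: no DEVICE_DELETED for virtio-serial1" % i)
        break
    qmp.cmd({"execute": "device_add", "arguments": {
        "driver": "virtio-serial-pci", "id": "virtio-serial1",
        "max_ports": "31", "bus": "pcie-root-port-4"}})
    qmp.cmd({"execute": "device_add", "arguments": {
        "driver": "virtserialport", "id": "port1", "chardev": "channel1",
        "bus": "virtio-serial1.0", "name": "com.redhat.rhevm.vdsm1", "nr": "1"}})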

Comment 1 Li Xiaohui 2019-01-28 06:02:15 UTC
Hi all,
I reported a similar bug a while ago; we can confirm whether they are the same problem.
https://bugzilla.redhat.com/show_bug.cgi?id=1658144

Comment 2 xiagao 2019-01-28 06:25:15 UTC
Thanks xiaohui, but they seem different:
1. In your bug, the steps only hotplug/unplug the serial port.
2. I didn't get the error prompt in qemu: (qemu) qemu-kvm: virtio-serial-bus: Guest failure in adding port 1 for device virtio-serial0.0

If anything here is wrong, please feel free to correct me.

Comment 5 Luiz Capitulino 2019-01-29 21:27:48 UTC
After talking to Pankaj and reading this BZ more carefully, I think
this is hardly a QEMU issue.

device_del works with guest OS cooperation. If the guest OS refuses
to hot-unplug the device or if the guest is slow to respond, then
device_del will silently fail. Which is what seems to be happening here.

If this eventually works (that is, after failing to hot-unplug once,
you try again and it works), then I'd think this is not a bug at all.

In any case, I think we need the windows virtio team to debug the
guest OS side of this bug and confirm if there's any issue on the
guest OS driver. Re-assigning.
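
Note that the empty {"return": {}} only acknowledges that the unplug request was issued; the actual removal is reported later through the DEVICE_DELETED event, or can be seen by the device disappearing from /machine/peripheral. A rough sketch of polling for that and re-issuing device_del when the guest has not released the device, assuming a QMP TCP socket at 127.0.0.1:1234 (the qmp() and still_present() helpers are only illustrative):

#!/usr/bin/env python3
# Sketch: treat device_del as a request only; poll /machine/peripheral via
# qom-list to see whether the guest actually released the device, and
# re-issue device_del if it did not.
import json, socket, time

sock = socket.create_connection(("127.0.0.1", 1234))
stream = sock.makefile("rw")
stream.readline()                                   # greeting banner

def qmp(cmd, **args):
    req = {"execute": cmd}
    if args:
        req["arguments"] = args
    stream.write(json.dumps(req) + "\n")
    stream.flush()
    while True:                                     # skip async events
        msg = json.loads(stream.readline())
        if "return" in msg or "error" in msg:
            return msg

def still_present(dev_id):
    props = qmp("qom-list", path="/machine/peripheral").get("return", [])
    return any(p["name"] == dev_id for p in props)

qmp("qmp_capabilities")
qmp("device_del", id="virtio-serial1")
for attempt in range(3):
    time.sleep(30)                                  # give the guest time to respond
    if not still_present("virtio-serial1"):
        print("unplug completed")
        break
    print("attempt %d: device still present, re-issuing device_del" % attempt)
    qmp("device_del", id="virtio-serial1")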

Comment 6 xiagao 2019-01-31 02:19:45 UTC
(In reply to Luiz Capitulino from comment #5)
> After talking to Pankaj and reading this BZ more carefully, I think
> this is hardly a QEMU issue.
> 
> device_del works with guest OS cooperation. If the guest OS refuses
> to hot-unplug the device or if the guest is slow to respond, then
> device_del will silently fail. Which is what seems to be happening here.
> 
> If this eventually works (that is, after failing to hot-unplug once,
> you try again and it works), then I'd think this is not a bug at all.

After failing to hot-unplug once, I try again but it doesn't work.

> 
> In any case, I think we need the windows virtio team to debug the
> guest OS side of this bug and confirm if there's any issue on the
> guest OS driver. Re-assigning.

Comment 7 lijin 2019-01-31 05:55:21 UTC
This may be a duplicate of bug 1523017.

Comment 8 Luiz Capitulino 2019-01-31 13:20:52 UTC
(In reply to xiagao from comment #6)

> > If this eventually works (that is, after failing to hot-unplug once,
> > you try again and it works), then I'd think this is not a bug at all.
> 
> After failing to hot-unplug once, I try again but it doesn't work.

Yeah, if it never works after the first failure, then I think it's a bug.

Comment 9 Gal Hammer 2019-02-14 12:37:45 UTC
Can you please add your qemu's command line?

Is there an application running inside the guest which is reading from or writing to the virtio-serial port during the unplug?

Thanks.

Comment 10 xiagao 2019-02-15 02:23:50 UTC
1. qemu cmd line
/usr/libexec/qemu-kvm -name $1 -enable-kvm -m 3G -smp 4,maxcpus=8,sockets=8,cores=1,threads=1 -nodefaults -cpu 'Skylake-Server',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time -rtc base=localtime,driftfix=none -boot order=cd,menu=on -monitor stdio -qmp tcp:0:1234,server,nowait -M q35 -vga std -vnc :10 \

-device pcie-root-port,id=pcie-root-port-6,slot=6,chassis=6,bus=pcie.0  \
-object secret,id=sec0,data=xiagao \
-blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=$1,node-name=system_disk_file \
-blockdev driver=luks,key-secret=sec0,node-name=system_disk,file=system_disk_file \
-device virtio-blk-pci,bus=pcie-root-port-6,drive=system_disk,id=disk_system,werror=stop,rerror=stop,serial=MYDISK-1 \
        
-device pcie-root-port,id=pcie-root-port-7,slot=7,chassis=7,addr=0x7,bus=pcie.0  \
-device virtio-net-pci,mac=9a:d0:d1:d2:d3:d4,id=net1,vectors=4,netdev=hostnet1,bus=pcie-root-port-7,addr=0x0  \
-netdev tap,id=hostnet1,vhost=on \
-drive file=/home/kvm_autotest_root/iso/ISO/Win2019/en_windows_server_2019_x64_dvd_4cb967d8.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,drive=drive-ide0-1-0,id=ide0-1-0 \
-cdrom /home/kvm_autotest_root/iso/windows/virtio-win.iso.el8 \


-device pcie-root-port,id=pcie-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0  \
-device pcie-root-port,id=pcie-root-port-5,slot=5,chassis=5,addr=0x5,bus=pcie.0  \

-device virtio-serial-pci,id=virtio-serial1,max_ports=31,bus=pcie-root-port-4  \
-chardev socket,id=channel1,host=127.0.0.1,port=2222,server,nowait  \
-device virtserialport,bus=virtio-serial1.0,chardev=channel1,name=com.redhat.rhevm.vdsm1,id=port1 \

2. There is no application running inside the guest during the test.

Comment 17 xiagao 2020-01-08 08:55:09 UTC
Hit the same issue with the balloon device: tried 5 times and hit it twice.

1. hotplug balloon device.
{"execute": "device_add", "arguments": {"id": "balloon0", "driver": "virtio-balloon-pci", "bus": "pci.6", "addr": "0x0"}, "id": "SI2l9wJY"}
{"return": {}, "id": "SI2l9wJY"}
2. unplug it.
{'execute': 'device_del', 'arguments': {'id': 'balloon0'}, 'id': 'G1UUZxyI'}
{"return": {}, "id": "G1UUZxyI"}
{"timestamp": {"seconds": 1578473113, "microseconds": 406587}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/balloon0/virtio-backend"}}
{"timestamp": {"seconds": 1578473113, "microseconds": 414253}, "event": "DEVICE_DELETED", "data": {"device": "balloon0", "path": "/machine/peripheral/balloon0"}}

3. hotplug it again.
{"execute": "device_add", "arguments": {"id": "balloon0", "driver": "virtio-balloon-pci", "bus": "pci.6", "addr": "0x0"}, "id": "SI2l9wJY"}
{"return": {}, "id": "SI2l9wJY"}
4. unplug it again, and wait 5 minutes
{'execute': 'device_del', 'arguments': {'id': 'balloon0'}, 'id': 'G1UUZxyI'}
{"return": {}, "id": "G1UUZxyI"}
----------------------------> no "DEVICE_DELETED" event, and the device is still shown in the guest's Device Manager.

pkg version:
kernel-4.18.0-167.el8.x86_64
qemu-kvm-4.2.0-4.module+el8.2.0+5220+e82621dc.x86_64
virtio-win-prewhql-175
seabios-1.12.0-5.module+el8.2.0+4793+b09dd2fb.x86_64

qemu cmd line:
/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1 \
    -m 14336  \
    -smp 24,maxcpus=24,cores=12,threads=1,dies=1,sockets=2  \
    -cpu 'Skylake-Server',hv_stimer,hv_synic,hv_vpindex,hv_reset,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv-tlbflush,+kvm_pv_unhalt  \
    -device pvpanic,ioport=0x505,id=idYZctHp \
    -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x3 \
    -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x3.0x1 \
    -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x3.0x2 \
    -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x3.0x3 \
    -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x3.0x4 \
    -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x3.0x5 \
    -device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x3.0x6 \
    -device pcie-root-port,port=0x17,chassis=8,id=pci.8,bus=pcie.0,addr=0x3.0x7 \
    -device qemu-xhci,id=usb1,bus=pci.1,addr=0x0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.2,addr=0x0 \
    -blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/win2019-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -device virtio-net-pci,mac=9a:b5:fb:1e:d9:12,id=idfW0CSo,netdev=idks1hFn,bus=pci.3,addr=0x0  \
    -netdev tap,id=idks1hFn,vhost=on \
    -blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device scsi-cd,id=cd1,drive=drive_cd1,write-cache=on \
    -blockdev node-name=file_virtio,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/windows/virtio-win-prewhql-0.1-175.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_virtio,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_virtio \
    -device scsi-cd,id=virtio,drive=drive_virtio,write-cache=on \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm  -monitor stdio -qmp tcp:0:4444,server,nowait \

Comment 18 xiagao 2020-01-08 09:24:46 UTC
(In reply to xiagao from comment #17)
> Hit the same issue with the balloon device: tried 5 times and hit it twice.
> 
> 1. hotplug balloon device.
> {"execute": "device_add", "arguments": {"id": "balloon0", "driver":
> "virtio-balloon-pci", "bus": "pci.6", "addr": "0x0"}, "id": "SI2l9wJY"}
> {"return": {}, "id": "SI2l9wJY"}
> 2. unplug it.
> {'execute': 'device_del', 'arguments': {'id': 'balloon0'}, 'id': 'G1UUZxyI'}
> {"return": {}, "id": "G1UUZxyI"}
> {"timestamp": {"seconds": 1578473113, "microseconds": 406587}, "event":
> "DEVICE_DELETED", "data": {"path":
> "/machine/peripheral/balloon0/virtio-backend"}}
> {"timestamp": {"seconds": 1578473113, "microseconds": 414253}, "event":
> "DEVICE_DELETED", "data": {"device": "balloon0", "path":
> "/machine/peripheral/balloon0"}}

After hotplugging the balloon device, enlarge/shrink the memory:
{'execute': 'balloon', 'arguments': {'value': 7808745472}, 'id': '1KXUo23E'}
{"return": {}, "id": "1KXUo23E"}
{"timestamp": {"seconds": 1578475113, "microseconds": 598808}, "event": "BALLOON_CHANGE", "data": {"actual": 15030288384}}
{"timestamp": {"seconds": 1578475114, "microseconds": 600418}, "event": "BALLOON_CHANGE", "data": {"actual": 14331936768}}
{"timestamp": {"seconds": 1578475115, "microseconds": 601522}, "event": "BALLOON_CHANGE", "data": {"actual": 13627293696}}
{"timestamp": {"seconds": 1578475116, "microseconds": 600966}, "event": "BALLOON_CHANGE", "data": {"actual": 12916359168}}
{"timestamp": {"seconds": 1578475118, "microseconds": 607602}, "event": "BALLOON_CHANGE", "data": {"actual": 11504975872}}
{"timestamp": {"seconds": 1578475119, "microseconds": 606849}, "event": "BALLOON_CHANGE", "data": {"actual": 10812915712}}
{"timestamp": {"seconds": 1578475121, "microseconds": 610098}, "event": "BALLOON_CHANGE", "data": {"actual": 9458155520}}
{"timestamp": {"seconds": 1578475122, "microseconds": 614419}, "event": "BALLOON_CHANGE", "data": {"actual": 8772386816}}
{"timestamp": {"seconds": 1578475124, "microseconds": 30988}, "event": "BALLOON_CHANGE", "data": {"actual": 7808745472}}

> 
> 3. hotplug it again.
> {"execute": "device_add", "arguments": {"id": "balloon0", "driver":
> "virtio-balloon-pci", "bus": "pci.6", "addr": "0x0"}, "id": "SI2l9wJY"}
> {"return": {}, "id": "SI2l9wJY"}
> 4. unplug it again, and wait 5 minutes
> {'execute': 'device_del', 'arguments': {'id': 'balloon0'}, 'id': 'G1UUZxyI'}
> {"return": {}, "id": "G1UUZxyI"}
> ----------------------------> no "DEVICE_DELETED" event, and the device is
> still shown in the guest's Device Manager.
> 
> [...]

Comment 21 Ademar Reis 2020-02-05 22:54:00 UTC
QEMU has been recently split into sub-components and as a one-time operation to avoid breakage of tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks

Comment 37 dehanmeng 2021-02-09 03:26:32 UTC
Hit this issue on win2019-64 (q35) and win8-64 (q35) guests on a RHEL 9 host.
Version of pkgs:
    kernel-5.11.0-0.rc5.136.el9.x86_64
    qemu-kvm-5.2.0-3.el9.x86_64
    virtio-win-prewhql-193
    seabios-bin-1.14.0-1.el9.1.noarch
    RHEL-9.0.0-20210128.6
Checked the qmp.log and found that there was just one DEVICE_DELETED event:
2021-02-08 19:43:39: {"execute": "device_del", "arguments": {"id": "port1"}, "id": "51AUCjtl"}
2021-02-08 19:43:39: {"return": {}, "id": "51AUCjtl"}
2021-02-08 19:43:40: {"timestamp": {"seconds": 1612831419, "microseconds": 871868}, "event": "DEVICE_DELETED", "data": {"device": "port1", "path": "/machine/peripheral/port1"}}
and virtio-serial-pci was still present in the output of the command '{"execute": "human-monitor-command", "arguments": {"command-line": "info qtree"}, "id": "ib6Xun8R"}'

Comment 38 ybendito 2021-02-09 06:38:27 UTC
(In reply to dehanmeng from comment #37)
> Hit this issue on win2019-64 (q35) and win8-64 (q35) guests on a RHEL 9 host.
> Version of pkgs:
>     kernel-5.11.0-0.rc5.136.el9.x86_64
>     qemu-kvm-5.2.0-3.el9.x86_64
>     virtio-win-prewhql-193
>     seabios-bin-1.14.0-1.el9.1.noarch
>     RHEL-9.0.0-20210128.6
> Checked the qmp.log and found that there was just one DEVICE_DELETED event:
> 2021-02-08 19:43:39: {"execute": "device_del", "arguments": {"id": "port1"},
> "id": "51AUCjtl"}
> 2021-02-08 19:43:39: {"return": {}, "id": "51AUCjtl"}
> 2021-02-08 19:43:40: {"timestamp": {"seconds": 1612831419, "microseconds":
> 871868}, "event": "DEVICE_DELETED", "data": {"device": "port1", "path":
> "/machine/peripheral/port1"}}
> and virtio-serial-pci still in the output from command '{"execute":
> "human-monitor-command", "arguments": {"command-line": "info qtree"}, "id":
> "ib6Xun8R"}'

First, I do not see exactly what the relationship is between device id "ib6Xun8R", device id "51AUCjtl", and virtio-serial-pci.
I'd suggest attaching the full text of 'info qtree' and 'info pci'.

Assuming there is some relation:
What is the reproduction rate?
Does it run under avocado, libvirt, or raw qemu?
For avocado, please refer to the test logs; for qemu, attach the full command line; for libvirt, please attach the profile dump.
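
For reference, a small Python sketch of collecting the full 'info qtree' and 'info pci' text via human-monitor-command so it can be attached here; it assumes the QMP monitor is exposed on a TCP socket at 127.0.0.1:4444 (as in the command line in comment 17), and the qmp() helper is only illustrative:

#!/usr/bin/env python3
# Sketch: dump 'info qtree' and 'info pci' via human-monitor-command and
# write each to a local text file for attaching to the bug.
import json, socket

sock = socket.create_connection(("127.0.0.1", 4444))
stream = sock.makefile("rw")
stream.readline()                                   # greeting banner

def qmp(cmd, args=None):
    req = {"execute": cmd}
    if args:
        req["arguments"] = args
    stream.write(json.dumps(req) + "\n")
    stream.flush()
    while True:                                     # skip async events
        msg = json.loads(stream.readline())
        if "return" in msg or "error" in msg:
            return msg

qmp("qmp_capabilities")
for hmp_cmd in ("info qtree", "info pci"):
    out = qmp("human-monitor-command", {"command-line": hmp_cmd})["return"]
    fname = hmp_cmd.replace(" ", "_") + ".txt"
    with open(fname, "w") as f:
        f.write(out)
    print("wrote %s (%d bytes)" % (fname, len(out)))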

Comment 39 dehanmeng 2021-02-09 06:43:51 UTC
(In reply to ybendito from comment #38)

> First, I do not see exactly what the relationship is between device id
> "ib6Xun8R", device id "51AUCjtl", and virtio-serial-pci.
> I'd suggest attaching the full text of 'info qtree' and 'info pci'.
> 
> Assuming there is some relation:
> What is the reproduction rate?
> Does it run under avocado, libvirt, or raw qemu?
> For avocado, please refer to the test logs; for qemu, attach the full
> command line; for libvirt, please attach the profile dump.

Okay, I'll collect that info and update later. Thanks.

Comment 40 dehanmeng 2021-02-09 06:47:07 UTC
Created attachment 1755852 [details]
qmpmonitor1-win8-64.log

The whole QMP command/event log is here.

Comment 41 dehanmeng 2021-02-09 06:49:38 UTC
Created attachment 1755853 [details]
qmpmonitor1-win2019-64.log

Comment 42 dehanmeng 2021-02-09 06:50:20 UTC
Created attachment 1755854 [details]
Win2019_qemu_command_line.sh

Comment 43 dehanmeng 2021-02-09 06:51:30 UTC
Created attachment 1755855 [details]
Win8-64_qemu-_command_line.sh

Comment 44 dehanmeng 2021-02-09 06:54:18 UTC
(In reply to ybendito from comment #38)

> What is reproduction rate?

The reproduction rate is 100%.
All the info is above; if you need anything else, please needinfo me.

Thanks
Dehan

Comment 53 liunana 2021-03-09 13:13:20 UTC
Can reproduce this bug with a Win2016 guest.


Test Environments:
    amd-milan-02.ml3.eng.bos.redhat.com
    kernel-4.18.0-295.el8.x86_64
    qemu-kvm-5.2.0-10.module+el8.4.0+10217+cbdd2152.x86_64


Please help to check this, thanks.


Best regards
Liu Nana

Comment 56 dehanmeng 2021-06-08 08:27:43 UTC
Hit this issue again on win2019-64/win10-64
PKGS:
    qemu-kvm-6.0.0-16.module+el8.5.0+10848+2dccc46d.x86_64
    kernel-4.18.0-305.1.el8.x86_64
    seabios-bin-1.14.0-1.module+el8.4.0+8855+a9e237a9.noarch
    virtio-win-prewhql-199
    RHEL-8.5.0-20210531.n.0

Comment 57 dehanmeng 2021-06-22 03:29:29 UTC
Hit this issue again 
Version:
qemu-kvm-4.2.0-52.module+el8.5.0+11386+ef5875dd.x86_64
kernel-4.18.0-310.el8.x86_64
seabios-1.13.0-2.module+el8.3.0+7353+9de0a3cc.x86_64
virtio-win-prewhql-202

RHEL850_host_win10_guest
21. http://fileshare.englab.nay.redhat.com/pub/logs/vioser_rhel850-slow-win10-64/test-results/22-Host_RHEL.m8.u5.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.x86_64.io-github-autotest-qemu.virtio_port_hotplug.hotplug_port_pci.with_shutdown_after_plug.q35/debug.log
23. http://fileshare.englab.nay.redhat.com/pub/logs/vioser_rhel850-slow-win10-64/test-results/23-Host_RHEL.m8.u5.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.x86_64.io-github-autotest-qemu.virtio_port_hotplug.hotplug_port_pci.with_live_migration_after_unplug.q35/debug.log
24. http://fileshare.englab.nay.redhat.com/pub/logs/vioser_rhel850-slow-win10-64/test-results/24-Host_RHEL.m8.u5.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.x86_64.io-github-autotest-qemu.virtio_port_hotplug.hotplug_port_pci.repeat_pci_in_loop.q35/debug.log

Version:
qemu-kvm-6.0.0-5.el9.x86_64
kernel-5.13.0-0.rc4.33.el9.x86_64
seabios-bin-1.14.0-4.el9.noarch
virtio-win-prewhql-202

RHEL9_host_win2019_guest
23. http://fileshare.englab.nay.redhat.com/pub/logs/vioser_rhel9_win2019/test-results/23-Host_RHEL.m9.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win2019.x86_64.io-github-autotest-qemu.virtio_port_hotplug.hotplug_port_pci.with_shutdown_after_unplug.q35/debug.log
25. http://fileshare.englab.nay.redhat.com/pub/logs/vioser_rhel9_win2019/test-results/25-Host_RHEL.m9.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win2019.x86_64.io-github-autotest-qemu.virtio_port_hotplug.hotplug_port_pci.with_live_migration_after_unplug.q35/debug.log
26. http://fileshare.englab.nay.redhat.com/pub/logs/vioser_rhel9_win2019/test-results/26-Host_RHEL.m9.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win2019.x86_64.io-github-autotest-qemu.virtio_port_hotplug.hotplug_port_pci.repeat_pci_in_loop.q35/debug.log

Comment 58 RHEL Program Management 2021-08-15 07:26:50 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 61 dehanmeng 2021-08-30 07:04:20 UTC
Hit this issue on win11/win2022 as well.
version:
kernel-4.18.0-335.el8.x86_64
qemu-kvm-6.0.0-29.module+el8.5.0+12386+43574bac.x86_64
seabios-bin-1.14.0-1.module+el8.4.0+8855+a9e237a9.noarch
RHEL-8.5.0-20210825.n.0
virtio-win-prewhql-207

win2022 guest:
1. http://fileshare.englab.nay.redhat.com/pub/logs/win2022-serial-function/test-results/24-Host_RHEL.m8.u5.product_av.qcow2.virtio_scsi.up.virtio_net.Guest.Win2022.x86_64.io-github-autotest-qemu.virtio_port_hotplug.hotplug_port_pci.with_shutdown_after_unplug.q35/
2. http://fileshare.englab.nay.redhat.com/pub/logs/win2022-serial-function/test-results/26-Host_RHEL.m8.u5.product_av.qcow2.virtio_scsi.up.virtio_net.Guest.Win2022.x86_64.io-github-autotest-qemu.virtio_port_hotplug.hotplug_port_pci.with_live_migration_after_unplug.q35/
3. http://fileshare.englab.nay.redhat.com/pub/logs/win2022-serial-function/test-results/27-Host_RHEL.m8.u5.product_av.qcow2.virtio_scsi.up.virtio_net.Guest.Win2022.x86_64.io-github-autotest-qemu.virtio_port_hotplug.hotplug_port_pci.repeat_pci_in_loop.q35/

win11 guest
1. http://fileshare.englab.nay.redhat.com/pub/logs/win11-serial-function/test-results/14-Host_RHEL.m8.u5.product_av.qcow2.virtio_scsi.up.virtio_net.Guest.Win11.x86_64.io-github-autotest-qemu.virtio_port_hotplug.hotplug_port_pci.with_shutdown_after_unplug.q35/
2. http://fileshare.englab.nay.redhat.com/pub/logs/win11-serial-function/test-results/16-Host_RHEL.m8.u5.product_av.qcow2.virtio_scsi.up.virtio_net.Guest.Win11.x86_64.io-github-autotest-qemu.virtio_port_hotplug.hotplug_port_pci.with_live_migration_after_unplug.q35/
3. http://fileshare.englab.nay.redhat.com/pub/logs/win11-serial-function/test-results/17-Host_RHEL.m8.u5.product_av.qcow2.virtio_scsi.up.virtio_net.Guest.Win11.x86_64.io-github-autotest-qemu.virtio_port_hotplug.hotplug_port_pci.repeat_pci_in_loop.q35/

Comment 62 liunana 2021-08-30 08:14:54 UTC
(In reply to Luiz Capitulino from comment #5)
> After talking to Pankaj and reading this BZ more carefully, I think
> this is hardly a QEMU issue.
> 
> device_del works with guest OS cooperation. If the guest OS refuses
> to hot-unplug the device or if the guest is slow to respond, then
> device_del will silently fail. Which is what seems to be happening here.
> 
> If this eventually works (that is, after failing to hot-unplug once,
> you try again and it works), then I'd think this is not a bug at all.
> 
> In any case, I think we need the windows virtio team to debug the
> guest OS side of this bug and confirm if there's any issue on the
> guest OS driver. Re-assigning.

Hi,


Would you please give some advice on how to debug this issue inside the Windows guest OS?


Best regards
Liu Nana

Comment 64 John Ferlan 2021-09-09 13:52:16 UTC
Bulk update: Move RHEL-AV bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

Comment 69 RHEL Program Management 2022-03-02 07:27:21 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 70 liunana 2022-03-02 12:33:33 UTC
This should have been fixed in qemu 6.2; I can't reproduce this bug any more.

Test Env:
    kernel-5.14.0-69.el9.x86_64
    qemu-kvm-6.2.0-10.el9.x86_64
    edk2-ovmf-20220126gitbb1bba3d77-3.el9.noarch
Guest: Win11



Could you please help to check this?
If so, I would like to close this bug as CURRENTRELEASE.

Thanks.



Best regards
Liu Nana

Comment 71 liunana 2022-03-21 07:50:47 UTC
Closing this as CURRENTRELEASE according to Comment 70. Feel free to change this if you have any other suggestions, thanks.

Comment 72 Julia Suvorova 2022-03-22 22:23:14 UTC
(In reply to liunana from comment #71)
> Closing this as CURRENTRELEASE according to Comment 70. Feel free to change
> this if you have any other suggestions, thanks.

Alright, seems fine to me.

Best regards, Julia Suvorova.

