Bug 1752465 - [virtio-win][viostor][q35] Disk status will be changed to Unknown after hotplug/unplug it several times in windows guest
Summary: [virtio-win][viostor][q35] Disk status will be changed to Unknown after hotpl...
Keywords:
Status: CLOSED DUPLICATE of bug 1833187
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: virtio-win
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Vadim Rozenfeld
QA Contact: menli@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1744438 1771318
 
Reported: 2019-09-16 12:13 UTC by Xueqiang Wei
Modified: 2021-06-18 15:18 UTC
CC List: 18 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-03-13 12:16:30 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: none

Description Xueqiang Wei 2019-09-16 12:13:59 UTC
Description of problem:

Disk status changes to Unknown after repeatedly hot plugging/unplugging the disk in a Windows guest.

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           30 GB      0 B
  Disk 1    Unknown        1024 MB      0 B



Version-Release number of selected component (if applicable):
Host:
kernel-4.18.0-131.el8.x86_64
qemu-kvm-4.1.0-9.module+el8.1.0+4210+23b2046a

Guest:
win10 x86_64 with virtio-win-prewhql-0.1-173.iso


How reproducible:
6/10


Steps to Reproduce:
1. Create a data image.
# qemu-img create -f qcow2 /home/kvm_autotest_root/images/storage0.qcow2 1G

2. boot guest with below cmd lines.
/usr/libexec/qemu-kvm \
    -S  \
    -name 'avocado-vt-vm1' \
    -machine q35  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x1  \
    -chardev socket,id=qmp_id_qmpmonitor1,path=/var/tmp/avocado_s6zcbby_/monitor-qmpmonitor1-20190912-075627-uRXP4kw0,server,nowait \
    -mon chardev=qmp_id_qmpmonitor1,mode=control  \
    -chardev socket,id=qmp_id_catch_monitor,path=/var/tmp/avocado_s6zcbby_/monitor-catch_monitor-20190912-075627-uRXP4kw0,server,nowait \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=id88dYsg \
    -chardev socket,path=/var/tmp/avocado_s6zcbby_/serial-serial0-20190912-075627-uRXP4kw0,id=chardev_serial0,nowait,server \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20190912-075627-uRXP4kw0,path=/var/tmp/avocado_s6zcbby_/seabios-20190912-075627-uRXP4kw0,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20190912-075627-uRXP4kw0,iobase=0x402 \
    -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
    -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
    -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0-root-port-3,addr=0x0 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/win10-64-virtio-scsi.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1,bootindex=0 \
    -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
    -device virtio-net-pci,mac=9a:08:62:dd:bf:0d,id=id7kIXjf,netdev=idmDu0Ql,bus=pcie.0-root-port-4,addr=0x0  \
    -netdev tap,id=idmDu0Ql,vhost=on \
    -m 14336  \
    -smp 12,maxcpus=12,cores=6,threads=1,sockets=2  \
    -cpu 'SandyBridge',hv_stimer,hv_synic,hv_vpindex,hv_reset,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv-tlbflush,+kvm_pv_unhalt \
    -drive id=drive_cd1,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/windows/winutils.iso \
    -device scsi-cd,id=cd1,drive=drive_cd1 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
    -monitor stdio \

3. Hotplug a disk (virtio-blk).

# nc -U /var/tmp/avocado_s6zcbby_/monitor-qmpmonitor1-20190912-075627-uRXP4kw0 
{"QMP": {"version": {"qemu": {"micro": 0, "minor": 1, "major": 4}, "package": "qemu-kvm-4.1.0-9.module+el8.1.0+4210+23b2046a"}, "capabilities": ["oob"]}}

{"execute": "qmp_capabilities", "id": "x99anzUf"}
{"return": {}, "id": "x99anzUf"}

{"execute": "human-monitor-command", "arguments": {"command-line": "drive_add auto id=drive_stg0,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/storage0.qcow2"}, "id": "gFMnNeo9"}
{"return": "OK\r\n", "id": "gFMnNeo9"}

{"execute": "device_add", "arguments": {"driver": "virtio-blk-pci", "id": "stg0", "drive": "drive_stg0", "bus": "pcie_extra_root_port_0", "addr": "0x0"}, "id": "s68c6zsX"}
{"return": {}, "id": "s68c6zsX"}

4. Create a partition on the new disk and run iozone on it.

(1) wmic diskdrive get index
C:\>
Index
0
1

C:\Users\Administrator>diskpart
DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           30 GB      0 B
  Disk 1    Online         1024 MB  1024 MB

(2) create a partition
DISKPART> select disk 1
DISKPART> create partition primary
DISKPART> assign letter=I
DISKPART> format fs=ntfs quick

(3) iozone test
D:\Iozone\iozone.exe -azR -r 64k -n 125M -g 512M -M -i 0 -i 1 -b I:\iozone_test -f I:\testfile

5. Unplug the disk after iozone finishes.
{"execute": "device_del", "arguments": {"id": "stg0"}, "id": "fvzl1vMr"}
{"return": {}, "id": "fvzl1vMr"}
{"timestamp": {"seconds": 1568621214, "microseconds": 113326}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/stg0/virtio-backend"}}
{"timestamp": {"seconds": 1568621214, "microseconds": 169202}, "event": "DEVICE_DELETED", "data": {"device": "stg0", "path": "/machine/peripheral/stg0"}}

6. Repeat steps 3 to 5 100 times.
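The loop in steps 3 to 5 can be sketched as a small Python helper that builds the QMP payloads for one hotplug/unplug cycle. This is a sketch only, reusing the ids, bus name, and image path from the command lines above; it omits the `qmp_capabilities` handshake, the in-guest iozone run, and waiting for the `DEVICE_DELETED` events before the next iteration.

```python
import json

def hotplug_cycle_cmds(idx):
    """Build the QMP commands for one hotplug/unplug cycle of the
    virtio-blk data disk, following steps 3-5 of the reproducer.
    Ids, bus, and image path are taken from the report above."""
    image = "/home/kvm_autotest_root/images/storage0.qcow2"
    return [
        # Step 3a: attach the backing drive via the legacy HMP command.
        {"execute": "human-monitor-command",
         "arguments": {"command-line":
             "drive_add auto id=drive_stg0,if=none,snapshot=off,"
             "aio=threads,cache=none,format=qcow2,file=%s" % image},
         "id": "drive-%d" % idx},
        # Step 3b: plug the virtio-blk device into the spare root port.
        {"execute": "device_add",
         "arguments": {"driver": "virtio-blk-pci", "id": "stg0",
                       "drive": "drive_stg0",
                       "bus": "pcie_extra_root_port_0", "addr": "0x0"},
         "id": "plug-%d" % idx},
        # Step 5: unplug the disk again after the in-guest I/O is done.
        {"execute": "device_del",
         "arguments": {"id": "stg0"},
         "id": "unplug-%d" % idx},
    ]

# Each serialized line can be written to the QMP monitor socket
# (e.g. via nc -U, as shown above) after the capabilities handshake.
lines = [json.dumps(c) for c in hotplug_cycle_cmds(1)]
```

A real driver loop would send these over the monitor socket and block on the two `DEVICE_DELETED` events between iterations, since re-plugging before the guest finishes removal is exactly the window this bug exercises.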


Actual results:
The disk status changes to "Unknown" after several repetitions, and the disk can no longer be operated.

wmic diskdrive get index (can't get disk index 1)
C:\>
Index
0

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           30 GB      0 B
  Disk 1    Unknown        1024 MB      0 B


Expected results:
The disk works well and iozone runs on it successfully.



Additional info:
Linux guests do not hit this issue.
virtio-scsi disks do not hit this issue; only virtio-blk does.
With machine type pc, ran 55 times without hitting this issue, but hit Bug 1678290 (no "DEVICE_DELETED" event in QMP after "device_del") after 55 times.


Found by automation case:
python3 ConfigTest.py --testcase=block_hotplug.default.block_virtio.fmt_qcow2.default.with_plug.with_repetition.one_pci  --imageformat=qcow2 --guestname=Win10 --driveformat=virtio_scsi --nicmodel=virtio_net  --platform=x86_64 --clone=no --verbose --machine=q35



Tested with qemu-kvm-3.1.0-20.module+el8.0.0.z+3438+2851622e.1 and virtio-win-prewhql-0.1-173.iso, also hit this issue.

Tested with qemu-kvm-3.1.0-20.module+el8.0.0.z+3438+2851622e.1 and virtio-win-1.9.6-1.el8.iso, also hit this issue.

Tested with qemu-kvm-4.1.0-9.module+el8.1.0+4210+23b2046a and virtio-win-1.9.6-1.el8.iso, also hit this issue.

Comment 1 Xueqiang Wei 2019-09-16 12:21:36 UTC
Not sure whether this is a virtio-win bug.


wyu,

Please update your results if I missed important information. Many thanks.

Comment 5 Yu Wang 2019-09-18 06:07:06 UTC
Ran with ws2012r2.
Still hit this issue with virtio-win-prewhql-158 on q35.
Cannot hit this issue with the pc chipset with either virtio-win-prewhql-158 or virtio-win-prewhql-172.


Thanks
Yu Wang

Comment 9 xiagao 2019-11-07 05:47:23 UTC
Can reproduce this issue with win8.1-64 and q35 machine type.

How reproducible:
3/5

pkg:
virtio-win-prewhql-172
qemu-kvm-4.1.0-14.module+el8.1.0+4548+ed1300f4

Steps:
1) Boot up the guest with a new data disk (virtio-blk-pci).
2) Format this disk.
3) Hot-unplug this data disk.

After step3,
1) can get response from qmp.
{"execute":"device_del","arguments":{"id":"data-disk"}}
{"return": {}}
{"timestamp": {"seconds": 1573101016, "microseconds": 723609}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/data-disk/virtio-backend"}}
{"timestamp": {"seconds": 1573101016, "microseconds": 777354}, "event": "DEVICE_DELETED", "data": {"device": "data-disk", "path": "/machine/peripheral/data-disk"}}

2) The disk status changes to "Unknown" and the disk cannot be operated.
wmic diskdrive get index (can't get disk index 1)
C:\>
Index
0

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           30 GB      0 B
  Disk 1    Unknown        1024 MB      0 B



Thanks
Xiaoling

Comment 15 Yongxue Hong 2020-06-18 14:20:00 UTC
Virtio-scsi also has the same issue.

qemu version: qemu-kvm-5.0.0-0.scrmod+el8.3.0+6977+09119430.wrb200610
Guest OS: Windows 2019

Boot a guest:
/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35 \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 10240  \
    -smp 12,maxcpus=12,cores=6,threads=1,dies=1,sockets=2  \
    -cpu 'SandyBridge',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0xfff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
    -chardev socket,nowait,path=/var/tmp/monitor-qmpmonitor1-20200618-085617-XdQfEHuQ,server,id=qmp_id_qmpmonitor1  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,nowait,path=/var/tmp/monitor-catch_monitor-20200618-085617-XdQfEHuQ,server,id=qmp_id_catch_monitor  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idlqPtNl \
    -chardev socket,nowait,path=/var/tmp/serial-serial0-20200618-085617-XdQfEHuQ,server,id=chardev_serial0 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20200618-085617-XdQfEHuQ,path=/var/tmp/seabios-20200618-085617-XdQfEHuQ,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20200618-085617-XdQfEHuQ,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-ehci,id=ehci,bus=pcie.0,addr=0x3 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -blockdev node-name=file_image1,driver=file,aio=threads,filename=/home/kvm_autotest_root/images/win2019-64-virtio.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-2,addr=0x0 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:2e:58:bf:f1:cf,id=idBOPDl6,netdev=idiVKl70,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=idiVKl70,vhost=on \
    -blockdev node-name=file_cd1,driver=file,read-only=on,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device ide-cd,id=cd1,drive=drive_cd1,bootindex=1,write-cache=on,bus=ide.0,unit=0  \
    -vnc :10  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -device pcie-root-port,id=pcie_extra_root_port_0,multifunction=on,bus=pcie.0,addr=0x4,chassis=5 \

Create a qcow2 image:
qemu-img create -f qcow2 /home/kvm_autotest_root/images/stg.qcow2 10G


Hotplug then unplug:
[root@hp-dl388g8-16 ~]# nc -U /var/tmp/monitor-qmpmonitor1-20200618-085617-XdQfEHuQ
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 0, "major": 5}, "package": "qemu-kvm-5.0.0-0.scrmod+el8.3.0+6977+09119430.wrb200610"}, "capabilities": ["oob"]}}
{"execute": "qmp_capabilities", "id": "yRSKDtOl"}
{"return": {}, "id": "yRSKDtOl"}
{'execute': 'blockdev-add', 'arguments': {'node-name': 'file_stg', 'driver': 'file', 'aio': 'threads', 'filename': '/home/kvm_autotest_root/images/stg.qcow2', 'cache': {'direct': true, 'no-flush': false}}, 'id': 'BVpMEXJy'}
{"return": {}, "id": "BVpMEXJy"}
{'execute': 'blockdev-add', 'arguments': {'node-name': 'drive_stg', 'driver': 'qcow2', 'cache': {'direct': true, 'no-flush': false}, 'file': 'file_stg'}, 'id': 'fIsU5L6M'}
{"return": {}, "id": "fIsU5L6M"}
{"execute": "device_add", "arguments": {"id": "virtio_scsi_pci1", "driver": "virtio-scsi-pci", "bus": "pcie_extra_root_port_0", "addr": "0x0"}, "id": "E0GPZKDS"}
{"return": {}, "id": "E0GPZKDS"}
{"execute": "device_add", "arguments": {"driver": "scsi-hd", "id": "stg", "bus": "virtio_scsi_pci1.0", "drive": "drive_stg", "write-cache": "on"}, "id": "6eu3wZfZ"}
{"return": {}, "id": "6eu3wZfZ"}
{'execute': 'device_del', 'arguments': {'id': 'stg'}, 'id': 'cNdDWa9Z'}
{"timestamp": {"seconds": 1592489036, "microseconds": 823830}, "event": "DEVICE_DELETED", "data": {"device": "stg", "path": "/machine/peripheral/stg"}}
{"return": {}, "id": "cNdDWa9Z"}
{'execute': 'blockdev-del', 'arguments': {'node-name': 'drive_stg'}, 'id': '22HV7WUT'}
{"return": {}, "id": "22HV7WUT"}
{'execute': 'blockdev-del', 'arguments': {'node-name': 'file_stg'}, 'id': 'eRvTLPON'}
{"return": {}, "id": "eRvTLPON"}
{'execute': 'device_del', 'arguments': {'id': 'virtio_scsi_pci1'}, 'id': 'ZPrMG8Md'}
{"return": {}, "id": "ZPrMG8Md"}
{"timestamp": {"seconds": 1592489053, "microseconds": 911792}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio_scsi_pci1/virtio-backend"}}
{"timestamp": {"seconds": 1592489053, "microseconds": 964424}, "event": "DEVICE_DELETED", "data": {"device": "virtio_scsi_pci1", "path": "/machine/peripheral/virtio_scsi_pci1"}}

{'execute': 'blockdev-add', 'arguments': {'node-name': 'file_stg', 'driver': 'file', 'aio': 'threads', 'filename': '/home/kvm_autotest_root/images/stg.qcow2', 'cache': {'direct': true, 'no-flush': false}}, 'id': 'BVpMEXJy'}
{"return": {}, "id": "BVpMEXJy"}
{'execute': 'blockdev-add', 'arguments': {'node-name': 'drive_stg', 'driver': 'qcow2', 'cache': {'direct': true, 'no-flush': false}, 'file': 'file_stg'}, 'id': 'fIsU5L6M'}
{"return": {}, "id": "fIsU5L6M"}
{"execute": "device_add", "arguments": {"id": "virtio_scsi_pci1", "driver": "virtio-scsi-pci", "bus": "pcie_extra_root_port_0", "addr": "0x0"}, "id": "E0GPZKDS"}
{"return": {}, "id": "E0GPZKDS"}
{"execute": "device_add", "arguments": {"driver": "scsi-hd", "id": "stg", "bus": "virtio_scsi_pci1.0", "drive": "drive_stg", "write-cache": "on"}, "id": "6eu3wZfZ"}
{"return": {}, "id": "6eu3wZfZ"}
{'execute': 'device_del', 'arguments': {'id': 'stg'}, 'id': 'cNdDWa9Z'}
{"timestamp": {"seconds": 1592489081, "microseconds": 450440}, "event": "DEVICE_DELETED", "data": {"device": "stg", "path": "/machine/peripheral/stg"}}
{"return": {}, "id": "cNdDWa9Z"}
{'execute': 'blockdev-del', 'arguments': {'node-name': 'drive_stg'}, 'id': '22HV7WUT'}
{"return": {}, "id": "22HV7WUT"}
{'execute': 'blockdev-del', 'arguments': {'node-name': 'file_stg'}, 'id': 'eRvTLPON'}
{"return": {}, "id": "eRvTLPON"}
{'execute': 'device_del', 'arguments': {'id': 'virtio_scsi_pci1'}, 'id': 'ZPrMG8Md'}
{"return": {}, "id": "ZPrMG8Md"} ------> No DEVICE_DELETED event for virtio_scsi_pci1 after 30 min.

And hotplugging the device again fails with "Duplicate ID 'virtio_scsi_pci1' for device":
{'execute': 'blockdev-add', 'arguments': {'node-name': 'file_stg', 'driver': 'file', 'aio': 'threads', 'filename': '/home/kvm_autotest_root/images/stg.qcow2', 'cache': {'direct': true, 'no-flush': false}}, 'id': 'BVpMEXJy'}
{"return": {}, "id": "BVpMEXJy"}
{'execute': 'blockdev-add', 'arguments': {'node-name': 'drive_stg', 'driver': 'qcow2', 'cache': {'direct': true, 'no-flush': false}, 'file': 'file_stg'}, 'id': 'fIsU5L6M'}
{"return": {}, "id": "fIsU5L6M"}
{"execute": "device_add", "arguments": {"id": "virtio_scsi_pci1", "driver": "virtio-scsi-pci", "bus": "pcie_extra_root_port_0", "addr": "0x0"}, "id": "E0GPZKDS"}
{"id": "E0GPZKDS", "error": {"class": "GenericError", "desc": "Duplicate ID 'virtio_scsi_pci1' for device"}}

Comment 16 Yongxue Hong 2020-06-19 01:30:42 UTC
Sorry about https://bugzilla.redhat.com/show_bug.cgi?id=1752465#c15; please ignore it.

Comment 20 menli@redhat.com 2021-02-24 12:10:27 UTC
Hi vadim,

This bug will be closed automatically. I tried to reproduce it and hit the scenario in https://bugzilla.redhat.com/show_bug.cgi?id=1833187
So this problem seems similar to Bug 1833187; both are hot-plug/unplug problems.
I can track this issue under Bug 1833187 from my side, and I agree to close this one if there is no plan to fix it these days.
What's your opinion? Do you prefer to keep it or let it be closed automatically?


Thanks
Menghuan

Comment 21 Vadim Rozenfeld 2021-02-24 13:04:53 UTC
Hi Menghuan,
I have no problem with closing this bug, if we are going to keep https://bugzilla.redhat.com/show_bug.cgi?id=1833187
alive.
Best,
Vadim.

Comment 22 Amnon Ilan 2021-03-13 12:16:30 UTC

*** This bug has been marked as a duplicate of bug 1833187 ***

