Bug 1683534

Summary: Failed to re-hotplug the deleted disk on win2019 sometimes
Product: Red Hat Enterprise Linux Advanced Virtualization
Component: qemu-kvm
qemu-kvm sub component: Machine Types
Version: 8.1
Status: CLOSED WONTFIX
Severity: medium
Priority: medium
Target Milestone: rc
Target Release: 8.0
Hardware: Unspecified
OS: Windows
Reporter: lchai <lchai>
Assignee: Vadim Rozenfeld <vrozenfe>
QA Contact: qing.wang <qinwang>
CC: ailan, aliang, coli, juzhang, lijin, phou, qinwang, rbalakri, virt-maint, vrozenfe, wyu, xuwei
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2021-03-15 07:33:38 UTC
Bug Blocks: 1744438

Description lchai 2019-02-27 07:42:47 UTC
Description of problem:
After unplugging the disk and then hotplugging it again, QMP and HMP can detect the disk, but it cannot be found in the guest.

Version-Release number of selected component (if applicable):
Host:
kernel-4.18.0-67.el8.x86_64
qemu-kvm-3.1.0-15.module+el8+2792+e33e01a0.x86_64
seabios-bin-1.11.1-3.module+el8+2529+a9686a4d.noarch
sgabios-bin-0.20170427git-2.module+el8+2529+a9686a4d.noarch

Guest:
Win2019
virtio-win-prewhql-0.1-163.iso

How reproducible:
3/10

Steps to Reproduce:
1. Boot the guest with the following command line:
/usr/libexec/qemu-kvm \
        -S \
        -name 'avocado-vt-vm1' \
        -machine q35 \
        -nodefaults \
        -device VGA,bus=pcie.0,addr=0x1 \
        -object iothread,id=iothread0 \
        -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/win2019-64-virtio.qcow2 \
        -device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
        -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,iothread=iothread0,bus=pcie.0-root-port-3,addr=0x0 \
        -m 15360 \
        -smp 12,maxcpus=12,cores=6,threads=1,sockets=2 \
        -cpu 'Opteron_G5',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
        -device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
        -device virtio-net-pci,mac=9a:31:32:33:34:35,id=idYxHDLn,vectors=4,netdev=idoOknQC,bus=pcie.0-root-port-4,addr=0x0 \
        -netdev tap,id=idoOknQC,vhost=on \
        -device pcie-root-port,id=pcie.0-root-port-5,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
        -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie.0-root-port-5,addr=0x0 \
        -drive id=drive_cd1,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/windows/winutils.iso \
        -device scsi-cd,id=cd1,drive=drive_cd1,bootindex=1 \
        -device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
        -device qemu-xhci,id=usb1,bus=pcie.0-root-port-2,addr=0x0 \
        -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
        -device pcie-root-port,id=pcie_extra_root_port_0,slot=6,chassis=6,addr=0x6,bus=pcie.0 \
        -vnc :1 \
        -rtc base=localtime,clock=host,driftfix=slew \
        -enable-kvm \
        -qmp tcp:0:4446,server,nowait \
        -monitor stdio

2. Prepare a data disk and hotplug it via QMP;
# qemu-img create -f qcow2 storage0.qcow2 1G
# telnet localhost 4446
{"execute": "qmp_capabilities"}
{"return": {}}

{"execute": "human-monitor-command", "arguments": {"command-line": "drive_add auto id=drive_stg0,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/storage0.qcow2"}, "id": "Yu6Mfcom"}
{"return": "OK\r\n", "id": "Yu6Mfcom"}

{"execute": "device_add", "arguments": {"driver": "virtio-blk-pci", "id": "stg0", "drive": "drive_stg0", "iothread": "iothread0", "bus": "pcie_extra_root_port_0"}, "id": "t07OBwFH"}
{"return": {}, "id": "t07OBwFH"}
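The two-step hotplug above (backend via the `drive_add` HMP command wrapped in `human-monitor-command`, frontend via `device_add`) can be scripted. A minimal sketch that builds the same command sequence; the helper name `hotplug_cmds` is illustrative, while the command names, driver, iothread, and bus values are the ones used in this reproduction:

```python
import json

def hotplug_cmds(img_path, drive_id="drive_stg0", dev_id="stg0"):
    """Build the QMP command sequence from step 2 as JSON strings."""
    # Backend: drive_add has no native QMP equivalent in qemu-kvm 3.1,
    # so it is passed through human-monitor-command.
    drive_add = {
        "execute": "human-monitor-command",
        "arguments": {
            "command-line": (
                "drive_add auto id=%s,if=none,snapshot=off,"
                "aio=threads,cache=none,format=qcow2,file=%s"
                % (drive_id, img_path)
            )
        },
    }
    # Frontend: attach a virtio-blk device on the spare root port.
    device_add = {
        "execute": "device_add",
        "arguments": {
            "driver": "virtio-blk-pci",
            "id": dev_id,
            "drive": drive_id,
            "iothread": "iothread0",
            "bus": "pcie_extra_root_port_0",
        },
    }
    return [json.dumps(drive_add), json.dumps(device_add)]
```

Each returned string can be written, newline-terminated, to the QMP socket (tcp:4446 in the CLI above) after the `qmp_capabilities` handshake.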

3. In the guest, check the disk status, create a partition, format the disk, and run the iozone test on it;
wmic diskdrive get index
Index
0
1

diskpart>
list disk
select disk 1
create partition primary
select partition 1
assign letter=I
format fs=ntfs
exit

D:\Iozone\iozone.exe -azR -r 64k -n 125M -g 512M -M -i 0 -i 1 -b I:\iozone_test -f I:\testfile
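The interactive diskpart session above can be made non-interactive for automation by writing the same commands to a script and running `diskpart /s script.txt` in the guest. A sketch that generates that script; the helper name is illustrative, and the disk number and drive letter default to the values used in this reproduction:

```python
def diskpart_script(disk=1, letter="I"):
    """Return a diskpart script mirroring the interactive steps in step 3."""
    return "\n".join([
        "list disk",
        "select disk %d" % disk,       # disk 1 is the hotplugged data disk here
        "create partition primary",
        "select partition 1",
        "assign letter=%s" % letter,
        "format fs=ntfs",
        "exit",
    ])
```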

4. After the iozone test finishes, unplug the disk via QMP, and check the device status in the guest;
{"execute": "device_del", "arguments": {"id": "stg0"}, "id": "XVosfhHr"}
{"return": {}, "id": "XVosfhHr"}
{"timestamp": {"seconds": 1551251295, "microseconds": 687533}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/stg0/virtio-backend"}}
{"timestamp": {"seconds": 1551251295, "microseconds": 740903}, "event": "DEVICE_DELETED", "data": {"device": "stg0", "path": "/machine/peripheral/stg0"}}

guest:
wmic diskdrive get index
Index
0

5. Re-plug this disk, and check its status in qmp/hmp;
{"execute": "human-monitor-command", "arguments": {"command-line": "drive_add auto id=drive_stg0,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/storage0.qcow2"}, "id": "vbXJbRsZ"}
{"return": "OK\r\n", "id": "vbXJbRsZ"}
{"execute": "device_add", "arguments": {"driver": "virtio-blk-pci", "id": "stg0", "drive": "drive_stg0", "iothread": "iothread0", "bus": "pcie_extra_root_port_0"}, "id": "ovGz8Gdx"}
{"return": {}, "id": "ovGz8Gdx"}

(qemu) info block
drive_stg0 (#block1118): /home/kvm_autotest_root/images/storage0.qcow2 (qcow2)
    Attached to:      /machine/peripheral/stg0/virtio-backend
    Cache mode:       writeback, direct
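On the host side, whether the frontend reattached can be checked mechanically by extracting the "Attached to" path from the `info block` output shown above. A sketch, assuming the output format printed by qemu-kvm 3.1; the helper name is illustrative:

```python
def attached_path(info_block_output, drive_id):
    """Return the 'Attached to' path for drive_id from 'info block'
    output, or None if the drive is absent or has no such line."""
    in_drive = False
    for line in info_block_output.splitlines():
        if line.startswith(drive_id):
            in_drive = True          # entered this drive's entry
            continue
        if in_drive:
            stripped = line.strip()
            if stripped.startswith("Attached to:"):
                return stripped.split(":", 1)[1].strip()
            if line and not line.startswith(" "):
                in_drive = False     # next drive entry began
    return None
```

Note that this only confirms the QEMU-side attachment (which succeeds in this bug); the failure is that the guest never enumerates the disk.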


6. Check the disk index in the guest;
wmic diskdrive get index

Actual results:
Index
0
The re-hotplugged data disk cannot be found in the guest.


Expected results:
Index
0
1
The data disk is detected in the guest successfully.

Additional info:
1) In 3 of 10 runs, the disk could not be re-hotplugged successfully, and the Disk Management tool hung when this issue occurred.

2) According to the automation log, this issue also occurred with a virtio-scsi disk.

3) Host CPU info:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              24
On-line CPU(s) list: 0-23
Thread(s) per core:  2
Core(s) per socket:  6
Socket(s):           2
NUMA node(s):        4
Vendor ID:           AuthenticAMD
CPU family:          21
Model:               2
Model name:          AMD Opteron(tm) Processor 6344
Stepping:            0
CPU MHz:             1192.857
BogoMIPS:            5186.97
Virtualization:      AMD-V
L1d cache:           16K
L1i cache:           64K
L2 cache:            2048K
L3 cache:            6144K
NUMA node0 CPU(s):   0,2,4,6,8,10
NUMA node1 CPU(s):   12,14,16,18,20,22
NUMA node2 CPU(s):   1,3,5,7,9,11
NUMA node3 CPU(s):   13,15,17,19,21,23
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid amd_dcm aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb cpb hw_pstate ssbd ibpb vmmcall bmi1 arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold

Comment 1 lchai 2019-02-28 05:53:47 UTC
Update:
Ran the above test without the data plane (iothread) 8 times and hit this issue once.

Comment 2 Xueqiang Wei 2019-03-06 09:44:21 UTC
Also hit this issue on Windows 2016.

Comment 7 Ademar Reis 2020-02-05 22:54:33 UTC
QEMU has been recently split into sub-components and as a one-time operation to avoid breakage of tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks

Comment 12 RHEL Program Management 2021-03-15 07:33:38 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 13 qing.wang 2021-07-23 09:26:40 UTC
QE agrees to close this, as all Windows hotplug/unplug issues are tracked by
Bug 1833187 - [virtio-win][viostor] Data disk still display in disk manager after hotunplug.