Bug 1671650

Summary: [virtio-win][vioscsi & viostor] iometer test sometimes runs very slowly on a data disk

Product: Red Hat Enterprise Linux 9
Component: virtio-win
virtio-win sub component: virtio-win-prewhql
Status: CLOSED WONTFIX
Severity: unspecified
Priority: unspecified
Version: 9.1
Target Milestone: rc
Target Release: 9.1
Hardware: Unspecified
OS: Unspecified
Reporter: Peixiu Hou <phou>
Assignee: Vadim Rozenfeld <vrozenfe>
QA Contact: Peixiu Hou <phou>
CC: ailan, coli, demeng, fdeutsch, kanderso, lijin, mdean, menli, vrozenfe, wenli, xiagao, ymankad
Keywords: Reopened, Triaged
Flags: phou: needinfo-, phou: needinfo-
Doc Type: If docs needed, set a value
Last Closed: 2022-10-25 02:27:43 UTC
Type: Bug
Bug Depends On: 1682882
Bug Blocks: 1771318, 1897024
Paste some test results for win10-32 and win7-32.

On the win10-32 guest the reproduction rate is low (hit it 1 time). A rerun of 20 iterations all completed within 1000s:

(01/20) repeat1.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.i386.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (408.96 s)
(02/20) repeat2.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.i386.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (388.31 s)
(03/20) repeat3.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.i386.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (393.77 s)
(04/20) repeat4.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.i386.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (399.90 s)
(05/20) repeat5.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.i386.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (406.58 s)

On the win7-32 guest the reproduction rate is much higher; all 5 runs exceeded 1000s:

(2/6) repeat1.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.sp1.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (3063.68 s)
(3/6) repeat2.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.sp1.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (2608.46 s)
(4/6) repeat3.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.sp1.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (6097.69 s)
(5/6) repeat4.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.sp1.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (4292.73 s)
(6/6) repeat5.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.sp1.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (2878.56 s)

Best Regards~
Peixiu

Hit this issue on a viostor data disk with virtio-win-prewhql-171. Used versions:
virtio-win-prewhql-171
qemu-kvm-3.1.0-22.module+el8.0.1+3032+a09688b9.x86_64
kernel-4.18.0-80.el8.x86_64
seabios-1.12.0-1.module+el8+2706+3c6581b6.x86_64

Best Regards~
Peixiu

Hi Vadim,

I have a question I want to confirm with you. I also hit another problem related to this issue: when iometer is running slowly in the guest (e.g. it stays at "run 64 of 128" for a long time) and I send {'execute': 'system_powerdown', 'id': 'Ey1yNefY'} over QMP, the guest does not shut down for a long time (10 minutes). Should I file a new bug for it, or track it in this one?

Thanks a lot~
Peixiu

(In reply to Peixiu Hou from comment #3)

Can you ping this slow-running VM from the host or any other VM to see the ping response time?

All the best,
Vadim.

(In reply to Vadim Rozenfeld from comment #4)

I tried to ping the slow-running VM from the host:

[root@dell-per440-03 home]# ping 10.73.75.63
PING 10.73.75.63 (10.73.75.63) 56(84) bytes of data.
64 bytes from 10.73.75.63: icmp_seq=1 ttl=128 time=0.338 ms
64 bytes from 10.73.75.63: icmp_seq=2 ttl=128 time=0.148 ms
64 bytes from 10.73.75.63: icmp_seq=3 ttl=128 time=0.136 ms
64 bytes from 10.73.75.63: icmp_seq=4 ttl=128 time=0.191 ms
64 bytes from 10.73.75.63: icmp_seq=5 ttl=128 time=0.208 ms
64 bytes from 10.73.75.63: icmp_seq=6 ttl=128 time=0.136 ms
64 bytes from 10.73.75.63: icmp_seq=7 ttl=128 time=0.168 ms
64 bytes from 10.73.75.63: icmp_seq=8 ttl=128 time=0.194 ms
64 bytes from 10.73.75.63: icmp_seq=9 ttl=128 time=0.196 ms
64 bytes from 10.73.75.63: icmp_seq=10 ttl=128 time=0.183 ms
64 bytes from 10.73.75.63: icmp_seq=11 ttl=128 time=0.252 ms
64 bytes from 10.73.75.63: icmp_seq=12 ttl=128 time=0.225 ms
64 bytes from 10.73.75.63: icmp_seq=13 ttl=128 time=0.232 ms
64 bytes from 10.73.75.63: icmp_seq=14 ttl=128 time=0.259 ms
64 bytes from 10.73.75.63: icmp_seq=15 ttl=128 time=0.222 ms
64 bytes from 10.73.75.63: icmp_seq=16 ttl=128 time=0.250 ms
64 bytes from 10.73.75.63: icmp_seq=17 ttl=128 time=0.210 ms
64 bytes from 10.73.75.63: icmp_seq=18 ttl=128 time=0.256 ms
64 bytes from 10.73.75.63: icmp_seq=19 ttl=128 time=0.219 ms
64 bytes from 10.73.75.63: icmp_seq=20 ttl=128 time=0.245 ms
64 bytes from 10.73.75.63: icmp_seq=21 ttl=128 time=0.144 ms

Thanks~
Peixiu

I also tried a similar scenario with "shutdown -r -t 0" in the guest instead of {'execute': 'system_powerdown', 'id': 'Ey1yNefY'} from QMP. Tried 10 times, could not reproduce this issue; the guest shuts down immediately~

(In reply to Peixiu Hou from comment #6)

Sorry, wrong description for "shutdown -r -t 0": it is "shutdown -s -t 0" in the guest that cannot reproduce this issue~

Best Regards~
Peixiu

(In reply to Peixiu Hou from comment #7)

Thanks a lot for your update. In any case the ping response times are quite reasonable and indicate that the system itself is not stalled (it is not experiencing some sort of interrupt storm).

Best regards,
Vadim.
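For reference, the QMP step discussed above can be replayed by hand against a live guest. A minimal sketch, assuming the VM was started with an extra -qmp unix:/tmp/qmp.sock,server,nowait option (a hypothetical socket path; the automation harness wires up its own) and that socat is available on the host. Note that QMP only accepts commands after a capabilities handshake:

    # Host side: send system_powerdown over a QMP UNIX socket.
    # The sleeps leave time for QEMU's greeting banner and its
    # {"return": {}} replies to come back before the pipe closes.
    {
        printf '{"execute": "qmp_capabilities"}\n'
        sleep 1
        printf '{"execute": "system_powerdown", "id": "Ey1yNefY"}\n'
        sleep 1
    } | socat - UNIX-CONNECT:/tmp/qmp.sock

On a healthy guest the powerdown begins within a few seconds; in the slow-iometer state described above it reportedly does not complete for 10 minutes or more.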
(In reply to Vadim Rozenfeld from comment #8)

Thanks Vadim, so do you think we need to file a new bug for it?

Best regards~
Peixiu

(In reply to Peixiu Hou from comment #9)

Yes, please do it. Just could you please confirm that I understand correctly: system_powerdown from the qemu monitor and "shutdown -s -t 0" from the VM command prompt both work smoothly, while QMP does not. Right?

Best regards,
Vadim.

(In reply to Vadim Rozenfeld from comment #10)

Hi Vadim,

scenario 1: while iometer is running slowly, "shutdown -s -t 0" from the VM command prompt.
scenario 2: while iometer is running slowly, {'execute': 'system_powerdown', 'id': 'Ey1yNefY'} from QMP.

Tried the two scenarios above: scenario 1 works smoothly, but scenario 2 does not. I did not try system_powerdown from the qemu monitor. This issue was found by automated testing, and iometer running slowly is not easy to reproduce manually, so this one is not easy to reproduce manually either~

Best Regards~
Peixiu

Hi Vadim,

I reproduced it with system_powerdown from the qemu monitor: while iometer is running slowly, system_powerdown from the qemu monitor also cannot shut down the VM.

Here is a summary of what I tried:

scenario 1: while iometer runs slowly, "shutdown -s -t 0" from the VM command prompt shuts down the VM smoothly.
scenario 2: while iometer runs slowly, {'execute': 'system_powerdown', 'id': 'Ey1yNefY'} from QMP cannot shut down the VM.
scenario 3: while iometer runs slowly, system_powerdown from the qemu monitor cannot shut down the VM.
scenario 4: while iometer runs slowly, pinging the VM from the host succeeds.

Best Regards~
Peixiu

(In reply to Peixiu Hou from comment #12)

Hi Vadim, based on the above results, do we still need to file a new bug for it?

Best Regards~
Peixiu

(In reply to Peixiu Hou from comment #13)

It looks like something is wrong on the qemu side. I wouldn't open a new bug then; just keep the QMP and qemu monitor cases as additional indications of the problem. I also wonder whether QMP is slow to respond to 'system_powerdown' only, or whether it is a generic problem for all/most qmp/qemu monitor commands?

All the best,
Vadim.

(In reply to Vadim Rozenfeld from comment #14)

Hi Vadim,

I tried to reproduce this issue with "system_powerdown" and "system_reset" from QMP. The results:

1. Ran the job 30 times, doing system_powerdown from QMP during iometer; reproduced this issue 17 times. Once iometer runs slowly, this issue occurs 100% of the time.
2. Ran the job 30 times, doing system_reset from QMP during iometer; did not hit this issue. Checking the logs, the iometer-runs-slowly problem also occurred, but system_reset was acted on in the guest immediately.

Thanks~
Peixiu

Also hit the comment #0 issue on a win7-64 guest, tested on a rhel7.7 host. Used versions:
virtio-win-prewhql-172
kernel-3.10.0-1057.el7
qemu-kvm-rhev-2.12.0-33.el7
seabios-1.11.0-2.el7

Best Regards~
Peixiu

Hit this issue on win7-32. Versions:
kernel-4.18.0-129.el8.x86_64
qemu-kvm-4.0.0-6.module+el8.1.0+3736+a2aefea3.x86_64
virtio-win-prewhql-173
seabios-1.12.0-4.module+el8.1.0+3876+ec1667b7.x86_64

Hit this issue on win7-32. Versions:
kernel-4.18.0-147.el8.x86_64
qemu-kvm-4.1.0-14.module+el8.1.0+4548+ed1300f4.x86_64
virtio-win-prewhql-172
seabios-1.12.0-5.module+el8.1.0+4022+29a53beb.x86_64

(In reply to Yu Wang from comment #18)

Also hit it on win8.1-64 with the same versions.

(In reply to xiagao from comment #19)

Can you please upload the "slow" and "normal" result files? I would like to see the difference between them.

Thanks,
Vadim

Created attachment 1635565 [details]
iometer result file for the slow run (9520 s)
Created attachment 1635566 [details]
iometer result file for the fast run (348 s)
(In reply to Vadim Rozenfeld from comment #20)

Hi Vadim,

I added the two iometer result files as attachments~ one for the slow run, which finished iometer in 9520s, and one for the normal run, which finished in 348s.

Thanks~
Peixiu

Thanks a lot, Peixiu Hou. Could you let me know exactly which two tests you mean? One of the files has two tests inside, and the other combines around fifty different runs. It would be great if you could point me to the timestamps or offsets of the case where the problem happens.

All the best,
Vadim.

I see, here are the differences:

                              fast            slow
    Write MBps                184.073189      109.283971
    Transactions per Second   23561.368181    13988.348228
    Average Response Time     0.527282        18.299974
    Maximum Response Time     3.038314        89.562067
    Bytes Written             386981888       1026528395264
    Write I/Os                47239           125308642
    % CPU Utilization         17.003341       4.997737
    % User Time               1.649476        10.073912
    % Privileged Time         15.86639        17.867497
    % DPC Time                4.437876        5.03771
    % Interrupt Time          0.039273        3.013814
    Interrupts per Second     34792.699194    600772.805485
    CPU Effectiveness         1385.690472     2798.936233
    Packets/Second            5.529672        9.431819

There is a huge difference between the overall and the per-second statistics:

Bytes Written: 1026528395264 / 386981888 = 2652.65
Transactions per Second: 13988.348228 / 23561.368181 = 0.59

It will be quite interesting to check the System Event Log after a slow run and see whether any disk-I/O-related problems are reported.

Best,
Vadim.

Hi Vadim,

I checked the slow run's event log. In the System event log nothing is shown; in the Application event log there is a WMI error logged while the run shows as slow. Details:

Event filter with query "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage > 99" could not be reactivated in namespace "//./root/CIMV2" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected.

- System
  - Provider
    [ Name]  Microsoft-Windows-WMI
    [ Guid]  {1edeee53-0afe-4609-b846-d8c0b2075b1f}
    [ EventSourceName]  WinMgmt
  - EventID 10
    [ Qualifiers]  49152
    Version 0
    Level 2
    Task 0
    Opcode 0
    Keywords 0x80000000000000
  - TimeCreated
    [ SystemTime]  2019-11-12T19:50:43.000000000Z
    EventRecordID 1275
    Correlation
  - Execution
    [ ProcessID]  0
    [ ThreadID]  0
    Channel Application
    Computer WIN-GMF90RKEL1K
    Security
- EventData
  //./root/CIMV2
  SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage > 99
  0x80041003

And I uploaded all the event logs from running iometer 50 times; the slow-run logs were generated from 03:48:12 - 06:26:13, you can check them~

Thanks~
Peixiu

Created attachment 1636055 [details]
all event files from running iometer 50 times
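As a quick sanity check, the ratios quoted in the comparison above can be recomputed from the raw numbers; a small host-side awk sketch, with the values copied from the fast/slow columns:

    # Overall vs per-second ratios from the fast/slow result files above.
    awk 'BEGIN {
        printf "Bytes Written, slow/fast:           %.2f\n", 1026528395264 / 386981888
        printf "Transactions per Second, slow/fast: %.2f\n", 13988.348228 / 23561.368181
        printf "Interrupts per Second, slow/fast:   %.2f\n", 600772.805485 / 34792.699194
    }'

Besides the two ratios quoted above (about 2652.65 and 0.59), note that the slow run also reports roughly 17x the interrupt rate of the fast run.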
(In reply to Peixiu Hou from comment #28)

Thanks. Unfortunately, you were right, there is nothing suspicious in the log file. Do you think it is possible to trigger the slow-writing event and try to collect some KVM-related statistics (something like "./perf stat -e 'kvm:*' -a sleep 60s" or so)?

Best regards,
Vadim.

phou, could you handle comment #29 please?

(In reply to Vadim Rozenfeld from comment #29)

Hi Vadim,

I did this while reproducing the slow-run issue: I collected the statistics with sleep 60s and with sleep 1h while the bug was being reproduced. Pasted below; hope they are useful.

1.
root@dell-per440-05 /home/kar # perf stat -e 'kvm:*' -a sleep 60s

Performance counter stats for 'system wide':

    6,358,287  kvm:kvm_entry
            0  kvm:kvm_hypercall
      846,721  kvm:kvm_hv_hypercall
    3,392,446  kvm:kvm_pio
      842,584  kvm:kvm_fast_mmio
            0  kvm:kvm_cpuid
      449,507  kvm:kvm_apic
    6,358,178  kvm:kvm_exit
      319,564  kvm:kvm_inj_virq
            0  kvm:kvm_inj_exception
        9,384  kvm:kvm_page_fault
      229,382  kvm:kvm_msr
      425,012  kvm:kvm_cr
       11,572  kvm:kvm_pic_set_irq
      220,102  kvm:kvm_apic_ipi
      315,593  kvm:kvm_apic_accept_irq
      315,591  kvm:kvm_eoi
      306,314  kvm:kvm_pv_eoi
            0  kvm:kvm_nested_vmrun
            0  kvm:kvm_nested_intercepts
            0  kvm:kvm_nested_vmexit
            0  kvm:kvm_nested_vmexit_inject
            0  kvm:kvm_nested_intr_vmexit
            0  kvm:kvm_invlpga
            0  kvm:kvm_skinit
            0  kvm:kvm_emulate_insn
            0  kvm:vcpu_match_mmio
            0  kvm:kvm_write_tsc_offset
            0  kvm:kvm_update_master_clock
            0  kvm:kvm_track_tsc
            0  kvm:kvm_pml_full
        7,431  kvm:kvm_ple_window_update
            0  kvm:kvm_pvclock_update
            0  kvm:kvm_wait_lapic_expire
            0  kvm:kvm_enter_smm
            0  kvm:kvm_pi_irte_update
            0  kvm:kvm_hv_notify_acked_sint
            0  kvm:kvm_hv_synic_set_irq
            0  kvm:kvm_hv_synic_send_eoi
            0  kvm:kvm_hv_synic_set_msr
            0  kvm:kvm_hv_stimer_set_config
            0  kvm:kvm_hv_stimer_set_count
            0  kvm:kvm_hv_stimer_start_periodic
            0  kvm:kvm_hv_stimer_start_one_shot
            0  kvm:kvm_hv_stimer_callback
            0  kvm:kvm_hv_stimer_expiration
            0  kvm:kvm_hv_stimer_cleanup
            0  kvm:kvm_avic_incomplete_ipi
            0  kvm:kvm_avic_unaccelerated_access
      241,305  kvm:kvm_hv_timer_state
      846,437  kvm:kvm_hv_flush_tlb
            0  kvm:kvm_hv_flush_tlb_ex
            0  kvm:kvm_hv_send_ipi
            0  kvm:kvm_hv_send_ipi_ex
            0  kvm:kvm_pv_tlb_flush
            0  kvm:kvm_nested_vmenter_failed
    3,383,509  kvm:kvm_userspace_exit
      241,304  kvm:kvm_vcpu_wakeup
       11,569  kvm:kvm_set_irq
       11,569  kvm:kvm_ioapic_set_irq
            0  kvm:kvm_ioapic_delayed_eoi_inj
        7,141  kvm:kvm_msi_set_irq
        3,864  kvm:kvm_ack_irq
            0  kvm:kvm_mmio
    6,766,300  kvm:kvm_fpu
            0  kvm:kvm_age_page
            0  kvm:kvm_try_async_get_page
            0  kvm:kvm_async_pf_doublefault
            0  kvm:kvm_async_pf_not_present
            0  kvm:kvm_async_pf_ready
            0  kvm:kvm_async_pf_completed
       44,860  kvm:kvm_halt_poll_ns

60.015614150 seconds time elapsed

2.
root@dell-per440-05 /home/kar # perf stat -e 'kvm:*' -a sleep 1h

Performance counter stats for 'system wide':

  374,657,241  kvm:kvm_entry
            0  kvm:kvm_hypercall
   43,826,947  kvm:kvm_hv_hypercall
  175,740,118  kvm:kvm_pio
   37,185,470  kvm:kvm_fast_mmio
          126  kvm:kvm_cpuid
   40,665,492  kvm:kvm_apic
  374,657,175  kvm:kvm_exit
   26,462,013  kvm:kvm_inj_virq
        7,518  kvm:kvm_inj_exception
      408,162  kvm:kvm_page_fault
   20,680,269  kvm:kvm_msr
   39,127,730  kvm:kvm_cr
      694,272  kvm:kvm_pic_set_irq
   19,985,413  kvm:kvm_apic_ipi
   26,043,579  kvm:kvm_apic_accept_irq
   26,043,480  kvm:kvm_eoi
   25,348,834  kvm:kvm_pv_eoi
            0  kvm:kvm_nested_vmrun
            0  kvm:kvm_nested_intercepts
            0  kvm:kvm_nested_vmexit
            0  kvm:kvm_nested_vmexit_inject
            0  kvm:kvm_nested_intr_vmexit
            0  kvm:kvm_invlpga
            0  kvm:kvm_skinit
        7,518  kvm:kvm_emulate_insn
            0  kvm:vcpu_match_mmio
            0  kvm:kvm_write_tsc_offset
            0  kvm:kvm_update_master_clock
            0  kvm:kvm_track_tsc
            0  kvm:kvm_pml_full
    8,792,089  kvm:kvm_ple_window_update
            0  kvm:kvm_pvclock_update
            0  kvm:kvm_wait_lapic_expire
            0  kvm:kvm_enter_smm
            0  kvm:kvm_pi_irte_update
            0  kvm:kvm_hv_notify_acked_sint
            0  kvm:kvm_hv_synic_set_irq
            0  kvm:kvm_hv_synic_send_eoi
            0  kvm:kvm_hv_synic_set_msr
            0  kvm:kvm_hv_stimer_set_config
            0  kvm:kvm_hv_stimer_set_count
            0  kvm:kvm_hv_stimer_start_periodic
            0  kvm:kvm_hv_stimer_start_one_shot
            0  kvm:kvm_hv_stimer_callback
            0  kvm:kvm_hv_stimer_expiration
            0  kvm:kvm_hv_stimer_cleanup
            0  kvm:kvm_avic_incomplete_ipi
            0  kvm:kvm_avic_unaccelerated_access
   23,326,365  kvm:kvm_hv_timer_state
   43,825,037  kvm:kvm_hv_flush_tlb
            0  kvm:kvm_hv_flush_tlb_ex
            0  kvm:kvm_hv_send_ipi
            0  kvm:kvm_hv_send_ipi_ex
            0  kvm:kvm_pv_tlb_flush
            0  kvm:kvm_nested_vmenter_failed
  175,278,780  kvm:kvm_userspace_exit
   23,326,336  kvm:kvm_vcpu_wakeup
      694,266  kvm:kvm_set_irq
      694,266  kvm:kvm_ioapic_set_irq
            0  kvm:kvm_ioapic_delayed_eoi_inj
      755,908  kvm:kvm_msi_set_irq
      231,934  kvm:kvm_ack_irq
            0  kvm:kvm_mmio
  350,556,899  kvm:kvm_fpu
            0  kvm:kvm_age_page
            0  kvm:kvm_try_async_get_page
            0  kvm:kvm_async_pf_doublefault
            0  kvm:kvm_async_pf_not_present
            0  kvm:kvm_async_pf_ready
            0  kvm:kvm_async_pf_completed
    8,368,961  kvm:kvm_halt_poll_ns

3600.015585224 seconds time elapsed

Best Regards~
Peixiu

Thank you, Peixiu Hou. Can I ask you to check the system event log file for anything unusual, like warnings or errors in the disk/storage subsystems, when the problem appears?

Best,
Vadim.

Sorry for the late reply.

I tried to reproduce this issue with automation. I only hit it with a win7-32 guest, which failed with the following error:

Host_RHEL.m8.u2.product_av.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.sp1.io-github-autotest-qemu.iometer_windows.aio_threads: ERROR: Timeout expired while waiting for shell command to complete: 'cd C:\\Iometer && cmd /c Iometer.exe /c iometer.icf /r C:\\autotest_iometer_result.csv' (output: '') (1056.68 s)

There is a Warning with Event ID 129 for source vioscsi in the system event log.

Build info:
qemu-kvm-4.2.0-21.module+el8.2.1+6586+8b7713b9.x86_64
kernel-4.18.0-193.el8.x86_64
seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64
virtio-win-prewhql-0.1-184.iso

Thanks
Menghuan

Hit the same issue on Win8-64. pkg:
virtio-win-prewhql-0.1-185.iso
qemu-kvm-4.2.0-19.module+el8.2.0+6296+6b821950.x86_64
kernel-4.18.0-193.el8.x86_64
seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64

Created attachment 1697923 [details]
full automation logs
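As a rough cross-check of the two perf captures quoted a few comments above, the headline counters can be normalized to per-second rates; a small host-side awk sketch, with the numbers copied from the 60 s and 1 h outputs:

    # Per-second rates from the two perf captures above.
    awk 'BEGIN {
        printf "kvm_exit/s, 60s run: %.0f\n", 6358178 / 60
        printf "kvm_exit/s, 1h run:  %.0f\n", 374657175 / 3600
        printf "kvm_pio/s,  60s run: %.0f\n", 3392446 / 60
        printf "kvm_pio/s,  1h run:  %.0f\n", 175740118 / 3600
    }'

Both captures come out at roughly 105,000 exits per second, so the exit rate does not appear elevated in the longer capture, which lines up with the earlier observation that the system itself is not stalled.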
Hit the same issue on Win8-64. pkg:
virtio-win-1.9.13-1.el8.iso
qemu-kvm-5.1.0-4.module+el8.3.0+7846+ae9b566f.x86_64
kernel-4.18.0-235.el8.x86_64
seabios-1.14.0-1.module+el8.3.0+7638+07cf13d2.x86_64

Hit the same issue on Win8-64 and Win10.i386. pkg:
kernel-4.18.0-193.el8.x86_64
qemu-kvm-4.2.0-19.module+el8.2.0+6296+6b821950.x86_64
virtio-win-prewhql-190
seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64

Hit this same issue on a Win8-32 guest in virtio-win-prewhql-191 testing:

023-Host_RHEL.m8.u2.product_av.qcow2.virtio_blk.up.virtio_net.Guest.Win8.i386.io-github-autotest-qemu.single_driver_install.with_netkvm.virtio_win_iso_media.q35

pkg:
kernel-4.18.0-193.el8.x86_64
qemu-kvm-4.2.0-19.module+el8.2.0+6296+6b821950.x86_64
virtio-win-prewhql-191
seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64

(In reply to menli from comment #34)

Event ID 129 means "Reset to device", issued by the miniport to the device:
https://docs.microsoft.com/en-us/archive/blogs/ntdebugging/understanding-storage-timeouts-and-event-129-errors

Technically it means that the backend was not responsive for more than 30 seconds. I wonder whether the storage is backed by a local or a network-located (SAN) file?

Thanks,
Vadim.

(In reply to Vadim Rozenfeld from comment #43)

The storage is backed by a local file, thanks.

Thanks
Menghuan
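As a side note, those Event ID 129 warnings can be listed from inside the Windows guest without opening Event Viewer; a minimal sketch for cmd.exe (wevtutil ships with Windows, and the provider name matches the vioscsi source reported above):

    rem List the 10 newest "Reset to device" (Event ID 129) warnings
    rem logged by the vioscsi miniport, newest first.
    wevtutil qe System /q:"*[System[Provider[@Name='vioscsi'] and (EventID=129)]]" /f:text /c:10 /rd:true

Each hit corresponds to a request the driver considered unanswered for more than 30 seconds, per the article linked above.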
I tried to reproduce this issue on win8.1-32/64 guests. Running 20 times on each guest, I also hit this issue on the win8.1-64 guest:

(07/21) repeat6.Host_RHEL.m8.u5.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.Win8.x86_64.1.io-github-autotest-qemu.iometer_windows.aio_threads.q35: ERROR: Timeout expired while waiting for shell command to complete: 'cd C:\\Iometer && cmd /c Iometer.exe /c iometer.icf /r C:\\autotest_iometer_result.csv' (output: '') (1065.92 s)

Used versions:
kernel-4.18.0-311.el8.kpq1.x86_64
qemu-kvm-4.2.0-52.module+el8.5.0+11386+ef5875dd.x86_64
seabios-bin-1.13.0-2.module+el8.3.0+7353+9de0a3cc.noarch
virtio-win-prewhql-202

Based on the above results, I prefer to keep this issue open~

Thanks~
Peixiu

(In reply to Peixiu Hou from comment #46)

Fully agree with you, Peixiu. Let's keep it open. Meanwhile, can you please share the iometer.reg file?

All the best,
Vadim.

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Hi Vadim,

This issue is hit not only on the win7 OS but also on win8.1 and win10 (not 100%). Could you help extend the stale date for this issue?

Thanks~
Peixiu

Hi Vadim,

Could you help move this bug to the RHEL 9.1 product? Thanks~

Best Regards~
Peixiu

Changed the stale date to match the RHEL 9.1 schedule.

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.
Created attachment 1525734 [details]
Iometer configuration

Description of problem:
Running an iometer test on a vioscsi data disk (1G) is sometimes very slow. Normally it can be finished within 1000s, but sometimes it needs more than 3000s, or even more than 9000s, to complete.

Below are parts of the Iometer configuration; the full file is in the attachment:

Version 2006.07.27
'TEST SETUP ====================================================================
'Run Time
' hours minutes seconds
0 0 2
'Ramp Up Time (s)
0
'Default Disk Workers to Spawn
NUMBER_OF_CPUS
'Default Network Workers to Spawn
0
'Record Results
ALL
'Worker Cycling
' start step step type
1 1 LINEAR
'Disk Cycling
' start step step type
1 1 LINEAR
'Queue Depth Cycling
' start end step step type
1 128 2 EXPONENTIAL
'Test Type
CYCLE_OUTSTANDING_IOS_AND_TARGETS
'END test setup
'RESULTS DISPLAY ===============================================================
'Update Frequency,Update Type
0,WHOLE_TEST
'Bar chart 1 statistic
Total I/Os per Second
'Bar chart 2 statistic
Total MBs per Second
'Bar chart 3 statistic
Average I/O Response Time (ms)
'Bar chart 4 statistic
Maximum I/O Response Time (ms)
'Bar chart 5 statistic
% CPU Utilization (total)
'Bar chart 6 statistic
Total Error Count
'END results display
'ACCESS SPECIFICATIONS =========================================================
'Access specification name,default assignment
Default,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
2048,100,67,100,0,1,0,0
'Access specification name,default assignment
512B; 100% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
512,100,100,0,0,1,0,0
'Access specification name,default assignment
512B; 75% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
512,100,75,0,0,1,0,0
'Access specification name,default assignment
512B; 50% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
512,100,50,0,0,1,0,0
'Access specification name,default assignment
512B; 25% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
512,100,25,0,0,1,0,0
'Access specification name,default assignment
512B; 0% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
512,100,0,0,0,1,0,0
'Access specification name,default assignment
4K; 100% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
4096,100,100,0,0,1,0,0
'Access specification name,default assignment
4K; 75% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
4096,100,75,0,0,1,0,0
'Access specification name,default assignment
4K; 50% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
4096,100,50,0,0,1,0,0
'Access specification name,default assignment
4K; 25% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
4096,100,25,0,0,1,0,0
'Access specification name,default assignment
4K; 0% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
4096,100,0,0,0,1,0,0
'Access specification name,default assignment
16K; 100% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
16384,100,100,0,0,1,0,0
'Access specification name,default assignment
16K; 75% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
16384,100,75,0,0,1,0,0
'Access specification name,default assignment
16K; 50% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
16384,100,50,0,0,1,0,0
'Access specification name,default assignment
16K; 25% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
16384,100,25,0,0,1,0,0
'Access specification name,default assignment
16K; 0% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
16384,100,0,0,0,1,0,0
'Access specification name,default assignment
32K; 100% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,100,0,0,1,0,0
'Access specification name,default assignment
32K; 75% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,75,0,0,1,0,0
'Access specification name,default assignment
32K; 50% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,50,0,0,1,0,0
'Access specification name,default assignment
32K; 25% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,25,0,0,1,0,0
'Access specification name,default assignment
32K; 0% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,0,0,0,1,0,0
'Access specification name,default assignment
All in one,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
512,5,100,0,0,1,0,0
512,5,75,0,0,1,0,0
512,5,50,0,0,1,0,0
512,5,25,0,0,1,0,0
512,5,0,0,0,1,0,0
4096,5,100,0,0,1,0,0
4096,5,75,0,0,1,0,0
4096,5,50,0,0,1,0,0
4096,5,25,0,0,1,0,0
4096,5,0,0,0,1,0,0
16384,5,100,0,0,1,0,0
16384,5,75,0,0,1,0,0
16384,5,50,0,0,1,0,0
16384,5,25,0,0,1,0,0
16384,5,0,0,0,1,0,0
32768,5,100,0,0,1,0,0
32768,5,75,0,0,1,0,0
32768,5,50,0,0,1,0,0
32768,5,25,0,0,1,0,0
32768,5,0,0,0,1,0,0
'Access specification name,default assignment
2K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
2048,100,100,0,0,1,0,0
'Access specification name,default assignment
4K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
4096,100,100,0,0,1,0,0
'Access specification name,default assignment
8K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
8192,100,100,0,0,1,0,0
'Access specification name,default assignment
16K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
16384,100,100,0,0,1,0,0
'Access specification name,default assignment
32K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,100,0,0,1,0,0
'Access specification name,default assignment
64K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
65536,100,100,0,0,1,0,0
'Access specification name,default assignment
2K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
2048,100,0,0,0,1,0,0
'Access specification name,default assignment
4K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
4096,100,0,0,0,1,0,0
'Access specification name,default assignment
8K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
8192,100,0,0,0,1,0,0
'Access specification name,default assignment
16K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
16384,100,0,0,0,1,0,0
'Access specification name,default assignment
32K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
32768,100,0,0,0,1,0,0
'Access specification name,default assignment
64K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
65536,100,0,0,0,1,0,0
'Access specification name,default assignment
128K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
131072,100,100,0,0,1,0,0
'Access specification name,default assignment
256K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
262144,100,100,0,0,1,0,0
'Access specification name,default assignment
128K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
131072,100,0,0,0,1,0,0
'Access specification name,default assignment
256K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
262144,100,0,0,0,1,0,0
'END access specifications
'MANAGER LIST ==================================================================
'Manager ID, manager name
1,WIN-9TQ3NRAB83E
'Manager network address
10.66.72.156
'Worker
Worker 1
'Worker type
DISK
'Default target settings for worker
'Number of outstanding IOs,test connection rate,transactions per connection
1,DISABLED,1
'Disk maximum size,starting sector
0,0
'End default target settings for worker
'Assigned access specs
2K; 100% Read
4K; 100% Read
8K; 100% Read
16K; 100% Read
32K; 100% Read
64K; 100% Read
128K; 100% Read
256K; 100% Read
2K; 100% Write
4K; 100% Write
8K; 100% Write
16K; 100% Write
32K; 100% Write
64K; 100% Write
128K; 100% Write
256K; 100% Write
'End assigned access specs
'Target assignments
'Target
I:local
'Target type
DISK
'End target
'End target assignments
'End worker
'Worker
Worker 2
'Worker type
DISK
'Default target settings for worker
'Number of outstanding IOs,test connection rate,transactions per connection
1,DISABLED,1
'Disk maximum size,starting sector
0,0
'End default target settings for worker
'Assigned access specs
2K; 100% Read
4K; 100% Read
8K; 100% Read
16K; 100% Read
32K; 100% Read
64K; 100% Read
128K; 100% Read
256K; 100% Read
2K; 100% Write
4K; 100% Write
8K; 100% Write
16K; 100% Write
32K; 100% Write
64K; 100% Write
128K; 100% Write
256K; 100% Write
'End assigned access specs
'Target assignments
'Target
I:local
'Target type
DISK
'End target
'End target assignments
'End worker
'End manager
'END manager list
Version 2006.07.27

Version-Release number of selected component (if applicable):
kernel-4.18.0-62.el8.x86_64
qemu-kvm-2.12.0-57.module+el8+2683+02b3b955.x86_64
seabios-bin-1.11.1-3.module+el8+2529+a9686a4d.noarch
virtio-win-1.9.6-1.el8.iso

How reproducible:
about 40%

Steps to Reproduce:
1. Boot the guest up:
=============================================================================
MALLOC_PERTURB_=1 /usr/libexec/qemu-kvm \
-name 'avocado-vt-vm1' \
-machine pc \
-nodefaults \
-device VGA,bus=pci.0,addr=0x2 \
-device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pci.0 \
-device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pci.0 \
-device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pci.0 \
-device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pci.0 \
-device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x3 \
-drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/win7-32-sp1-virtio-scsi.qcow2 \
-device scsi-hd,id=image1,drive=drive_image1 \
-drive id=drive_disk1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/storage.qcow2 \
-device scsi-hd,id=disk1,drive=drive_disk1 \
-device virtio-net-pci,mac=9a:a4:a5:a6:a7:a8,id=idBkzKrg,vectors=4,netdev=idP2WXUc,bus=pci.0,addr=0x4 \
-netdev tap,id=idP2WXUc,vhost=on \
-m 14336 \
-smp 24,maxcpus=24,cores=12,threads=1,sockets=2 \
-cpu 'Skylake-Server',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
-drive id=drive_cd1,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/windows/winutils.iso \
-device scsi-cd,id=cd1,drive=drive_cd1 \
-device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
-vnc :0 \
-rtc base=localtime,clock=host,driftfix=slew \
-boot order=cdn,once=c,menu=off,strict=off \
-enable-kvm \
-monitor stdio
===============================================================================
2. Send the command: echo select disk 1 > cmd && echo create partition primary >> cmd && echo select partition 1 >> cmd && echo assign letter=I >> cmd && echo exit >> cmd && diskpart /s cmd
3. Send the command: format I: /FS:NTFS /V:local /Q /y
4. cmd /c xcopy /y D:\Iometer\* C:\Iometer\
5. Register Iometer: cmd /c regedit /s C:\Iometer\iometer.reg
6. Copy the iometer.icf file to the guest as C:\Iometer\iometer.icf
7. Change directory to C:\Iometer, double-click iometer.icf, and start the iometer test.
8. Check the iometer run time.

Actual results:
Completion needs 3000s, 5000s, or 9000s.

Expected results:
Completion within 1000s.

Additional info:
1. Hit this issue with virtio-win-prewhql-163.
2. Hit this issue with virtio-win-1.9.7-3.
3. Hit this issue on win7-32, win10-32, and win2016 guests.
4. Reproducing it manually showed that jobs 1-63 finish fast (from 2K 100% read up to 128K 100% read); the stretch from the 256K 100% read job to the 2K 100% write job takes very long; once 2K 100% write has started, the remaining jobs also finish fast. Hope this info helps.
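For anyone re-running steps 5-8 by hand, a small cmd.exe wrapper makes the wall-clock time visible. The /c and /r flags mirror the automation's invocation quoted in the comments, and the paths are the ones used in the steps above:

    rem Register Iometer, run the configured test, and print start/end times.
    cd /d C:\Iometer
    regedit /s iometer.reg
    echo start: %time%
    Iometer.exe /c iometer.icf /r C:\autotest_iometer_result.csv
    echo end:   %time%

A normal pass completes in well under 1000 s; the slow runs reported in this bug take anywhere from 3000 s to over 9000 s.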