Bug 1671650 - [virtio-win][vioscsi & viostor] iometer test sometimes runs very slowly on a data disk
Summary: [virtio-win][vioscsi & viostor] iometer test sometimes runs very slowly on a data disk
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virtio-win
Version: 9.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 9.1
Assignee: Vadim Rozenfeld
QA Contact: Peixiu Hou
URL:
Whiteboard:
Depends On: 1682882
Blocks: 1771318 1897024
 
Reported: 2019-02-01 07:31 UTC by Peixiu Hou
Modified: 2022-10-25 02:54 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-10-25 02:27:43 UTC
Type: Bug
Target Upstream Version:
Embargoed:
phou: needinfo-
phou: needinfo-


Attachments
Iometer configuration (8.72 KB, text/plain) - 2019-02-01 07:31 UTC, Peixiu Hou
iometer result file for slow one (9520 s) (2.73 MB, application/zip) - 2019-11-13 01:52 UTC, Peixiu Hou
iometer result file for fast one (348 s) (147.96 KB, application/zip) - 2019-11-13 01:52 UTC, Peixiu Hou
all event files when iometer running 50 times (1.07 MB, application/octet-stream) - 2019-11-14 09:20 UTC, Peixiu Hou
full automation logs (6.05 MB, application/zip) - 2020-06-18 09:23 UTC, menli@redhat.com

Description Peixiu Hou 2019-02-01 07:31:07 UTC
Created attachment 1525734 [details]
Iometer configuration

Description of problem:
The iometer test on a vioscsi data disk (1 GB) sometimes runs very slowly. Normally it finishes within 1000 s, but sometimes it needs more than 3000 s, or even more than 9000 s, to complete.

The following are the relevant parts of the Iometer configuration; the full file is attached:

Version 2006.07.27
'TEST SETUP ====================================================================
	
'Run Time
'	hours      minutes    seconds
	0          0          2
'Ramp Up Time (s)
	0
'Default Disk Workers to Spawn
	NUMBER_OF_CPUS
'Default Network Workers to Spawn
	0
'Record Results
	ALL
'Worker Cycling
'	start      step       step type
	1          1          LINEAR
'Disk Cycling
'	start      step       step type
	1          1          LINEAR
'Queue Depth Cycling
'	start      end        step       step type
	1          128        2          EXPONENTIAL
'Test Type
	CYCLE_OUTSTANDING_IOS_AND_TARGETS
'END test setup
'RESULTS DISPLAY ===============================================================
'Update Frequency,Update Type
	0,WHOLE_TEST
'Bar chart 1 statistic
	Total I/Os per Second
'Bar chart 2 statistic
	Total MBs per Second
'Bar chart 3 statistic
	Average I/O Response Time (ms)
'Bar chart 4 statistic
	Maximum I/O Response Time (ms)
'Bar chart 5 statistic
	% CPU Utilization (total)
'Bar chart 6 statistic
	Total Error Count
'END results display
'ACCESS SPECIFICATIONS =========================================================
'Access specification name,default assignment
	Default,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	2048,100,67,100,0,1,0,0
'Access specification name,default assignment
	512B; 100% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	512,100,100,0,0,1,0,0
'Access specification name,default assignment
	512B; 75% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	512,100,75,0,0,1,0,0
'Access specification name,default assignment
	512B; 50% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	512,100,50,0,0,1,0,0
'Access specification name,default assignment
	512B; 25% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	512,100,25,0,0,1,0,0
'Access specification name,default assignment
	512B; 0% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	512,100,0,0,0,1,0,0
'Access specification name,default assignment
	4K; 100% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	4096,100,100,0,0,1,0,0
'Access specification name,default assignment
	4K; 75% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	4096,100,75,0,0,1,0,0
'Access specification name,default assignment
	4K; 50% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	4096,100,50,0,0,1,0,0
'Access specification name,default assignment
	4K; 25% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	4096,100,25,0,0,1,0,0
'Access specification name,default assignment
	4K; 0% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	4096,100,0,0,0,1,0,0
'Access specification name,default assignment
	16K; 100% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	16384,100,100,0,0,1,0,0
'Access specification name,default assignment
	16K; 75% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	16384,100,75,0,0,1,0,0
'Access specification name,default assignment
	16K; 50% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	16384,100,50,0,0,1,0,0
'Access specification name,default assignment
	16K; 25% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	16384,100,25,0,0,1,0,0
'Access specification name,default assignment
	16K; 0% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	16384,100,0,0,0,1,0,0
'Access specification name,default assignment
	32K; 100% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	32768,100,100,0,0,1,0,0
'Access specification name,default assignment
	32K; 75% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	32768,100,75,0,0,1,0,0
'Access specification name,default assignment
	32K; 50% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	32768,100,50,0,0,1,0,0
'Access specification name,default assignment
	32K; 25% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	32768,100,25,0,0,1,0,0
'Access specification name,default assignment
	32K; 0% Read; 0% random,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	32768,100,0,0,0,1,0,0
'Access specification name,default assignment
	All in one,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	512,5,100,0,0,1,0,0
	512,5,75,0,0,1,0,0
	512,5,50,0,0,1,0,0
	512,5,25,0,0,1,0,0
	512,5,0,0,0,1,0,0
	4096,5,100,0,0,1,0,0
	4096,5,75,0,0,1,0,0
	4096,5,50,0,0,1,0,0
	4096,5,25,0,0,1,0,0
	4096,5,0,0,0,1,0,0
	16384,5,100,0,0,1,0,0
	16384,5,75,0,0,1,0,0
	16384,5,50,0,0,1,0,0
	16384,5,25,0,0,1,0,0
	16384,5,0,0,0,1,0,0
	32768,5,100,0,0,1,0,0
	32768,5,75,0,0,1,0,0
	32768,5,50,0,0,1,0,0
	32768,5,25,0,0,1,0,0
	32768,5,0,0,0,1,0,0
'Access specification name,default assignment
	2K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	2048,100,100,0,0,1,0,0
'Access specification name,default assignment
	4K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	4096,100,100,0,0,1,0,0
'Access specification name,default assignment
	8K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	8192,100,100,0,0,1,0,0
'Access specification name,default assignment
	16K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	16384,100,100,0,0,1,0,0
'Access specification name,default assignment
	32K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	32768,100,100,0,0,1,0,0
'Access specification name,default assignment
	64K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	65536,100,100,0,0,1,0,0
'Access specification name,default assignment
	2K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	2048,100,0,0,0,1,0,0
'Access specification name,default assignment
	4K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	4096,100,0,0,0,1,0,0
'Access specification name,default assignment
	8K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	8192,100,0,0,0,1,0,0
'Access specification name,default assignment
	16K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	16384,100,0,0,0,1,0,0
'Access specification name,default assignment
	32K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	32768,100,0,0,0,1,0,0
'Access specification name,default assignment
	64K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	65536,100,0,0,0,1,0,0
'Access specification name,default assignment
	128K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	131072,100,100,0,0,1,0,0
'Access specification name,default assignment
	256K; 100% Read,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	262144,100,100,0,0,1,0,0
'Access specification name,default assignment
	128K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	131072,100,0,0,0,1,0,0
'Access specification name,default assignment
	256K; 100% Write,NONE
'size,% of size,% reads,% random,delay,burst,align,reply
	262144,100,0,0,0,1,0,0
'END access specifications
'MANAGER LIST ==================================================================
'Manager ID, manager name
	1,WIN-9TQ3NRAB83E
'Manager network address
	10.66.72.156
'Worker
	Worker 1
'Worker type
	DISK
'Default target settings for worker
'Number of outstanding IOs,test connection rate,transactions per connection
	1,DISABLED,1
'Disk maximum size,starting sector
	0,0
'End default target settings for worker
'Assigned access specs
	2K; 100% Read
	4K; 100% Read
	8K; 100% Read
	16K; 100% Read
	32K; 100% Read
	64K; 100% Read
	128K; 100% Read
	256K; 100% Read
	2K; 100% Write
	4K; 100% Write
	8K; 100% Write
	16K; 100% Write
	32K; 100% Write
	64K; 100% Write
	128K; 100% Write
	256K; 100% Write
'End assigned access specs
'Target assignments
'Target
	I:local
'Target type
	DISK
'End target
'End target assignments
'End worker
'Worker
	Worker 2
'Worker type
	DISK
'Default target settings for worker
'Number of outstanding IOs,test connection rate,transactions per connection
	1,DISABLED,1
'Disk maximum size,starting sector
	0,0
'End default target settings for worker
'Assigned access specs
	2K; 100% Read
	4K; 100% Read
	8K; 100% Read
	16K; 100% Read
	32K; 100% Read
	64K; 100% Read
	128K; 100% Read
	256K; 100% Read
	2K; 100% Write
	4K; 100% Write
	8K; 100% Write
	16K; 100% Write
	32K; 100% Write
	64K; 100% Write
	128K; 100% Write
	256K; 100% Write
'End assigned access specs
'Target assignments
'Target
	I:local
'Target type
	DISK
'End target
'End target assignments
'End worker
'End manager
'END manager list
Version 2006.07.27 

Version-Release number of selected component (if applicable):
kernel-4.18.0-62.el8.x86_64
qemu-kvm-2.12.0-57.module+el8+2683+02b3b955.x86_64
seabios-bin-1.11.1-3.module+el8+2529+a9686a4d.noarch
virtio-win-1.9.6-1.el8.iso

How reproducible:
about 40%

Steps to Reproduce:
1. Boot the guest up:
=============================================================================
MALLOC_PERTURB_=1  /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1' \
    -machine pc  \
    -nodefaults \
    -device VGA,bus=pci.0,addr=0x2  \
    -device ich9-usb-ehci1,id=usb1,addr=0x1d.7,multifunction=on,bus=pci.0 \
    -device ich9-usb-uhci1,id=usb1.0,multifunction=on,masterbus=usb1.0,addr=0x1d.0,firstport=0,bus=pci.0 \
    -device ich9-usb-uhci2,id=usb1.1,multifunction=on,masterbus=usb1.0,addr=0x1d.2,firstport=2,bus=pci.0 \
    -device ich9-usb-uhci3,id=usb1.2,multifunction=on,masterbus=usb1.0,addr=0x1d.4,firstport=4,bus=pci.0 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pci.0,addr=0x3 \
    -drive id=drive_image1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/win7-32-sp1-virtio-scsi.qcow2 \
    -device scsi-hd,id=image1,drive=drive_image1 \
    -drive id=drive_disk1,if=none,snapshot=off,aio=threads,cache=none,format=qcow2,file=/home/kvm_autotest_root/images/storage.qcow2 \
    -device scsi-hd,id=disk1,drive=drive_disk1 \
    -device virtio-net-pci,mac=9a:a4:a5:a6:a7:a8,id=idBkzKrg,vectors=4,netdev=idP2WXUc,bus=pci.0,addr=0x4  \
    -netdev tap,id=idP2WXUc,vhost=on \
    -m 14336  \
    -smp 24,maxcpus=24,cores=12,threads=1,sockets=2  \
    -cpu 'Skylake-Server',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time \
    -drive id=drive_cd1,if=none,snapshot=off,aio=threads,cache=none,media=cdrom,file=/home/kvm_autotest_root/iso/windows/winutils.iso \
    -device scsi-cd,id=cd1,drive=drive_cd1 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1  \
    -vnc :0  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot order=cdn,once=c,menu=off,strict=off \
    -enable-kvm \
    -monitor stdio
===============================================================================
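
For completeness, the 1 GB data disk referenced by the second -drive line above can be created on the host beforehand. A minimal sketch (the path and size are taken from this report; adjust as needed):

# create the qcow2 data disk image used as drive_disk1 above
qemu-img create -f qcow2 /home/kvm_autotest_root/images/storage.qcow2 1G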
2. In the guest, create a primary partition on the data disk and assign it drive letter I: echo select disk 1 > cmd && echo create partition primary >> cmd && echo select partition 1 >> cmd && echo assign letter=I >> cmd && echo exit >> cmd && diskpart /s cmd
3. Format the disk: format I: /FS:NTFS /V:local /Q /y
4. Copy Iometer into the guest: cmd /c xcopy /y D:\Iometer\* C:\Iometer\
5. Register Iometer: cmd /c regedit /s C:\Iometer\iometer.reg
6. Copy the iometer.icf file to C:\Iometer\iometer.icf in the guest.
7. Change directory to C:\Iometer, double-click iometer.icf, and start the iometer test.
8. Check the iometer run time.

Actual results:
The test needs 3000 s, 5000 s, or even 9000 s to complete.

Expected results:
The test completes within 1000 s.

Additional info:
1. Hit this issue with virtio-win-prewhlq-163.
2. Hit this issue with virtio-win-1.9.7-3.
3. Hit this issue on win7-32, win10-32, win2016 guest.
4. When reproducing manually, jobs 1-63 (from 2K 100% read through 128K 100% read) finish quickly; the stretch from the 256K 100% read job until the 2K 100% write job starts takes a very long time; once 2K 100% write has started, it also finishes quickly. Hope this info helps.

Comment 1 Peixiu Hou 2019-02-01 07:58:31 UTC
Paste some test results for win10-32 and win7-32:
On the win10-32 guest the reproduction rate is low: hit it once; on a rerun of 20 iterations, all completed within 1000 s.
 (01/20) repeat1.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.i386.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (408.96 s)
 (02/20) repeat2.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.i386.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (388.31 s)
 (03/20) repeat3.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.i386.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (393.77 s)
 (04/20) repeat4.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.i386.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (399.90 s)
 (05/20) repeat5.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win10.i386.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (406.58 s)

On the win7-32 guest the reproduction rate is much higher; all 5 runs exceeded 1000 s:
 (2/6) repeat1.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.sp1.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (3063.68 s)
 (3/6) repeat2.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.sp1.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (2608.46 s)
 (4/6) repeat3.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.sp1.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (6097.69 s)
 (5/6) repeat4.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.sp1.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (4292.73 s)
 (6/6) repeat5.Host_RHEL.m8.u0.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.sp1.io-github-autotest-qemu.iometer_windows.aio_threads: PASS (2878.56 s)

Best Regards~
Peixiu

Comment 2 Peixiu Hou 2019-05-16 05:37:11 UTC
Hit this issue on viostor data disk with virtio-win-prewhql-171.

Used version:
virtio-win-prewhql-171
qemu-kvm-3.1.0-22.module+el8.0.1+3032+a09688b9.x86_64
kernel-4.18.0-80.el8.x86_64
seabios-1.12.0-1.module+el8+2706+3c6581b6.x86_64

Best Regards~
Peixiu

Comment 3 Peixiu Hou 2019-05-20 07:05:37 UTC
Hi vadim,

I have a question I want to confirm with you:
I also hit another problem related to this issue: when iometer is running slowly in the guest (e.g. it stays at "run 64 of 128" for a long time) and I issue {'execute': 'system_powerdown', 'id': 'Ey1yNefY'} over QMP, the guest cannot be shut down for a long time (10 minutes). Should I file a new bug for it, or track it in this one?

Thanks a lot~
Peixiu

Comment 4 Vadim Rozenfeld 2019-05-20 08:19:33 UTC
(In reply to Peixiu Hou from comment #3)
> Hi vadim,
> 
> I have a question want to confirm with you:
> I also hit another problem related this issue, when iometer running slowly
> in guest(like keep in "run 64 of 128" long time), try to do {'execute':
> 'system_powerdown', 'id': 'Ey1yNefY'} from qmp, the guest cannot be shutdown
> long time(10mins). If I need to file a new bug for it? or track it on this
> one?
> 
> Thanks a lot~
> Peixiu

Can you ping this slow-running VM from the host or any other VM to see the
ping response time?

All the best,
Vadim.

Comment 5 Peixiu Hou 2019-05-20 09:38:14 UTC
(In reply to Vadim Rozenfeld from comment #4)
> (In reply to Peixiu Hou from comment #3)
> > Hi vadim,
> > 
> > I have a question want to confirm with you:
> > I also hit another problem related this issue, when iometer running slowly
> > in guest(like keep in "run 64 of 128" long time), try to do {'execute':
> > 'system_powerdown', 'id': 'Ey1yNefY'} from qmp, the guest cannot be shutdown
> > long time(10mins). If I need to file a new bug for it? or track it on this
> > one?
> > 
> > Thanks a lot~
> > Peixiu
> 
> Can you ping this slow running VM from the hos or any other VM to see the 
> ping response time?
> 

I tried to ping the slow-running VM from the host; the result is as follows:
[root@dell-per440-03 home]# ping 10.73.75.63
PING 10.73.75.63 (10.73.75.63) 56(84) bytes of data.
64 bytes from 10.73.75.63: icmp_seq=1 ttl=128 time=0.338 ms
64 bytes from 10.73.75.63: icmp_seq=2 ttl=128 time=0.148 ms
64 bytes from 10.73.75.63: icmp_seq=3 ttl=128 time=0.136 ms
64 bytes from 10.73.75.63: icmp_seq=4 ttl=128 time=0.191 ms
64 bytes from 10.73.75.63: icmp_seq=5 ttl=128 time=0.208 ms
64 bytes from 10.73.75.63: icmp_seq=6 ttl=128 time=0.136 ms
64 bytes from 10.73.75.63: icmp_seq=7 ttl=128 time=0.168 ms
64 bytes from 10.73.75.63: icmp_seq=8 ttl=128 time=0.194 ms
64 bytes from 10.73.75.63: icmp_seq=9 ttl=128 time=0.196 ms
64 bytes from 10.73.75.63: icmp_seq=10 ttl=128 time=0.183 ms
64 bytes from 10.73.75.63: icmp_seq=11 ttl=128 time=0.252 ms
64 bytes from 10.73.75.63: icmp_seq=12 ttl=128 time=0.225 ms
64 bytes from 10.73.75.63: icmp_seq=13 ttl=128 time=0.232 ms
64 bytes from 10.73.75.63: icmp_seq=14 ttl=128 time=0.259 ms
64 bytes from 10.73.75.63: icmp_seq=15 ttl=128 time=0.222 ms
64 bytes from 10.73.75.63: icmp_seq=16 ttl=128 time=0.250 ms
64 bytes from 10.73.75.63: icmp_seq=17 ttl=128 time=0.210 ms
64 bytes from 10.73.75.63: icmp_seq=18 ttl=128 time=0.256 ms
64 bytes from 10.73.75.63: icmp_seq=19 ttl=128 time=0.219 ms
64 bytes from 10.73.75.63: icmp_seq=20 ttl=128 time=0.245 ms
64 bytes from 10.73.75.63: icmp_seq=21 ttl=128 time=0.144 ms

Thanks~
Peixiu

> All the best,
> Vadim.

Comment 6 Peixiu Hou 2019-05-20 09:43:08 UTC
I also tried a similar scenario with the step "shutdown -r -t 0" in the guest replacing the step {'execute': 'system_powerdown', 'id': 'Ey1yNefY'} from QMP; tried 10 times and could not reproduce this issue, the guest shuts down immediately~

Comment 7 Peixiu Hou 2019-05-20 09:48:49 UTC
(In reply to Peixiu Hou from comment #6)
> And I also tried to test similar scenario with step "shutdown -r -t 0" in
> guest replace the step {'execute': 'system_powerdown', 'id': 'Ey1yNefY'}
> from qmp, tried 10 times, cannot reproduce this issue, guest can be shutdown
> immediately~

sorry, wrong description: it is with the step "shutdown -s -t 0" (not "shutdown -r -t 0") in the guest that this issue cannot be reproduced~

Best Regards~
Peixiu

Comment 8 Vadim Rozenfeld 2019-05-21 01:34:09 UTC
(In reply to Peixiu Hou from comment #7)
> (In reply to Peixiu Hou from comment #6)
> > And I also tried to test similar scenario with step "shutdown -r -t 0" in
> > guest replace the step {'execute': 'system_powerdown', 'id': 'Ey1yNefY'}
> > from qmp, tried 10 times, cannot reproduce this issue, guest can be shutdown
> > immediately~
> 
> sorry, wrong description for "shutdown -r -t 0", with step "shutdown -s -t
> 0" in guest cannot reproduce this issue~
> 
> Best Regards~
> Peixiu

Thanks a lot for your update.
In any case the ping response times are quite reasonable, and indicate
that the system itself is not stalled (is not experiencing some sort
of interrupt storm).

Best regards,
Vadim.

Comment 9 Peixiu Hou 2019-05-21 02:02:35 UTC
(In reply to Vadim Rozenfeld from comment #8)
> (In reply to Peixiu Hou from comment #7)
> > (In reply to Peixiu Hou from comment #6)
> > > And I also tried to test similar scenario with step "shutdown -r -t 0" in
> > > guest replace the step {'execute': 'system_powerdown', 'id': 'Ey1yNefY'}
> > > from qmp, tried 10 times, cannot reproduce this issue, guest can be shutdown
> > > immediately~
> > 
> > sorry, wrong description for "shutdown -r -t 0", with step "shutdown -s -t
> > 0" in guest cannot reproduce this issue~
> > 
> > Best Regards~
> > Peixiu
> 
> Thanks a lot for your update.
> In any case ping response times are quite reasonable, and indicate
> that system itself is not stalled (does not experienced some sorts 
> of interrupt storm). 
> 
> Best regards,
> Vadim.

Thanks Vadim, so do you think we need to file a new bug for it? 

Best regards~
Peixiu

Comment 10 Vadim Rozenfeld 2019-05-21 04:35:21 UTC
(In reply to Peixiu Hou from comment #9)
> (In reply to Vadim Rozenfeld from comment #8)
> > (In reply to Peixiu Hou from comment #7)
> > > (In reply to Peixiu Hou from comment #6)
> > > > And I also tried to test similar scenario with step "shutdown -r -t 0" in
> > > > guest replace the step {'execute': 'system_powerdown', 'id': 'Ey1yNefY'}
> > > > from qmp, tried 10 times, cannot reproduce this issue, guest can be shutdown
> > > > immediately~
> > > 
> > > sorry, wrong description for "shutdown -r -t 0", with step "shutdown -s -t
> > > 0" in guest cannot reproduce this issue~
> > > 
> > > Best Regards~
> > > Peixiu
> > 
> > Thanks a lot for your update.
> > In any case ping response times are quite reasonable, and indicate
> > that system itself is not stalled (does not experienced some sorts 
> > of interrupt storm). 
> > 
> > Best regards,
> > Vadim.
> 
> Thanks vadim, so do you think we need to file a new bug for it? 

Yes. Please do it.
Just could you please confirm that I understand correctly: system_powerdown from the qemu monitor and 
"shutdown -s -t 0" from the VM cmd prompt both work smoothly, while QMP does not. Right?

Best regards,
Vadim.

> 
> Best regards~
> Peixiu

Comment 11 Peixiu Hou 2019-05-21 06:27:35 UTC
(In reply to Vadim Rozenfeld from comment #10)
> (In reply to Peixiu Hou from comment #9)
> > (In reply to Vadim Rozenfeld from comment #8)
> > > (In reply to Peixiu Hou from comment #7)
> > > > (In reply to Peixiu Hou from comment #6)
> > > > > And I also tried to test similar scenario with step "shutdown -r -t 0" in
> > > > > guest replace the step {'execute': 'system_powerdown', 'id': 'Ey1yNefY'}
> > > > > from qmp, tried 10 times, cannot reproduce this issue, guest can be shutdown
> > > > > immediately~
> > > > 
> > > > sorry, wrong description for "shutdown -r -t 0", with step "shutdown -s -t
> > > > 0" in guest cannot reproduce this issue~
> > > > 
> > > > Best Regards~
> > > > Peixiu
> > > 
> > > Thanks a lot for your update.
> > > In any case ping response times are quite reasonable, and indicate
> > > that system itself is not stalled (does not experienced some sorts 
> > > of interrupt storm). 
> > > 
> > > Best regards,
> > > Vadim.
> > 
> > Thanks vadim, so do you think we need to file a new bug for it? 
> 
> Yes. Please do it.
> Just could you please confirm that I understand it correctly that
> system_powerdown from qemu monitor and 
> "shutdown -s -t 0" from VM cmd prompt both work smoothly, while qmp is not.
> Right?
> 
Hi vadim,

scenario 1: while iometer is running slowly, "shutdown -s -t 0" from the VM cmd prompt.
scenario 2: while iometer is running slowly, {'execute': 'system_powerdown', 'id': 'Ey1yNefY'} from QMP.

Tried the 2 scenarios above: scenario 1 works smoothly, but scenario 2 does not.

I did not try system_powerdown from the qemu monitor: this issue was found in an automation test, and the iometer slow run is not easy to reproduce manually, so this one is not easy to reproduce manually either~

Best Regards~
Peixiu

> Best regards,
> Vadim.
> 
> > 
> > Best regards~
> > Peixiu

Comment 12 Peixiu Hou 2019-05-21 07:26:30 UTC
Hi vadim,

I reproduced it with system_powerdown from the qemu monitor: while iometer is running slowly, system_powerdown from the qemu monitor also cannot shut down the VM.

Here is a summary of what I tried:

scenario 1: while iometer runs slowly, "shutdown -s -t 0" from the VM cmd prompt shuts down the VM smoothly.
scenario 2: while iometer runs slowly, {'execute': 'system_powerdown', 'id': 'Ey1yNefY'} from QMP cannot shut down the VM.
scenario 3: while iometer runs slowly, system_powerdown from the qemu monitor cannot shut down the VM.
scenario 4: while iometer runs slowly, pinging the VM from the host succeeds.
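
For reference, a minimal way to drive scenario 2 from the host looks like the following sketch. It assumes the guest was additionally started with "-qmp unix:/tmp/qmp.sock,server,nowait" (an assumption; the command line in comment 0 only exposes -monitor stdio):

# negotiate QMP capabilities first, then send the powerdown request
socat - UNIX-CONNECT:/tmp/qmp.sock <<'EOF'
{"execute": "qmp_capabilities"}
{"execute": "system_powerdown", "id": "Ey1yNefY"}
EOF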


Best Regards~
Peixiu

Comment 13 Peixiu Hou 2019-05-22 01:38:22 UTC
(In reply to Peixiu Hou from comment #12)
> Hi vadim,
> 
> I reproduced it with system_powerdown from qemu monitor, during iometer
> running slowly, system_powerdown from qemu monitor also cannot shutdown the
> vm.
> 
> here is a summary I tried:
> 
> scenario 1: during iometer running slowly "shutdown -s -t 0" from VM cmd
> prompt, shutdown the vm smoothly.
> scenario 2: during iometer running slowly {'execute': 'system_powerdown',
> 'id': 'Ey1yNefY'} from qmp, cannot shutdown the vm.
> scenario 3: during iometer running slowly do system_powerdown from qemu
> monitor, cannot shutdown the vm.
> scenario 4: during iometer running slowly do ping vm from host successfully.
> 
> 
Hi Vadim, based on the above results, do we still need to file a new bug for it?

> Best Regards~
> Peixiu

Comment 14 Vadim Rozenfeld 2019-05-22 02:41:16 UTC
(In reply to Peixiu Hou from comment #13)
> (In reply to Peixiu Hou from comment #12)
> > Hi vadim,
> > 
> > I reproduced it with system_powerdown from qemu monitor, during iometer
> > running slowly, system_powerdown from qemu monitor also cannot shutdown the
> > vm.
> > 
> > here is a summary I tried:
> > 
> > scenario 1: during iometer running slowly "shutdown -s -t 0" from VM cmd
> > prompt, shutdown the vm smoothly.
> > scenario 2: during iometer running slowly {'execute': 'system_powerdown',
> > 'id': 'Ey1yNefY'} from qmp, cannot shutdown the vm.
> > scenario 3: during iometer running slowly do system_powerdown from qemu
> > monitor, cannot shutdown the vm.
> > scenario 4: during iometer running slowly do ping vm from host successfully.
> > 
> > 
> Hi vadim, based upper results, if we remain need file a new bug for it?

It looks like something is wrong on the qemu side. I wouldn't open a new 
bug then. Just keep the QMP and qemu monitor cases as additional indications of
the problem.
I also wonder whether QMP is slow to respond to 'system_powerdown' only, or whether it 
is a generic problem for all/most QMP/qemu monitor commands?

All the best,
Vadim.

> 
> > Best Regards~
> > Peixiu

Comment 15 Peixiu Hou 2019-05-23 06:11:07 UTC
(In reply to Vadim Rozenfeld from comment #14)
> (In reply to Peixiu Hou from comment #13)
> > (In reply to Peixiu Hou from comment #12)
> > > Hi vadim,
> > > 
> > > I reproduced it with system_powerdown from qemu monitor, during iometer
> > > running slowly, system_powerdown from qemu monitor also cannot shutdown the
> > > vm.
> > > 
> > > here is a summary I tried:
> > > 
> > > scenario 1: during iometer running slowly "shutdown -s -t 0" from VM cmd
> > > prompt, shutdown the vm smoothly.
> > > scenario 2: during iometer running slowly {'execute': 'system_powerdown',
> > > 'id': 'Ey1yNefY'} from qmp, cannot shutdown the vm.
> > > scenario 3: during iometer running slowly do system_powerdown from qemu
> > > monitor, cannot shutdown the vm.
> > > scenario 4: during iometer running slowly do ping vm from host successfully.
> > > 
> > > 
> > Hi vadim, based upper results, if we remain need file a new bug for it?
> 
> It looks like something is wrong at qemu side. I wouldn't open a new 
> bug then. Just keep qmp amd qemu monitor cases as additional indications of
> the problem.
> I also wonder if qmp is slow on response on 'system_powerdown' only or it 
> is a generic problem for all/most qmp/qemu monitor commands?
> 

Hi vadim,

I tried to reproduce this issue with "system_powerdown" and "system_reset" from QMP; the results are as follows:

1. Ran the job 30 times, issuing system_powerdown from QMP while iometer was running: reproduced this issue 17 times; whenever the iometer slow run reproduced, this issue occurred 100% of the time.
2. Ran the job 30 times, issuing system_reset from QMP while iometer was running: did not hit this issue. Checking the log, the iometer slow run problem also occurred, but system_reset was responded to in the guest immediately. 

Thanks~
Peixiu

> All the best,
> Vadim.
> 
> > 
> > > Best Regards~
> > > Peixiu

Comment 16 Peixiu Hou 2019-07-16 10:31:04 UTC
Also hit the comment#0 issue on win7-64 guest, tested on rhel7.7 host.

Used versions:
virtio-win-prewhql-172
kernel-3.10.0-1057.el7
qemu-kvm-rhev-2.12.0-33.el7
seabios-1.11.0-2.el7

Best Regards~
Peixiu

Comment 17 xiagao 2019-08-21 03:17:53 UTC
Hit this issue on win7-32

version:
kernel-4.18.0-129.el8.x86_64
qemu-kvm-4.0.0-6.module+el8.1.0+3736+a2aefea3.x86_64
virtio-win-prewhql-173
seabios-1.12.0-4.module+el8.1.0+3876+ec1667b7.x86_64

Comment 18 Yu Wang 2019-11-05 03:12:50 UTC
Hit this issue on win7-32

version:

kernel-4.18.0-147.el8.x86_64
qemu-kvm-4.1.0-14.module+el8.1.0+4548+ed1300f4.x86_64
virtio-win-prewhql-172
seabios-1.12.0-5.module+el8.1.0+4022+29a53beb.x86_64

Comment 19 xiagao 2019-11-05 06:41:40 UTC
(In reply to Yu Wang from comment #18)
> Hit this issue on win7-32
> 
> version:
> 
> kernel-4.18.0-147.el8.x86_64
> qemu-kvm-4.1.0-14.module+el8.1.0+4548+ed1300f4.x86_64
> virtio-win-prewhql-172
> seabios-1.12.0-5.module+el8.1.0+4022+29a53beb.x86_64

Also hit it on win8.1-64 in the same version.

Comment 20 Vadim Rozenfeld 2019-11-05 07:57:45 UTC
(In reply to xiagao from comment #19)
> (In reply to Yu Wang from comment #18)
> > Hit this issue on win7-32
> > 
> > version:
> > 
> > kernel-4.18.0-147.el8.x86_64
> > qemu-kvm-4.1.0-14.module+el8.1.0+4548+ed1300f4.x86_64
> > virtio-win-prewhql-172
> > seabios-1.12.0-5.module+el8.1.0+4022+29a53beb.x86_64
> 
> Also hit it on win8.1-64 in the same version.

Can you please upload "slow" and "normal" result files?
I would like to see the difference between them.

Thanks,
Vadim

Comment 21 Peixiu Hou 2019-11-13 01:52:23 UTC
Created attachment 1635565 [details]
iometer result file for slow one(9520 s)

Comment 22 Peixiu Hou 2019-11-13 01:52:59 UTC
Created attachment 1635566 [details]
iometer result file for fast one(348 s)

Comment 23 Peixiu Hou 2019-11-13 01:56:19 UTC
(In reply to Vadim Rozenfeld from comment #20)
> (In reply to xiagao from comment #19)
> > (In reply to Yu Wang from comment #18)
> > > Hit this issue on win7-32
> > > 
> > > version:
> > > 
> > > kernel-4.18.0-147.el8.x86_64
> > > qemu-kvm-4.1.0-14.module+el8.1.0+4548+ed1300f4.x86_64
> > > virtio-win-prewhql-172
> > > seabios-1.12.0-5.module+el8.1.0+4022+29a53beb.x86_64
> > 
> > Also hit it on win8.1-64 in the same version.
> 
> Can you please upload "slow" and "normal" result files?
> I would like to see the difference bitween them.
> 
Hi Vadim,

I added 2 iometer result files as attachments~
One for a slow run, finishing iometer in 9520 s, and one for a normal run, finishing in 348 s.

Thanks~
Peixiu

Comment 24 Vadim Rozenfeld 2019-11-14 02:25:43 UTC
Thanks a lot, Peixiu Hou

Could you let me know exactly which two tests you are mentioning?
One of the files has two tests inside and the other combines
around fifty different runs. It would be great if you could point me to the
timestamps or offsets of the case where the problem happens.

All the best,
Vadim.

Comment 26 Vadim Rozenfeld 2019-11-14 07:19:49 UTC
I see, 
here are the differences 

                             fast         slow
Write MBps                184.073189    109.283971
Transactions per Second 23561.368181  13988.348228
Average Response Time       0.527282     18.299974
Maximum Response Time       3.038314     89.562067
Bytes Written              386981888 1026528395264
Write I/Os                     47239     125308642
% CPU Utilization          17.003341      4.997737
% User Time                 1.649476     10.073912
% Privileged Time           15.86639     17.867497
% DPC Time                  4.437876       5.03771
% Interrupt Time            0.039273      3.013814
Interrupts per Second   34792.699194 600772.805485
CPU Effectiveness        1385.690472   2798.936233
Packets/Second              5.529672      9.431819


There is a huge difference between overall and per-second statistics.
Bytes Written              1026528395264 / 386981888 = 2652.65
Transactions per Second     13988.348228 / 23561.368181   = 0.59


It will be quite interesting to check the System Event Log after a slow
run and see whether any disk I/O related problems are reported.


Best,
Vadim.

Comment 27 Peixiu Hou 2019-11-14 09:19:49 UTC
Hi vadim,

I checked the event logs for a slow run: in the system event log nothing is shown, but in the application event log there is a WMI error logged during the slow run, details as follows:

Event filter with query "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage > 99" could not be reactivated in namespace "//./root/CIMV2" because of error 0x80041003. Events cannot be delivered through this filter until the problem is corrected.

- System 

  - Provider 

   [ Name]  Microsoft-Windows-WMI 
   [ Guid]  {1edeee53-0afe-4609-b846-d8c0b2075b1f} 
   [ EventSourceName]  WinMgmt 
 
  - EventID 10 

   [ Qualifiers]  49152 
 
   Version 0 
 
   Level 2 
 
   Task 0 
 
   Opcode 0 
 
   Keywords 0x80000000000000 
 
  - TimeCreated 

   [ SystemTime]  2019-11-12T19:50:43.000000000Z 
 
   EventRecordID 1275 
 
   Correlation 
 
  - Execution 

   [ ProcessID]  0 
   [ ThreadID]  0 
 
   Channel Application 
 
   Computer WIN-GMF90RKEL1K 
 
   Security 
 

- EventData 

   //./root/CIMV2 
   SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND TargetInstance.LoadPercentage > 99 
   0x80041003 

And I uploaded all event logs from running iometer 50 times; the slow-run logs were generated from 03:48:12 - 06:26:13, you can check them~

Thanks~
Peixiu

Comment 28 Peixiu Hou 2019-11-14 09:20:25 UTC
Created attachment 1636055 [details]
all event files when iometer running 50 times

Comment 29 Vadim Rozenfeld 2019-11-21 02:48:17 UTC
(In reply to Peixiu Hou from comment #28)
> Created attachment 1636055 [details]
> all event files when iometer running 50 times

Thanks.
Unfortunately, you were right, there is nothing suspicious in the log file.
Do you think it is possible to trigger the slow writing event and try to 
collect some kvm related statistics (something like 
"./perf stat -e 'kvm:*' -a sleep 60s" or so) ?

Best regards,
Vadim.

Comment 30 lijin 2020-03-03 08:59:41 UTC
phou,

Could you handle comment#29 pls?

Comment 31 Peixiu Hou 2020-03-04 12:45:29 UTC
(In reply to Vadim Rozenfeld from comment #29)
> (In reply to Peixiu Hou from comment #28)
> > Created attachment 1636055 [details]
> > all event files when iometer running 50 times
> 
> Thanks.
> Unfortunately, you were right, there is nothing suspicious in the log file.
> Do you think it is possible to trigger the slow writing event and try to 
> collect some kvm related statistics (something like 
> "./perf stat -e 'kvm:*' -a sleep 60s" or so) ?
> 

Hi Vadim,

I tried to do this when reproducing the slow run issue: I collected the info with sleep 60s and with sleep 1h while the bug was reproducing, pasted as follows; hope it is useful.

1. root@dell-per440-05 /home/kar # perf stat -e 'kvm:*' -a sleep 60s

 Performance counter stats for 'system wide':

         6,358,287      kvm:kvm_entry                                               
                 0      kvm:kvm_hypercall                                           
           846,721      kvm:kvm_hv_hypercall                                        
         3,392,446      kvm:kvm_pio                                                 
           842,584      kvm:kvm_fast_mmio                                           
                 0      kvm:kvm_cpuid                                               
           449,507      kvm:kvm_apic                                                
         6,358,178      kvm:kvm_exit                                                
           319,564      kvm:kvm_inj_virq                                            
                 0      kvm:kvm_inj_exception                                       
             9,384      kvm:kvm_page_fault                                          
           229,382      kvm:kvm_msr                                                 
           425,012      kvm:kvm_cr                                                  
            11,572      kvm:kvm_pic_set_irq                                         
           220,102      kvm:kvm_apic_ipi                                            
           315,593      kvm:kvm_apic_accept_irq                                     
           315,591      kvm:kvm_eoi                                                 
           306,314      kvm:kvm_pv_eoi                                              
                 0      kvm:kvm_nested_vmrun                                        
                 0      kvm:kvm_nested_intercepts                                   
                 0      kvm:kvm_nested_vmexit                                       
                 0      kvm:kvm_nested_vmexit_inject                                   
                 0      kvm:kvm_nested_intr_vmexit                                   
                 0      kvm:kvm_invlpga                                             
                 0      kvm:kvm_skinit                                              
                 0      kvm:kvm_emulate_insn                                        
                 0      kvm:vcpu_match_mmio                                         
                 0      kvm:kvm_write_tsc_offset                                    
                 0      kvm:kvm_update_master_clock                                   
                 0      kvm:kvm_track_tsc                                           
                 0      kvm:kvm_pml_full                                            
             7,431      kvm:kvm_ple_window_update                                   
                 0      kvm:kvm_pvclock_update                                      
                 0      kvm:kvm_wait_lapic_expire                                   
                 0      kvm:kvm_enter_smm                                           
                 0      kvm:kvm_pi_irte_update                                      
                 0      kvm:kvm_hv_notify_acked_sint                                   
                 0      kvm:kvm_hv_synic_set_irq                                    
                 0      kvm:kvm_hv_synic_send_eoi                                   
                 0      kvm:kvm_hv_synic_set_msr                                    
                 0      kvm:kvm_hv_stimer_set_config                                   
                 0      kvm:kvm_hv_stimer_set_count                                   
                 0      kvm:kvm_hv_stimer_start_periodic                                   
                 0      kvm:kvm_hv_stimer_start_one_shot                                   
                 0      kvm:kvm_hv_stimer_callback                                   
                 0      kvm:kvm_hv_stimer_expiration                                   
                 0      kvm:kvm_hv_stimer_cleanup                                   
                 0      kvm:kvm_avic_incomplete_ipi                                   
                 0      kvm:kvm_avic_unaccelerated_access                                   
           241,305      kvm:kvm_hv_timer_state                                      
           846,437      kvm:kvm_hv_flush_tlb                                        
                 0      kvm:kvm_hv_flush_tlb_ex                                     
                 0      kvm:kvm_hv_send_ipi                                         
                 0      kvm:kvm_hv_send_ipi_ex                                      
                 0      kvm:kvm_pv_tlb_flush                                        
                 0      kvm:kvm_nested_vmenter_failed                                   
         3,383,509      kvm:kvm_userspace_exit                                      
           241,304      kvm:kvm_vcpu_wakeup                                         
            11,569      kvm:kvm_set_irq                                             
            11,569      kvm:kvm_ioapic_set_irq                                      
                 0      kvm:kvm_ioapic_delayed_eoi_inj                                   
             7,141      kvm:kvm_msi_set_irq                                         
             3,864      kvm:kvm_ack_irq                                             
                 0      kvm:kvm_mmio                                                
         6,766,300      kvm:kvm_fpu                                                 
                 0      kvm:kvm_age_page                                            
                 0      kvm:kvm_try_async_get_page                                   
                 0      kvm:kvm_async_pf_doublefault                                   
                 0      kvm:kvm_async_pf_not_present                                   
                 0      kvm:kvm_async_pf_ready                                      
                 0      kvm:kvm_async_pf_completed                                   
            44,860      kvm:kvm_halt_poll_ns                                        

      60.015614150 seconds time elapsed

2. root@dell-per440-05 /home/kar # perf stat -e 'kvm:*' -a sleep 1h

 Performance counter stats for 'system wide':

       374,657,241      kvm:kvm_entry                                               
                 0      kvm:kvm_hypercall                                           
        43,826,947      kvm:kvm_hv_hypercall                                        
       175,740,118      kvm:kvm_pio                                                 
        37,185,470      kvm:kvm_fast_mmio                                           
               126      kvm:kvm_cpuid                                               
        40,665,492      kvm:kvm_apic                                                
       374,657,175      kvm:kvm_exit                                                
        26,462,013      kvm:kvm_inj_virq                                            
             7,518      kvm:kvm_inj_exception                                       
           408,162      kvm:kvm_page_fault                                          
        20,680,269      kvm:kvm_msr                                                 
        39,127,730      kvm:kvm_cr                                                  
           694,272      kvm:kvm_pic_set_irq                                         
        19,985,413      kvm:kvm_apic_ipi                                            
        26,043,579      kvm:kvm_apic_accept_irq                                     
        26,043,480      kvm:kvm_eoi                                                 
        25,348,834      kvm:kvm_pv_eoi                                              
                 0      kvm:kvm_nested_vmrun                                        
                 0      kvm:kvm_nested_intercepts                                   
                 0      kvm:kvm_nested_vmexit                                       
                 0      kvm:kvm_nested_vmexit_inject                                   
                 0      kvm:kvm_nested_intr_vmexit                                   
                 0      kvm:kvm_invlpga                                             
                 0      kvm:kvm_skinit                                              
             7,518      kvm:kvm_emulate_insn                                        
                 0      kvm:vcpu_match_mmio                                         
                 0      kvm:kvm_write_tsc_offset                                    
                 0      kvm:kvm_update_master_clock                                   
                 0      kvm:kvm_track_tsc                                           
                 0      kvm:kvm_pml_full                                            
         8,792,089      kvm:kvm_ple_window_update                                   
                 0      kvm:kvm_pvclock_update                                      
                 0      kvm:kvm_wait_lapic_expire                                   
                 0      kvm:kvm_enter_smm                                           
                 0      kvm:kvm_pi_irte_update                                      
                 0      kvm:kvm_hv_notify_acked_sint                                   
                 0      kvm:kvm_hv_synic_set_irq                                    
                 0      kvm:kvm_hv_synic_send_eoi                                   
                 0      kvm:kvm_hv_synic_set_msr                                    
                 0      kvm:kvm_hv_stimer_set_config                                   
                 0      kvm:kvm_hv_stimer_set_count                                   
                 0      kvm:kvm_hv_stimer_start_periodic                                   
                 0      kvm:kvm_hv_stimer_start_one_shot                                   
                 0      kvm:kvm_hv_stimer_callback                                   
                 0      kvm:kvm_hv_stimer_expiration                                   
                 0      kvm:kvm_hv_stimer_cleanup                                   
                 0      kvm:kvm_avic_incomplete_ipi                                   
                 0      kvm:kvm_avic_unaccelerated_access                                   
        23,326,365      kvm:kvm_hv_timer_state                                      
        43,825,037      kvm:kvm_hv_flush_tlb                                        
                 0      kvm:kvm_hv_flush_tlb_ex                                     
                 0      kvm:kvm_hv_send_ipi                                         
                 0      kvm:kvm_hv_send_ipi_ex                                      
                 0      kvm:kvm_pv_tlb_flush                                        
                 0      kvm:kvm_nested_vmenter_failed                                   
       175,278,780      kvm:kvm_userspace_exit                                      
        23,326,336      kvm:kvm_vcpu_wakeup                                         
           694,266      kvm:kvm_set_irq                                             
           694,266      kvm:kvm_ioapic_set_irq                                      
                 0      kvm:kvm_ioapic_delayed_eoi_inj                                   
           755,908      kvm:kvm_msi_set_irq                                         
           231,934      kvm:kvm_ack_irq                                             
                 0      kvm:kvm_mmio                                                
       350,556,899      kvm:kvm_fpu                                                 
                 0      kvm:kvm_age_page                                            
                 0      kvm:kvm_try_async_get_page                                   
                 0      kvm:kvm_async_pf_doublefault                                   
                 0      kvm:kvm_async_pf_not_present                                   
                 0      kvm:kvm_async_pf_ready                                      
                 0      kvm:kvm_async_pf_completed                                   
         8,368,961      kvm:kvm_halt_poll_ns                                        

    3600.015585224 seconds time elapsed
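
As a quick sanity check, the VM-exit counts above normalize to nearly identical per-second rates in both samples (plain arithmetic on the numbers already shown):

echo '6358178 / 60' | bc -l     # ~105970 kvm_exit/s in the 60 s sample
echo '374657175 / 3600' | bc -l # ~104071 kvm_exit/s over the 1 h sample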

Best Regards~
Peixiu

Comment 33 Vadim Rozenfeld 2020-05-12 07:00:27 UTC
Thank you Peixiu Hou.

Can I ask you to check the system event log file for anything unusual,
like warnings or errors, happening in the disk/storage subsystems when the problem 
appears?

Best,
Vadim.

Comment 34 menli@redhat.com 2020-05-19 12:41:38 UTC
Sorry for replying this late.

I tried to reproduce this issue with automation; I only hit it with the win7-32 guest, which failed with the following error.

 Host_RHEL.m8.u2.product_av.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.sp1.io-github-autotest-qemu.iometer_windows.aio_threads: ERROR: Timeout expired while waiting for shell command to complete: 'cd C:\\Iometer && cmd /c Iometer.exe /c iometer.icf /r C:\\autotest_iometer_result.csv'    (output: '') (1056.68 s)

There is a Warning with Event ID 129 from source vioscsi in the system event log.

build info:
qemu-kvm-4.2.0-21.module+el8.2.1+6586+8b7713b9.x86_64
kernel-4.18.0-193.el8.x86_64
seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64
virtio-win-prewhql-0.1-184.iso


Thanks
Menghuan

Comment 35 xiagao 2020-05-25 09:54:08 UTC
Hit the same issue on Win8-64.

pkg:
virtio-win-prewhql-0.1-185.iso
qemu-kvm-4.2.0-19.module+el8.2.0+6296+6b821950.x86_64
kernel-4.18.0-193.el8.x86_64
seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64

Comment 36 menli@redhat.com 2020-06-18 09:23:40 UTC
Created attachment 1697923 [details]
full automation logs

Comment 38 menli@redhat.com 2020-09-08 07:52:36 UTC
Hit the same issue on Win8-64.

pkg:
virtio-win-1.9.13-1.el8.iso
qemu-kvm-5.1.0-4.module+el8.3.0+7846+ae9b566f.x86_64
kernel-4.18.0-235.el8.x86_64
seabios-1.14.0-1.module+el8.3.0+7638+07cf13d2.x86_64

Comment 41 dehanmeng 2020-11-13 06:58:17 UTC
Hit the same issue on Win8-64 and Win10.i386 

pkg:
kernel-4.18.0-193.el8.x86_64
qemu-kvm-4.2.0-19.module+el8.2.0+6296+6b821950.x86_64
virtio-win-prewhql-190
seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64

Comment 42 dehanmeng 2020-11-30 08:53:48 UTC
Hit the same issue on the Win8-32 guest in the virtio-win-prewhql-191 test.

023-Host_RHEL.m8.u2.product_av.qcow2.virtio_blk.up.virtio_net.Guest.Win8.i386.io-github-autotest-qemu.single_driver_install.with_netkvm.virtio_win_iso_media.q35

pkg:
kernel-4.18.0-193.el8.x86_64
qemu-kvm-4.2.0-19.module+el8.2.0+6296+6b821950.x86_64
virtio-win-prewhql-191
seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64

Comment 43 Vadim Rozenfeld 2020-12-14 01:46:50 UTC
(In reply to menli from comment #34)
> Sorry for reply this late.
> 
> I try to reproduce this issue with auto,only hit this issue with win7-32
> guest and failed as following error.
> 
>  Host_RHEL.m8.u2.product_av.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.
> sp1.io-github-autotest-qemu.iometer_windows.aio_threads: ERROR: Timeout
> expired while waiting for shell command to complete: 'cd C:\\Iometer && cmd
> /c Iometer.exe /c iometer.icf /r C:\\autotest_iometer_result.csv'   
> (output: '') (1056.68 s)
> 
> There seems a Warning Event ID:129 for source vioscsi in system event log.
> 
> build info:
> qemu-kvm-4.2.0-21.module+el8.2.1+6586+8b7713b9.x86_64
> kernel-4.18.0-193.el8.x86_64
> seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64
> virtio-win-prewhql-0.1-184.iso
> 
> 
> Thanks
> Menghuan

Event ID 129 means a "Reset to device" was issued by the miniport to the device. 
https://docs.microsoft.com/en-us/archive/blogs/ntdebugging/understanding-storage-timeouts-and-event-129-errors

Technically it means that the backend was not responsive for more than 30 seconds. I wonder whether the storage 
is backed by a local or a network-located (SAN) file?
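
(One way to check this on the host is to look at the filesystem type holding the backing image; a sketch using the image path from comment 0, where a type such as nfs, cifs, or fuse.glusterfs would indicate network-backed storage:)

# filesystem type of the directory holding the backing image
df -T /home/kvm_autotest_root/images/
stat -f -c %T /home/kvm_autotest_root/images/storage.qcow2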

Thanks,
Vadim.

Comment 44 menli@redhat.com 2020-12-17 01:46:49 UTC
(In reply to Vadim Rozenfeld from comment #43)
> (In reply to menli from comment #34)
> > Sorry for reply this late.
> > 
> > I try to reproduce this issue with auto,only hit this issue with win7-32
> > guest and failed as following error.
> > 
> >  Host_RHEL.m8.u2.product_av.qcow2.virtio_scsi.up.virtio_net.Guest.Win7.i386.
> > sp1.io-github-autotest-qemu.iometer_windows.aio_threads: ERROR: Timeout
> > expired while waiting for shell command to complete: 'cd C:\\Iometer && cmd
> > /c Iometer.exe /c iometer.icf /r C:\\autotest_iometer_result.csv'   
> > (output: '') (1056.68 s)
> > 
> > There seems a Warning Event ID:129 for source vioscsi in system event log.
> > 
> > build info:
> > qemu-kvm-4.2.0-21.module+el8.2.1+6586+8b7713b9.x86_64
> > kernel-4.18.0-193.el8.x86_64
> > seabios-1.13.0-1.module+el8.2.0+5520+4e5817f3.x86_64
> > virtio-win-prewhql-0.1-184.iso
> > 
> > 
> > Thanks
> > Menghuan
> 
> Event ID 129 means "Reset to device" issued by miniport to device. 
> https://docs.microsoft.com/en-us/archive/blogs/ntdebugging/understanding-
> storage-timeouts-and-event-129-errors
> 
> Technically it means that backend was not responsive for more then 30
> seconds. I wonder if the storage is backed 
> by local or network located (SAN) file?
> 
> Thanks,
> Vadim.


The storage is backed by a local file, thanks


Thanks
Menghuan

Comment 46 Peixiu Hou 2021-06-24 07:31:50 UTC
I tried to reproduce this issue on win8.1-32/64 guests. 

Running 20 times on each guest, I also hit this issue on the win8.1-64 guest.
(07/21) repeat6.Host_RHEL.m8.u5.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.Win8.x86_64.1.io-github-autotest-qemu.iometer_windows.aio_threads.q35: ERROR: Timeout expired while waiting for shell command to complete: 'cd C:\\Iometer && cmd /c Iometer.exe /c iometer.icf /r C:\\autotest_iometer_result.csv'    (output: '') (1065.92 s)

Used version:
kernel-4.18.0-311.el8.kpq1.x86_64
qemu-kvm-4.2.0-52.module+el8.5.0+11386+ef5875dd.x86_64
seabios-bin-1.13.0-2.module+el8.3.0+7353+9de0a3cc.noarch
virtio-win-prewhql-202

Based on the above results, I prefer to keep this issue open~

Thanks~
Peixiu

Comment 47 Vadim Rozenfeld 2021-06-24 11:12:49 UTC
(In reply to Peixiu Hou from comment #46)
> I tried to reproduce this issue on win8.1-32/64 guest. 
> 
> Run 20 times on each guest, also hit this issue on win8.1-64 guest.
> (07/21)
> repeat6.Host_RHEL.m8.u5.product_rhel.qcow2.virtio_scsi.up.virtio_net.Guest.
> Win8.x86_64.1.io-github-autotest-qemu.iometer_windows.aio_threads.q35:
> ERROR: Timeout expired while waiting for shell command to complete: 'cd
> C:\\Iometer && cmd /c Iometer.exe /c iometer.icf /r
> C:\\autotest_iometer_result.csv'    (output: '') (1065.92 s)
> 
> Used version:
> kernel-4.18.0-311.el8.kpq1.x86_64
> qemu-kvm-4.2.0-52.module+el8.5.0+11386+ef5875dd.x86_64
> seabios-bin-1.13.0-2.module+el8.3.0+7353+9de0a3cc.noarch
> virtio-win-prewhql-202
> 
> As upper result, I prefer to keep this issue open~
> 

Fully agree with you, Peixiu. 
Let's keep it open. 
Meanwhile, can you please share the iometer.reg file?

All the best,
Vadim.

> Thanks~
> Peixiu

Comment 50 RHEL Program Management 2021-07-12 07:30:10 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 54 Peixiu Hou 2021-12-21 02:27:18 UTC
Hi Vadim, 

This issue is hit not only on the win7 OS, but also on win8.1 and win10 (not 100%).
Could you help extend the stale date for this issue? 

Thanks~
Peixiu

Comment 55 Peixiu Hou 2022-02-20 22:25:47 UTC
Hi Vadim,

Could you help move this bug to the RHEL 9.1 product? Thanks~

Best Regards~
Peixiu

Comment 56 Peixiu Hou 2022-03-31 02:57:08 UTC
Changed the stale date to match the RHEL 9.1 schedule.

Comment 58 RHEL Program Management 2022-10-25 02:27:43 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

