Bug 1140521
| Summary: | [performance] virtio-blk performance degradation happened with virtio-serial | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Sibiao Luo <sluo> |
| Component: | qemu-kvm-rhev | Assignee: | Fam Zheng <famz> |
| Status: | CLOSED WONTFIX | QA Contact: | Yanhui Ma <yama> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.1 | CC: | amit.shah, chayang, famz, hhuang, huding, jasowang, juzhang, michen, pbonzini, rbalakri, stefanha, virt-maint, wquan, xfu, yama |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| | 1140580, 1140583 (view as bug list) | Environment: | |
| Last Closed: | 2018-01-17 06:16:42 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1140580, 1140583 | | |
| Attachments: | | | |
Description (Sibiao Luo, 2014-09-11 07:45:27 UTC)
1. With the virtio-console device present, fio test report:
# fio -filename /dev/vda -direct=1 -iodepth=1 -thread -rw=write -ioengine=psync -bs=64k -size=30G -numjobs=1 -name=mytest
mytest: (g=0): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=psync, iodepth=1
fio-2.1.7
Starting 1 thread
Jobs: 1 (f=1): [W] [14.0% done] [0KB/644.6MB/0KB /s] [0/10.4K/0 iops] [eta 00m:43s]
mytest: (groupid=0, jobs=1): err= 0: pid=2135: Thu Sep 11 11:33:00 2014
write: io=4096.0MB, bw=642116KB/s, iops=10033, runt= 6532msec
clat (usec): min=72, max=8042, avg=95.19, stdev=38.71
lat (usec): min=74, max=8044, avg=97.95, stdev=38.73
clat percentiles (usec):
| 1.00th=[ 80], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 91],
| 30.00th=[ 92], 40.00th=[ 93], 50.00th=[ 94], 60.00th=[ 95],
| 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 100], 95.00th=[ 106],
| 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 217], 99.95th=[ 237],
| 99.99th=[ 414]
bw (KB /s): min=620800, max=667776, per=100.00%, avg=642185.85, stdev=15698.36
lat (usec) : 100=88.72%, 250=11.25%, 500=0.02%, 1000=0.01%
lat (msec) : 4=0.01%, 10=0.01%
cpu : usr=5.14%, sys=11.77%, ctx=65642, majf=0, minf=6
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=65536/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=4096.0MB, aggrb=642116KB/s, minb=642116KB/s, maxb=642116KB/s, mint=6532msec, maxt=6532msec
Disk stats (read/write):
vda: ios=0/64241, merge=0/0, ticks=0/5052, in_queue=5019, util=77.22%
2. With the virtio-console device removed, fio test report:
# fio -filename /dev/vda -direct=1 -iodepth=1 -thread -rw=write -ioengine=psync -bs=64k -size=30G -numjobs=1 -name=mytest
mytest: (g=0): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=psync, iodepth=1
fio-2.1.7
Starting 1 thread
Jobs: 1 (f=1): [W] [14.6% done] [0KB/763.3MB/0KB /s] [0/12.3K/0 iops] [eta 00m:35s]
mytest: (groupid=0, jobs=1): err= 0: pid=2141: Thu Sep 11 11:33:32 2014
write: io=4096.0MB, bw=783543KB/s, iops=12242, runt= 5353msec
clat (usec): min=61, max=540, avg=77.12, stdev= 4.92
lat (usec): min=64, max=543, avg=79.94, stdev= 4.96
clat percentiles (usec):
| 1.00th=[ 72], 5.00th=[ 75], 10.00th=[ 75], 20.00th=[ 76],
| 30.00th=[ 76], 40.00th=[ 76], 50.00th=[ 77], 60.00th=[ 77],
| 70.00th=[ 77], 80.00th=[ 77], 90.00th=[ 79], 95.00th=[ 81],
| 99.00th=[ 92], 99.50th=[ 105], 99.90th=[ 131], 99.95th=[ 149],
| 99.99th=[ 211]
bw (KB /s): min=779392, max=790144, per=100.00%, avg=783564.80, stdev=3876.90
lat (usec) : 100=99.35%, 250=0.64%, 500=0.01%, 750=0.01%
cpu : usr=5.98%, sys=14.72%, ctx=65650, majf=0, minf=6
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=65536/d=0, short=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=1
Run status group 0 (all jobs):
WRITE: io=4096.0MB, aggrb=783542KB/s, minb=783542KB/s, maxb=783542KB/s, mint=5353msec, maxt=5353msec
Disk stats (read/write):
vda: ios=0/63061, merge=0/0, ticks=0/3940, in_queue=3907, util=74.42%
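Comparing the two runs above, the bandwidth drop can be quantified with a quick calculation (a sketch; the two values are copied from the fio reports above):

```shell
# fio-reported write bandwidth (KB/s) from the two runs above
with_serial=642116      # virtio-console device present
without_serial=783543   # virtio-console device removed
# percentage degradation attributable to the virtio-serial device
awk -v a="$with_serial" -v b="$without_serial" \
    'BEGIN { printf "%.1f%% slower\n", (b - a) * 100 / b }'
```

This works out to roughly an 18% bandwidth loss with virtio-serial present.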
Created attachment 936412 [details]
Output of lspci from a rhel guest.txt
If a local file is used to test virtio-blk I/O performance:

1. virtio_console module installed: 64K sequential write: 104 MB/s, 1672 IOPS
2. virtio_console module uninstalled (modprobe -r virtio_console): 64K sequential write: 109 MB/s, 1759 IOPS

Best Regards,
sluo

Upstream report: https://www.mail-archive.com/kvm@vger.kernel.org/msg107248.html

Also tried virtio-nic performance; it is not hit by this issue.

1. virtio_console module installed:
# netperf -D 1 -H 10.66.11.154 -l 67.5 -C -c -t TCP_MAERTS -- -m 64
Sorry, Demo Mode not configured into this netperf.
Please consider reconfiguring netperf with --enable-demo=yes and recompiling
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.66.11.154 () port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Recv     Send     Recv    Send
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
87380  16384   64       67.00    28194.75    100.00   37.12    0.291   0.863

2. virtio_console module uninstalled (modprobe -r virtio_console):
# netperf -D 1 -H 10.66.11.154 -l 67.5 -C -c -t TCP_MAERTS -- -m 64
Sorry, Demo Mode not configured into this netperf.
Please consider reconfiguring netperf with --enable-demo=yes and recompiling
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.66.11.154 () port 0 AF_INET
Recv   Send    Send                          Utilization       Service Demand
Socket Socket  Message  Elapsed              Recv     Send     Recv    Send
Size   Size    Size     Time     Throughput  local    remote   local   remote
bytes  bytes   bytes    secs.    10^6bits/s  % S      % S      us/KB   us/KB
87380  16384   64       67.00    26613.94    100.00   39.98    0.308   0.985

Best Regards,
sluo

The upstream WIP patch series "[PATCH v2 0/2] main-loop: Use epoll on Linux" will fix this, but the series still needs improvements to its implementation.
Fam

(In reply to Sibiao Luo from comment #5)
> Also tried virtio-nic performance; it is not hit by this issue.
> [full netperf output quoted above]

You probably need to disable vhost_net and retry.

virtio-blk dataplane has no such issue. Fixes for the non-dataplane path are not ready upstream; this bug will be revisited for 7.3.

epoll support was merged in QEMU 2.5, which can mitigate this performance degradation.

The chance is low (but still possible) that this can be fixed in the 7.4 time frame; deferring to 7.5 unless I get lucky with it upstream later.

Virtio performance work is focused on dataplane. Closing this one now since dataplane will not suffer from this problem (virtio notifier fds are processed in the main loop, while iothreads have adaptive epoll).
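For comparison, the same kind of calculation applied to the netperf throughput values above shows that the network path moved in the opposite direction (throughput was actually slightly higher with virtio_console loaded), which supports the conclusion that virtio-nic is not hit by this issue. A sketch, with the values copied from the two netperf runs:

```shell
# netperf TCP_MAERTS throughput (10^6 bits/s) from the two runs above
with_console=28194.75    # virtio_console loaded
without_console=26613.94 # virtio_console unloaded
# relative difference; positive means the loaded run was faster
awk -v a="$with_console" -v b="$without_console" \
    'BEGIN { printf "%+.1f%%\n", (a - b) * 100 / a }'
```

The roughly 5.6% swing in favor of the loaded run is nowhere near the ~18% loss seen on the block path.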