Bug 1930320

Summary: virtio-blk with iothreads can be significantly slower than without
Product: Red Hat Enterprise Linux 9
Component: qemu-kvm
Sub component: virtio-blk,scsi
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Version: unspecified
Keywords: Triaged
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Reporter: Stefan Hajnoczi <stefanha>
Assignee: Stefan Hajnoczi <stefanha>
QA Contact: Tingting Mao <timao>
Docs Contact:
CC: anton.wd, chayang, coli, jen, jinzhao, juzhang, kkiwi, ldoktor, qinwang, virt-maint, xuwei, yama
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-08-09 15:01:38 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1827722

Description Stefan Hajnoczi 2021-02-18 16:50:39 UTC
Description of problem:
In some cases virtio-blk with iothreads is ~30% slower than without. These cases need to be investigated because enabling iothreads is supposed to improve performance across the board.

One example is randwrite,bs=64kb,iodepth=1 with 16 threads.

http://kvm-perf.englab.nay.redhat.com/results/regression/multiqueue_rhel8.4.0/dataplane/4queues/raw.virtio_blk.*.x86_64.html
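The failing workload named above can be approximated with a fio job along these lines (a sketch, not the exact benchmark configuration; the device path, ioengine, and runtime are assumptions):

```ini
; Approximation of the regression case: random writes, 64k blocks,
; queue depth 1, 16 threads against the guest virtio-blk device.
[randwrite-64k]
rw=randwrite
bs=64k
iodepth=1
numjobs=16
thread
direct=1
ioengine=libaio
filename=/dev/vdb   ; hypothetical guest virtio-blk device
runtime=60
time_based
group_reporting
```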

Version-Release number of selected component (if applicable):
qemu-kvm-5.2.0-1.module+el8.4.0+9091+650b220a

How reproducible:


Steps to Reproduce:
See benchmark details above.
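For reference, the iothread configuration under test attaches a dedicated IOThread to the virtio-blk device. A minimal QEMU command-line sketch (the image path, node name, and IDs are hypothetical; the full guest configuration used by the benchmark is in the link above):

```shell
qemu-system-x86_64 \
  -object iothread,id=iothread0 \
  -blockdev node-name=disk0,driver=raw,file.driver=file,file.filename=/path/to/disk.img \
  -device virtio-blk-pci,drive=disk0,iothread=iothread0
```

Omitting the iothread= property (and the -object iothread) gives the baseline configuration that the iothread case is compared against.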

Actual results:
~30% slower with iothreads in some cases.

Expected results:
Performance with iothreads should be similar to or faster than performance without iothreads.


Additional info:

Comment 1 John Ferlan 2021-09-09 11:27:16 UTC
Bulk update: Move RHEL-AV bugs to RHEL9. If it is necessary to resolve this in RHEL8, clone it to the current RHEL8 release.

Comment 2 Klaus Heinrich Kiwi 2022-01-19 16:55:00 UTC
Stefan,
 
Do you think this one is doable for RHEL9 investigation / inclusion?

Comment 3 Stefan Hajnoczi 2022-01-24 17:24:30 UTC
(In reply to Klaus Heinrich Kiwi from comment #2)
> Do you think this one is doable for RHEL9 investigation / inclusion?

I have too many ongoing tasks at the moment. It's possible that some of the optimizations I've been working on will apply to this BZ but I can't commit to it for RHEL9.

Comment 5 Tingting Mao 2022-07-27 07:09:02 UTC
Hi Stefan,

After testing on the latest RHEL 9.1, there is only a small performance degradation (~3%) for several read combinations; for most of them, performance is better after adding the iothread.

Check below link for the details:
http://kvm-perf.englab.nay.redhat.com/results/regression/nvme9.1.0_qemu7.0.0-9_kernel5.14.0-96_iothread/raw.virtio_blk.*.x86_64.html 

Could you please check again? I think we could close this bug as currentrelease then.

Thanks.

Comment 6 Stefan Hajnoczi 2022-08-09 15:01:38 UTC
(In reply to Tingting Mao from comment #5)
> Hi Stefan,
> 
> After tried in latest rhel9.1, there is just a little performance
> degradation(~3%) for several combinations of read, but most of them, the
> performance is better after adding the iothread.
> 
> Check below link for the details:
> http://kvm-perf.englab.nay.redhat.com/results/regression/nvme9.1.0_qemu7.0.0-
> 9_kernel5.14.0-96_iothread/raw.virtio_blk.*.x86_64.html 
> 
> Could you please check again? I think we could close this bug as
> currentrelease then.
> 
> Thanks.

Thanks, I agree performance looks acceptable and we can close this bug.