Bug 1106420

Summary: Enable ioeventfd for virtio-scsi-pci
Product: Red Hat Enterprise Linux 6
Reporter: Fam Zheng <famz>
Component: qemu-kvm
Assignee: Fam Zheng <famz>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: urgent
Docs Contact:
Priority: urgent
Version: 6.6
CC: areis, atheurer, bsarathy, chayang, famz, jen, juzhang, lsoft-mso-pj, michen, mkenneth, moshiro, pbonzini, qzhang, rbalakri, sluo, srao, virt-bugs, virt-maint, xigao, yoguma
Target Milestone: rc
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: qemu-kvm-0.12.1.2-2.433.el6
Doc Type: Bug Fix
Doc Text:
This update enables ioeventfd in virtio-scsi-pci. This allows QEMU to process I/O requests outside of the vCPU thread, reducing the latency of submitting requests and improving single task throughput.
Story Points: ---
Clone Of:
Clones: 1123271 (view as bug list)
Environment:
Last Closed: 2014-10-14 07:01:34 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Bug Depends On:
Bug Blocks: 1011600, 893327, 1123271

Description Fam Zheng 2014-06-09 09:58:57 UTC
ioeventfd reduces the burden on the vCPU thread for I/O request processing; we should enable it for virtio-scsi. It helps with single-task throughput.
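For reference, ioeventfd can be toggled per device on the QEMU command line. The sketch below is a hypothetical invocation (disk image path, memory, and CPU counts are placeholders, not taken from this BZ); with the fix, ioeventfd defaults to on for virtio-scsi-pci, but the property can still be set explicitly:

```shell
# Hypothetical guest invocation; paths and sizes are placeholders.
# ioeventfd=on lets the host kernel signal QEMU's I/O thread directly,
# so request submission no longer blocks in the vCPU thread.
qemu-kvm \
    -m 4096 -smp 4 \
    -device virtio-scsi-pci,id=scsi0,ioeventfd=on \
    -drive file=/path/to/disk.img,if=none,format=raw,id=drive0 \
    -device scsi-hd,bus=scsi0.0,drive=drive0
```

Setting `ioeventfd=off` on the same controller reverts to the old behavior, which is useful for A/B comparisons like the ones in this BZ.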

Testing a virtio-scsi disk backed by an SSD partition, before and after enabling ioeventfd:

(fio job=1 iodepth=32 bs=4k)

config          rw         bw (MB/s)  IOPS       latency (us)
-------------------------------------------------------------
Before          read       112        28772      1110
Before          write      108        27730      1152

After           read       196        50300      634
After           write      169        43444      734
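The measurement above can be reproduced with an fio command along these lines. This is a sketch of the parameters stated in the comment (job=1, iodepth=32, bs=4k); the device path, runtime, and ioengine are assumptions, not given in this BZ:

```shell
# Sketch of the fio run implied by "fio job=1 iodepth=32 bs=4k".
# /dev/sdb stands in for the virtio-scsi disk inside the guest.
fio --name=ioeventfd-test \
    --filename=/dev/sdb \
    --direct=1 --ioengine=libaio \
    --rw=read --bs=4k --iodepth=32 --numjobs=1 \
    --runtime=60 --time_based
```

Repeat with `--rw=write` for the write row, once with ioeventfd off and once with it on.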

Comment 4 Sanjay Rao 2014-06-09 12:32:10 UTC
Yes. I will run it on my system and post the results in this BZ.

Comment 5 Sanjay Rao 2014-06-09 16:46:13 UTC
I have tested the fix on my VMs, where I saw a similar improvement.

Here is the output from the host showing the versions of qemu-kvm and qemu-img:

[root@perf92 ~]# rpm -qa |grep qemu
qemu-img-rhev-0.12.1.2-2.427.el6.test.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.427.el6.test.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
qemu-kvm-rhev-0.12.1.2-2.427.el6.test.x86_64

The data below shows the improvement in the virtio-scsi driver. The comparison covers increasing queue depths (1, 16, 32, 64) with 4k and 16k block sizes, for sequential write and sequential read.


Seq Write
=========
Queue depth     1       16      32      64
4K-before       22690   100041  100689  101356
4K-after        20689   207701  275941  276560

16K-before      76371   376036  366956  382675
16K-after       71765   757915  899679  996745

                                
Seq Read                                
========
Queue depth     1       16      32      64
4K-before       17988   100208  101883  103654
4K-after        17434   195685  255423  283514

16K-before      63068   393665  394656  391945
16K-after       60595   710237  858082  754236
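A queue-depth/block-size sweep like the one above can be scripted with a small loop. This is a hedged sketch, not the actual test harness used; the device path, runtime, and output file naming are assumptions:

```shell
# Sweep block size and queue depth for sequential writes, one fio run each.
# /dev/sdb is a placeholder for the virtio-scsi disk under test.
for bs in 4k 16k; do
    for qd in 1 16 32 64; do
        fio --name=seq-write-${bs}-qd${qd} \
            --filename=/dev/sdb --direct=1 --ioengine=libaio \
            --rw=write --bs=${bs} --iodepth=${qd} --numjobs=1 \
            --runtime=60 --time_based \
            --output=seq-write-${bs}-qd${qd}.log
    done
done
```

Swapping `--rw=write` for `--rw=read` produces the sequential-read matrix.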

Comment 7 Miroslav Rezanina 2014-07-04 07:10:02 UTC
Fix included in qemu-kvm-0.12.1.2-2.429.el6

Comment 14 Paolo Bonzini 2014-07-24 10:13:11 UTC
*** Bug 1121054 has been marked as a duplicate of this bug. ***

Comment 33 Jeff Nelson 2014-07-31 22:16:25 UTC
Fixed in qemu-kvm-0.12.1.2-2.433.el6

The patch that fixes BZ#1123698 did not reference this BZ, so adding this
by hand.

Comment 34 Fam Zheng 2014-08-01 00:36:23 UTC
Thanks, Jeff!

Comment 36 Qunfang Zhang 2014-08-18 01:07:27 UTC
*** Bug 916418 has been marked as a duplicate of this bug. ***

Comment 41 errata-xmlrpc 2014-10-14 07:01:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1490.html