Bug 999304

Summary: virtio_scsi has bad write performance compared to virtio_blk
Product: Red Hat Enterprise Linux 7
Reporter: Xiaomei Gao <xigao>
Component: qemu-kvm
Assignee: Fam Zheng <famz>
Status: CLOSED WONTFIX
QA Contact: Virtualization Bugs <virt-bugs>
Severity: medium
Priority: high
Version: 7.0
CC: areis, bmcclain, bsarathy, hhuang, juzhang, kwolf, lyarwood, michele, michen, mkenneth, pbonzini, rbalakri, virt-maint, wquan
Target Milestone: rc
Keywords: Tracking
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Clone Of: 893327
Type: Bug
Last Closed: 2014-12-25 04:00:42 UTC
Bug Depends On: 893327
Bug Blocks: 1172230

Comment 3 Paolo Bonzini 2013-10-31 13:34:26 UTC
These are different tests than those in the RHEL6 bug 893327.  That bug was using iozone, not fio.  It would be useful to compare RHEL6/RHEL7 results for iozone.

Regarding the results in comment 2, the qcow2 results are actually very good; where do you see a 10%-25% gap?

For SSD, virtio-scsi has worse results than virtio-blk in write tests.

For ramdisk, the results are very noisy, and the overall significance is not great except that virtio-scsi consumes more host CPU.

Comment 4 Xiaomei Gao 2013-11-01 07:36:29 UTC
(In reply to Paolo Bonzini from comment #3)
> These are different tests than those in the RHEL6 bug 893327.  That bug was
> using iozone, not fio.  It would be useful to compare RHEL6/RHEL7 results
> for iozone.

Iozone is the test tool we used before; we now use the fio benchmark to test block performance.
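
For reference, below is a minimal sketch of the kind of direct sequential write job we run with fio, driven from Python so the JSON output can be parsed automatically. The device path, block size, queue depth, and runtime are illustrative placeholders, not the exact parameters used in these runs.

import json
import subprocess

def run_fio_seq_write(device="/dev/sdb", runtime=60):
    """Run a direct (O_DIRECT) sequential write job and return bandwidth in KiB/s."""
    cmd = [
        "fio",
        "--name=seq-write",
        "--filename=" + device,   # placeholder test device
        "--rw=write",
        "--bs=64k",
        "--ioengine=libaio",
        "--iodepth=32",
        "--direct=1",
        "--runtime=" + str(runtime),
        "--time_based",
        "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    data = json.loads(result.stdout)
    # fio reports write bandwidth in KiB/s in its JSON output
    return data["jobs"][0]["write"]["bw"]

if __name__ == "__main__":
    print("sequential write bandwidth: %s KiB/s" % run_fio_seq_write())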

> Regarding the results in comment 2, the qcow2 results are actually very
> good, where do you see a 10%-25% gap?

We can see a 10%-25% gap in the "BW/CPU" column.

> For SSD, virtio-scsi has worse results than virtio-blk in write tests.

There is also a performance gap for the read tests in the "BW/CPU" column.

Comment 5 Paolo Bonzini 2013-11-04 14:53:26 UTC
> Iozone is the test tool we used before; we now use the fio benchmark to test
> block performance.

Can you run fio for RHEL6 and/or iozone for RHEL7?  Otherwise it's difficult to compare.

> There is also performance gap for read tests in "BW/CPU" aspect.

BW/CPU is not performance; it is performance divided by host CPU usage.  Only MB/sec and IOPS are performance.
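
To illustrate the distinction with made-up numbers (not taken from the measurements in this bug): if both devices reach roughly the same bandwidth but virtio-scsi burns more host CPU, BW/CPU drops even though throughput is essentially unchanged.

# Illustrative numbers only, not from the results above.
bw_blk, cpu_blk = 520.0, 40.0     # virtio-blk: MB/s, % host CPU
bw_scsi, cpu_scsi = 515.0, 55.0   # virtio-scsi: MB/s, % host CPU
print(bw_blk / cpu_blk)    # 13.0 MB/s per % CPU
print(bw_scsi / cpu_scsi)  # ~9.4 MB/s per % CPU: lower BW/CPU despite similar MB/s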

Comment 6 Xiaomei Gao 2013-11-05 01:28:43 UTC
(In reply to Paolo Bonzini from comment #5)
> > Iozone is the test tool we used before; we now use the fio benchmark to test
> > block performance.
> 
> Can you run fio for RHEL6 and/or iozone for RHEL7?  Otherwise it's difficult
> to compare.

Sure, I am working on it; once the test results come out, I will update this comment.

Comment 10 Xiaomei Gao 2013-11-07 09:42:05 UTC
(In reply to Paolo Bonzini from comment #8)

> Is this visible for virtio-blk as well?  Or is it specific to virtio-scsi?

Forgot to mention in comment #9 that there is already an existing Bug 966398 to track this issue.

Comment 11 Paolo Bonzini 2013-11-07 10:40:30 UTC
> Sure. But we could still see a big IOPS boost with the same RHEL 7.0 guest on
> the ramdisk storage backend.

Interesting, this means my theory was wrong. :)  It could be due to many factors, perhaps RHEL7 host kernel improvements...

Anyhow, the RHEL7 vs. RHEL6 performance doesn't seem to be very different for virtio-scsi and virtio-blk.  The host CPU usage can be tracked by bug 966398 (thanks for pointing me to it).

The main issue tracked by this bug should be "virtio-scsi has worse results than virtio-blk in write tests".  This is where we have less noise in the ramdisk results, and consistently worse performance in the SSD tests (raw and qcow2).

Thanks!