Bug 999304 - virtio_scsi has bad write performance compared to virtio_blk
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: x86_64 Linux
Priority: high
Severity: medium
Target Milestone: rc
Assigned To: Fam Zheng
QA Contact: Virtualization Bugs
Keywords: Tracking
Depends On: 893327
Blocks: 1172230
Reported: 2013-08-21 02:22 EDT by Xiaomei Gao
Modified: 2015-08-17 11:22 EDT
CC List: 14 users

Doc Type: Bug Fix
Clone Of: 893327
Last Closed: 2014-12-24 23:00:42 EST
Type: Bug

Comment 3 Paolo Bonzini 2013-10-31 09:34:26 EDT
These are different tests from those in the RHEL6 bug 893327.  That bug used iozone, not fio.  It would be useful to compare RHEL6/RHEL7 results for iozone.

Regarding the results in comment 2, the qcow2 results are actually very good; where do you see a 10%-25% gap?

For SSD, virtio-scsi has worse results than virtio-blk in write tests.

For ramdisk, the results are very noisy, and the overall significance is not great except that virtio-scsi consumes more host CPU.
Comment 4 Xiaomei Gao 2013-11-01 03:36:29 EDT
(In reply to Paolo Bonzini from comment #3)
> These are different tests than those in the RHEL6 bug 893327.  That bug was
> using iozone, not fio.  It would be useful to compare RHEL6/RHEL7 results
> for iozone.

Iozone is the test tool we used before; now we use the fio benchmark to test block performance.
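
For reference, our write tests use an fio job along these lines (the parameters here are illustrative, not the exact values from comment 2):

  # Sequential write with direct I/O against the test disk in the guest.
  # Illustrative parameters only; the disk shows up as /dev/vdb under
  # virtio-blk and typically as /dev/sdb under virtio-scsi.
  fio --name=seq-write --filename=/dev/vdb --rw=write --bs=64k \
      --ioengine=libaio --direct=1 --iodepth=32 \
      --runtime=60 --time_based --group_reporting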

> Regarding the results in comment 2, the qcow2 results are actually very
> good, where do you see a 10%-25% gap?

We can see the 10%-25% gap in the "BW/CPU" column.

> For SSD, virtio-scsi has worse results than virtio-blk in write tests.

There is also a performance gap for read tests in the "BW/CPU" column.
Comment 5 Paolo Bonzini 2013-11-04 09:53:26 EST
> Iozone is the test tools that we used before, now we use fio benchmark to test
> related block performance.

Can you run fio for RHEL6 and/or iozone for RHEL7?  Otherwise it's difficult to compare.
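
For the iozone side, a run along these lines on both RHEL6 and RHEL7 would make the numbers directly comparable (file size and record length here are only examples; match whatever the original runs used):

  # Write/rewrite (-i 0) and read/reread (-i 1) passes with direct I/O (-I)
  iozone -i 0 -i 1 -I -r 64k -s 4g -f /mnt/test/iozone.tmp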

> There is also performance gap for read tests in "BW/CPU" aspect.

BW/CPU is not performance; it is performance divided by host CPU usage.  Only MB/sec and IOPS are performance.
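
To illustrate with made-up numbers: a run doing 200 MB/sec at 50% host CPU scores BW/CPU = 400, while a run doing 240 MB/sec at 80% scores only 300.  The second configuration is faster even though its BW/CPU is lower, so a BW/CPU gap by itself does not demonstrate a throughput regression.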
Comment 6 Xiaomei Gao 2013-11-04 20:28:43 EST
(In reply to Paolo Bonzini from comment #5)
> > Iozone is the test tools that we used before, now we use fio benchmark to test
> > related block performance.
> 
> Can you run fio for RHEL6 and/or iozone for RHEL7?  Otherwise it's difficult
> to compare.

Sure, I am working on it; once the test results come out, I will update the bug.
Comment 10 Xiaomei Gao 2013-11-07 04:42:05 EST
(In reply to Paolo Bonzini from comment #8)

> Is this visible for virtio-blk as well?  Or is it specific to virtio-scsi?

I forgot to mention in comment #9 that there is an existing Bug 966398 tracking that issue.
Comment 11 Paolo Bonzini 2013-11-07 05:40:30 EST
> Sure. But we could still see big IOPS boost with the same guest RHEL7.0 on
> ramdisk storage backend.

Interesting, this means my theory was wrong. :)  It could be due to many factors, perhaps RHEL7 host kernel improvements...

Anyhow, the RHEL7 vs. RHEL6 performance doesn't seem to be very different for virtio-scsi and virtio-blk.  The host CPU usage can be tracked by bug 966398 (thanks for pointing me to it).

The main issue tracked by this bug should be "virtio-scsi has worse results than virtio-blk in write tests".  This is where we have less noise in the ramdisk results, and consistently worse performance in the SSD tests (raw and qcow2).
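
For anyone reproducing the comparison, the two device configurations look roughly like this (a minimal sketch; paths, IDs, and tuning options are placeholders, not the exact command lines used in testing):

  # virtio-blk: the drive attaches directly to a virtio-blk-pci device
  -drive file=/dev/sdb,if=none,id=drive0,format=raw,cache=none,aio=native \
  -device virtio-blk-pci,drive=drive0

  # virtio-scsi: the drive attaches as a SCSI disk behind a virtio-scsi HBA
  -drive file=/dev/sdb,if=none,id=drive0,format=raw,cache=none,aio=native \
  -device virtio-scsi-pci,id=scsi0 \
  -device scsi-hd,bus=scsi0.0,drive=drive0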

Thanks!
