Bug 999304 - virtio_scsi has bad write performance compared to virtio_blk
Summary: virtio_scsi has bad write performance compared to virtio_blk
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Fam Zheng
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 893327
Blocks: 1172230
 
Reported: 2013-08-21 06:22 UTC by Xiaomei Gao
Modified: 2015-08-17 15:22 UTC
CC: 14 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 893327
Environment:
Last Closed: 2014-12-25 04:00:42 UTC
Target Upstream Version:
Embargoed:



Comment 3 Paolo Bonzini 2013-10-31 13:34:26 UTC
These are different tests than those in the RHEL6 bug 893327.  That bug was using iozone, not fio.  It would be useful to compare RHEL6/RHEL7 results for iozone.

Regarding the results in comment 2, the qcow2 results are actually very good, where do you see a 10%-25% gap?

For SSD, virtio-scsi has worse results than virtio-blk in write tests.

For ramdisk, the results are very noisy, and the overall significance is not great except that virtio-scsi consumes more host CPU.

Comment 4 Xiaomei Gao 2013-11-01 07:36:29 UTC
(In reply to Paolo Bonzini from comment #3)
> These are different tests than those in the RHEL6 bug 893327.  That bug was
> using iozone, not fio.  It would be useful to compare RHEL6/RHEL7 results
> for iozone.

Iozone is the test tool we used before; now we use the fio benchmark to test block performance.
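[Editor's note: for context, a minimal fio job file for the kind of sequential write test discussed here might look like the following sketch. The device path, block size, and queue depth are illustrative placeholders, not the parameters actually used in these tests.]

```ini
; Hypothetical fio job file; /dev/vdb and all sizes are placeholders.
[global]
ioengine=libaio
direct=1
runtime=60
time_based

[seq-write]
rw=write
bs=64k
iodepth=16
filename=/dev/vdb
```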

> Regarding the results in comment 2, the qcow2 results are actually very
> good, where do you see a 10%-25% gap?

We could see a 10%-25% gap in the "BW/CPU" column.

> For SSD, virtio-scsi has worse results than virtio-blk in write tests.

There is also a performance gap for read tests in the "BW/CPU" metric.

Comment 5 Paolo Bonzini 2013-11-04 14:53:26 UTC
> Iozone is the test tools that we used before, now we use fio benchmark to test
> related block performance.

Can you run fio for RHEL6 and/or iozone for RHEL7?  Otherwise it's difficult to compare.

> There is also performance gap for read tests in "BW/CPU" aspect.

BW/CPU is not performance; it is performance divided by host CPU usage.  Only MB/sec and IOPS are performance.
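[Editor's note: the distinction between raw throughput and the BW/CPU efficiency ratio can be made concrete with a small sketch. All numbers below are made up for illustration; they are not results from this bug.]

```python
# Illustrative only: BW/CPU is an efficiency ratio (throughput per unit
# of host CPU), distinct from raw performance (MB/s or IOPS) itself.

def bw_per_cpu(bandwidth_mb_s: float, host_cpu_percent: float) -> float:
    """Efficiency: bandwidth divided by host CPU usage."""
    return bandwidth_mb_s / host_cpu_percent

# Hypothetical results: both devices reach similar raw throughput,
# but one consumes more host CPU to get there.
blk_bw, blk_cpu = 520.0, 40.0    # MB/s, % host CPU (made up)
scsi_bw, scsi_cpu = 510.0, 50.0  # MB/s, % host CPU (made up)

# Raw performance (MB/s) differs by only ~2%...
perf_gap = (blk_bw - scsi_bw) / blk_bw

# ...while the BW/CPU efficiency gap is over 20%.
eff_gap = (bw_per_cpu(blk_bw, blk_cpu) - bw_per_cpu(scsi_bw, scsi_cpu)) \
    / bw_per_cpu(blk_bw, blk_cpu)
```

A large BW/CPU gap with a small MB/s gap indicates a CPU-overhead difference, not a throughput regression.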

Comment 6 Xiaomei Gao 2013-11-05 01:28:43 UTC
(In reply to Paolo Bonzini from comment #5)
> > Iozone is the test tools that we used before, now we use fio benchmark to test
> > related block performance.
> 
> Can you run fio for RHEL6 and/or iozone for RHEL7?  Otherwise it's difficult
> to compare.

Sure, I am working on it; once the test results come out, I will update the comment.

Comment 10 Xiaomei Gao 2013-11-07 09:42:05 UTC
(In reply to Paolo Bonzini from comment #8)

> Is this visible for virtio-blk as well?  Or is it specific to virtio-scsi?

Forgot to mention in comment #9 that there is already Bug 966398 tracking the issue.

Comment 11 Paolo Bonzini 2013-11-07 10:40:30 UTC
> Sure. But we could still see big IOPS boost with the same guest RHEL7.0 on
> ramdisk storage backend.

Interesting, this means my theory was wrong. :)  It could be due to many factors, perhaps RHEL7 host kernel improvements...

Anyhow, the RHEL7 vs. RHEL6 performance doesn't seem to be very different for virtio-scsi and virtio-blk.  The host CPU usage can be tracked by bug 966398 (thanks for pointing me to it).

The main issue tracked by this bug should be "virtio-scsi has worse results than virtio-blk in write tests".  This is where we have less noise in the ramdisk results, and consistently worse performance in the SSD tests (raw and qcow2).

Thanks!

