Bug 1285644 - Huge IO performance impact with aio=threads for file-based disks backed by XFS on SSD
Summary: Huge IO performance impact with aio=threads for file-based disks backed by XFS on SSD
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Stefan Hajnoczi
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-11-26 07:13 UTC by Pradeep Kumar Surisetty
Modified: 2016-01-29 13:54 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-01-29 13:54:39 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Pradeep Kumar Surisetty 2015-11-26 07:13:18 UTC
Description of problem:

IO performance was evaluated with aio=native and aio=threads in different combinations on a RHEL 7.2 VM and a RHEL 7.2 host. IO performance to attached file-based virtual disks backed by XFS (on SSD) is very poor with aio=threads: almost 90% lower than with aio=native.

Combinations validated: 

         Storage disk: SSD and HDD
         File system:
              EXT4, XFS, NFS, and no fs (a logical volume is used instead)
         Virtual disk image format: raw and QCOW2
         Virtual disk image allocation: 
              sparse and preallocated (with multiple preallocation methods)
         VMs: 16 VMs (with concurrent operations), 1 VM

How:

Attach two virtual disks to each VM: one with aio=native, one with aio=threads (see the example QEMU drive options after the list below).
Start the fio benchmark in different testing modes:
        sequential read
        sequential write
        random read
        random write
        random read/write
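
As an illustration, the two virtual disks could be attached with QEMU drive options along these lines (a minimal sketch, not the exact command line used for these tests; the image paths, interface, and cache setting are assumptions):

    # raw image on the XFS-on-SSD filesystem, using Linux-native AIO
    -drive file=/ssd-xfs/disk-native.raw,format=raw,if=virtio,cache=none,aio=native
    # same backing setup, using the thread-pool AIO implementation
    -drive file=/ssd-xfs/disk-threads.raw,format=raw,if=virtio,cache=none,aio=threads

cache=none is assumed here because aio=native requires O_DIRECT access to the image file.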

Affected combinations (roughly 70-90% performance impact):

         Storage: SSD
         FS: XFS
         Images: both raw and qcow2
         Number of VMs: multi-VM (16) and single VM



Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Attach two virtual disks to each VM: one with aio=native, one with aio=threads.
2. Run fio with the combinations mentioned above.
3. Compare the results for the two disks.

Actual results:

Performance with aio=threads is roughly 70-90% lower than with aio=native.

Expected results:

aio=threads performance should be only slightly lower (around 10-15%) than, or close to, aio=native.

Additional info:

Comment 1 Pradeep Kumar Surisetty 2015-11-26 07:14:34 UTC
This impact is noticed with HDD (all combinations) and with SSD (ext4, lvm).

Comment 3 Stefan Hajnoczi 2015-11-26 08:35:41 UTC
Input from the kernel filesystems team is needed here since this is an XFS-specific performance issue.  Probably nothing can be done about this in qemu-kvm-rhev (at best a workaround for an XFS kernel issue).

Have you been able to reproduce this with fio?  That would make it easier for the filesystems team to look into the issue.

Comment 4 Pradeep Kumar Surisetty 2015-11-26 14:34:19 UTC
fio job file which was used to validate

iodepth=1
ioengine="sync"
file_size="4096M"
job_mode="concurrent"

Will replicate same environment and run host.

Comment 5 Stefan Hajnoczi 2016-01-04 05:12:19 UTC
(In reply to Pradeep Kumar Surisetty from comment #4)
> fio job file which was used to validate
> 
> iodepth=1
> ioengine="sync"
> file_size="4096M"
> job_mode="concurrent"
> 
> Will replicate same environment and run host.

This fio job snippet isn't a complete job file (e.g. direct=1 and other options are missing).

The host fio jobs need to simulate QEMU's aio=threads and aio=native behavior so that the kernel file systems team can see the big performance difference between the two and investigate.

Can you post the full fio jobs and shell commands to reproduce the performance problem?
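
For reference, a host-side fio job file along the lines described above might look like the following sketch (not the reporter's actual job; the mount point, block size, and job count are assumptions). The libaio job approximates aio=native, which submits asynchronous I/O directly to the kernel, while the psync job with multiple workers approximates the synchronous I/O issued by the aio=threads worker-thread pool:

    [global]
    ; XFS mount point on the SSD (assumed path)
    directory=/mnt/ssd-xfs
    ; O_DIRECT, matching cache=none on the QEMU side
    direct=1
    size=4096M
    bs=4k
    rw=randwrite
    ; run the jobs one after the other rather than concurrently
    stonewall

    ; approximates aio=native (Linux-native AIO via io_submit)
    [native-like]
    ioengine=libaio
    iodepth=1

    ; approximates aio=threads (a pool of workers doing synchronous I/O)
    [threads-like]
    ioengine=psync
    numjobs=16

Running this job file with fio on the host and comparing the IOPS of the two jobs should show whether the gap is visible outside of QEMU.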

Comment 7 Pradeep Kumar Surisetty 2016-01-04 12:56:57 UTC
Steps:

1) Attach images that reside on an XFS filesystem to the VM.
    I added one image with aio=native (vdb)
    and one image with aio=threads (vdc).
2) Run fio concurrently on 32 VMs, once with the aio=native disk and then with the aio=threads disk (a sketch of the in-guest runs follows below).

3) Compare IOPS
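
A minimal sketch of the in-guest runs these steps imply, with device names as in step 1 (the run mode, block size, and size are assumptions):

    # exercise the aio=native disk, then the aio=threads disk
    fio --name=native-vdb --filename=/dev/vdb --direct=1 --ioengine=sync --iodepth=1 \
        --rw=randread --bs=4k --size=4096M
    fio --name=threads-vdc --filename=/dev/vdc --direct=1 --ioengine=sync --iodepth=1 \
        --rw=randread --bs=4k --size=4096M

The IOPS reported for the two runs are then compared across the 32 VMs.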

Comment 8 Pradeep Kumar Surisetty 2016-01-14 11:29:30 UTC
Previous runs were on the 3.10.0-315.el7.x86_64 kernel.
Now ran again on the 3.10.0-327.4.4.el7.x86_64 kernel.

I see better numbers now. Looks like a regression in 3.10.0-315. Trying to understand what fixed it; did any XFS fixes go in?

Comment 9 Stefan Hajnoczi 2016-01-29 13:54:39 UTC
Closing because the issue was fixed.

