Red Hat Bugzilla – Bug 1285644
Huge IO performance impact with aio=threads for file-based disks backed by XFS on SSD
Last modified: 2016-01-29 08:54:39 EST
Description of problem:
IO performance was evaluated with aio=native and aio=threads in different combinations on a RHEL 7.2 VM and a RHEL 7.2 host. IO performance to attached file-based virtual disks backed by XFS (on SSD) is very poor with aio=threads: almost 90% lower than with aio=native.
Storage disk: SSD and HDD
Host filesystem: EXT4, XFS, NFS, and no fs (a logical volume is used instead)
Virtual disk image format: raw and qcow2
Virtual disk image allocation: sparse and preallocated (with multiple preallocation methods)
Number of VMs: 16 VMs (with concurrent operations) and 1 VM
Attach two virtual disks to each VM, one with aio=native and one with aio=threads. Start the fio benchmark with different testing modes (a setup sketch follows below).
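A minimal sketch of such a setup, assuming hypothetical image paths, sizes and preallocation mode (none of these are taken from the actual test environment); note that cache=none (O_DIRECT) is needed for aio=native to take effect:

  # Create two preallocated raw images on the XFS-on-SSD mount
  # (paths and sizes are hypothetical):
  qemu-img create -f raw -o preallocation=falloc /mnt/xfs-ssd/disk-native.img 20G
  qemu-img create -f raw -o preallocation=falloc /mnt/xfs-ssd/disk-threads.img 20G

  # Attach both to the same VM, one per aio mode:
  qemu-system-x86_64 ... \
      -drive file=/mnt/xfs-ssd/disk-native.img,format=raw,cache=none,aio=native \
      -drive file=/mnt/xfs-ssd/disk-threads.img,format=raw,cache=none,aio=threads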
Affected combinations (almost 70 to 90% performance impact):
Image format: both raw and qcow2
Number of VMs: multi-VM (16) and single VM
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Attach two virtual disks to each VM, one with aio=native and one with aio=threads, then run fio with the combinations listed above.
Actual results:
Performance with aio=threads is almost 70 to 90% lower.

Expected results:
aio=threads performance should be slightly lower (around 10-15%) than, or close to, aio=native.
This expected, modest impact is what is observed with HDD (all combinations) and with SSD (ext4, LVM); the large regression appears only with XFS on SSD.
Native vs Threads comparison graphs:
Single VM: Disk image: Preallocated raw
Multi VM: Disk image: Preallocated raw
Single VM: Disk image: qcow2
Multi VM: Disk image: qcow2
Single VM: Disk image: qcow2, preallocation=falloc
Input from the kernel filesystems team is needed here since this is an XFS-specific performance issue. Probably nothing can be done about this in qemu-kvm-rhev (at best a workaround for an XFS kernel issue).
Have you been able to reproduce this with fio? That would make it easier for the filesystems team to look into the issue.
Attached the fio job file which was used to validate.
Will replicate the same environment and run it on the host.
(In reply to Pradeep Kumar Surisetty from comment #4)
> Attached the fio job file which was used to validate.
> Will replicate the same environment and run it on the host.
This fio job snippet isn't a complete job file (e.g. direct=1 and other options are missing).
The host fio jobs need to simulate QEMU's aio=threads and aio=native behavior so that the kernel file systems team can see the big performance difference between the two and investigate.
Can you post the full fio jobs and shell commands to reproduce the performance problem?
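As a starting point, here is a minimal host-side sketch (the directory, size, block size, queue depth and thread count are assumed values, not from the original report). aio=native roughly corresponds to Linux AIO with O_DIRECT, which fio models with ioengine=libaio; aio=threads roughly corresponds to a pool of worker threads issuing blocking preadv/pwritev calls, which ioengine=psync with thread and numjobs approximates:

  # host-aio-compare.fio -- run on the host against the XFS-on-SSD mount
  [global]
  directory=/mnt/xfs-ssd
  size=4g
  bs=4k
  rw=randwrite
  direct=1
  runtime=60
  time_based
  group_reporting

  # Approximates QEMU aio=native: Linux AIO + O_DIRECT
  [native-like]
  ioengine=libaio
  iodepth=32

  # Approximates QEMU aio=threads: blocking I/O from a thread pool;
  # stonewall serializes this job after the first one finishes
  [threads-like]
  stonewall
  ioengine=psync
  thread
  numjobs=8

Running 'fio host-aio-compare.fio' and comparing the per-job IOPS should show whether the gap reproduces without QEMU in the picture.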
1) Attach images which are on an XFS filesystem to the VM.
I added one image with aio=native (vdb) and one image with aio=threads (vdc).
2) Run fio concurrently on 32 VMs, once with native and then with threads
3) Compare IOPS (a sketch of the guest-side run follows below)
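For reference, the guest-side runs looked roughly like this (block size, queue depth and runtime are assumed values). The fio options inside the guest are identical for both disks, since the aio mode only differs on the host side:

  # Inside each guest: identical job against both disks, then compare IOPS
  fio --name=native-disk --filename=/dev/vdb --rw=randwrite --bs=4k \
      --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 \
      --time_based --group_reporting
  fio --name=threads-disk --filename=/dev/vdc --rw=randwrite --bs=4k \
      --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 \
      --time_based --group_reporting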
Previous runs were on the 3.10.0-315.el7.x86_64 kernel.
Now ran again on the 3.10.0-327.4.4.el7.x86_64 kernel.
I see better numbers now; it looks like a regression in 3.10.0-315. Trying to understand what fixed this. Did any XFS fixes go in?
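One way to check is the package changelog (a sketch; the exact installed package NVR is assumed):

  # List XFS-related changelog entries in the newer kernel package:
  rpm -q --changelog kernel-3.10.0-327.4.4.el7 | grep -i xfs | head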
Closing because the issue was fixed.