Bug 963258
| Field | Value |
|---|---|
| Summary | support fuse async dio (FUSE_ASYNC_DIO) in gluster |
| Product | [Community] GlusterFS |
| Component | fuse |
| Reporter | Brian Foster <bfoster> |
| Assignee | Brian Foster <bfoster> |
| Status | CLOSED CURRENTRELEASE |
| Severity | medium |
| Priority | medium |
| Version | mainline |
| CC | gluster-bugs |
| Hardware | All |
| OS | Linux |
| Fixed In Version | glusterfs-3.5.0 |
| Doc Type | Bug Fix |
| Type | Bug |
| Last Closed | 2014-04-17 11:42:21 UTC |
Description (Brian Foster, 2013-05-15 13:59:41 UTC)
The following aio-stress output demonstrates the performance advantage of async DIO on a simple gluster volume (single-brick volume, single client/server VM):

```
aio-stress -s 1024 -r 1024 -b 1 -c 1 -t 1 -O /mnt/file
```

Baseline:

```
dropping io_iter to 1
file size 1024MB, record size 1024KB, depth 64, ios per iteration 1
max io_submit 1, buffer alignment set to 4KB
threads 1 files 1 contexts 1 context offset 2MB verification off
Running single thread version
write on /mnt/file (12.10 MB/s) 1024.00 MB in 84.64s
thread 0 write totals (12.10 MB/s) 1024.00 MB in 84.65s
read on /mnt/file (37.50 MB/s) 1024.00 MB in 27.30s
thread 0 read totals (37.50 MB/s) 1024.00 MB in 27.31s
random write on /mnt/file (11.13 MB/s) 1024.00 MB in 92.00s
thread 0 random write totals (11.13 MB/s) 1024.00 MB in 92.01s
random read on /mnt/file (38.67 MB/s) 1024.00 MB in 26.48s
thread 0 random read totals (38.67 MB/s) 1024.00 MB in 26.48s
```

Async DIO:

```
dropping io_iter to 1
file size 1024MB, record size 1024KB, depth 64, ios per iteration 1
max io_submit 1, buffer alignment set to 4KB
threads 1 files 1 contexts 1 context offset 2MB verification off
Running single thread version
write on /mnt/file (22.09 MB/s) 1024.00 MB in 46.36s
thread 0 write totals (22.08 MB/s) 1024.00 MB in 46.39s
read on /mnt/file (204.34 MB/s) 1024.00 MB in 5.01s
thread 0 read totals (204.29 MB/s) 1024.00 MB in 5.01s
random write on /mnt/file (17.46 MB/s) 1024.00 MB in 58.64s
thread 0 random write totals (17.45 MB/s) 1024.00 MB in 58.67s
random read on /mnt/file (218.00 MB/s) 1024.00 MB in 4.70s
thread 0 random read totals (217.93 MB/s) 1024.00 MB in 4.70s
```

REVIEW: http://review.gluster.org/5014 (mount/fuse: enable fuse real async dio when available) posted (#1) for review on master by Brian Foster (bfoster)

COMMIT: http://review.gluster.org/5014 committed in master by Anand Avati (avati)

```
commit 8a7cda772d34b96c45714160ce4ec3b0c0d5b29b
Author: Brian Foster <bfoster>
Date:   Wed May 15 12:30:07 2013 -0400

    mount/fuse: enable fuse real async dio when available

    fuse has support for optimized async direct I/O handling via the
    FUSE_ASYNC_DIO init flag. Enable FUSE_ASYNC_DIO when advertised by
    fuse.

    performance/write-behind: fix dio hang

    Also fix a hang observed during aio-stress testing due to conflicting
    request handling in write-behind. Overlapping requests are skipped in
    pick_winds and may never continue when the conflicting write in
    progress returns. Add a wb_process_queue() call after a non-wb request
    completes to keep the queue moving.

    BUG: 963258
    Change-Id: Ifba6e8aba7a7790b288a32067706b75f263105d4
    Signed-off-by: Brian Foster <bfoster>
    Reviewed-on: http://review.gluster.org/5014
    Reviewed-by: Anand Avati <avati>
    Tested-by: Gluster Build System <jenkins.com>
```

This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user