Bug 1009134 - sequential read performance not optimized for libgfapi
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: libgfapi
Version: mainline
Hardware: Unspecified, OS: Unspecified
Priority: high, Severity: high
Assigned To: GlusterFS Bugs list
QA Contact: Sudhir D
Depends On: 1007866
Blocks:
 
Reported: 2013-09-17 14:39 EDT by Anand Avati
Modified: 2015-09-01 19:06 EDT (History)
CC List: 8 users

See Also:
Fixed In Version: glusterfs-3.4.3
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1007866
Environment:
Last Closed: 2014-04-17 09:14:32 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
wquan: needinfo-


Attachments

  None
Comment 1 Anand Avati 2013-09-17 14:41:09 EDT
REVIEW: http://review.gluster.org/5897 (gfapi: use native STACK_WIND for read _async() calls) posted (#4) for review on master by Anand Avati (avati@redhat.com)
Comment 2 Anand Avati 2013-09-17 14:42:27 EDT
COMMIT: http://review.gluster.org/5897 committed in master by Anand Avati (avati@redhat.com) 
------
commit 8eb3898578a4fe359934da57c0e51cfaa21685d4
Author: Anand Avati <avati@redhat.com>
Date:   Wed Sep 11 00:49:57 2013 -0700

    gfapi: use native STACK_WIND for read _async() calls
    
    There is little value in using synctask wrappers for async IO
    requests, as STACK_WIND is asynchronous by nature already.
    
    Skip going through synctask for read/write async calls.
    
    Change-Id: Ifde331d7c97e0f33426da6ef4377c5ba70dddc06
    BUG: 1009134
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/5897
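
The commit above stops routing the read/write _async() calls in gfapi through a synctask and issues them with STACK_WIND directly. As a rough, standalone illustration of why the wrapper adds no value (this is not GlusterFS source; fake_async_read() and the other names are hypothetical), the C sketch below models dispatching an already callback-based operation either through an extra worker thread or directly:

/*
 * Illustrative sketch only -- not GlusterFS code. It models the idea in the
 * commit message: if an operation is already asynchronous (callback-based),
 * pushing it through an extra synchronous-task wrapper only adds a thread
 * hop without any benefit.
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

typedef void (*read_cbk_t)(int ret, void *cookie);

struct read_req {
    read_cbk_t cbk;
    void      *cookie;
};

/* Stand-in for the already-asynchronous lower layer (the role STACK_WIND
 * plays in gfapi): completion happens on another thread and the callback
 * is invoked directly. */
static void *lower_layer(void *arg)
{
    struct read_req *req = arg;

    req->cbk(4096, req->cookie);   /* pretend 4096 bytes were read */
    free(req);
    return NULL;
}

static void fake_async_read(read_cbk_t cbk, void *cookie)
{
    struct read_req *req = malloc(sizeof(*req));
    pthread_t tid;

    req->cbk = cbk;
    req->cookie = cookie;
    pthread_create(&tid, NULL, lower_layer, req);
    pthread_detach(tid);
}

static void read_done(int ret, void *cookie)
{
    printf("async read completed, ret=%d (%s path)\n", ret, (char *)cookie);
}

/* Old shape (modelled): an extra worker thread whose only job is to issue
 * the async call -- a needless hop, since the call does not block anyway. */
static void *synctask_like_wrapper(void *arg)
{
    (void)arg;
    fake_async_read(read_done, "wrapped");
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* Before the patch (modelled): dispatch through a wrapper task. */
    pthread_create(&tid, NULL, synctask_like_wrapper, NULL);
    pthread_join(tid, NULL);

    /* After the patch (modelled): issue the async call directly. */
    fake_async_read(read_done, "direct");

    sleep(1);   /* crude wait for the detached completion threads */
    return 0;
}

Compiled with gcc -pthread, both paths end in the same callback; the "wrapped" path just pays for an extra thread switch per request, which is the per-request overhead the patch removes on the sequential read path.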
Comment 4 Ben England 2013-11-08 13:24:55 EST
Did this change get committed to nightly builds for RHS 2.1U2? If so, when? This result looks like single-thread read performance before the patch (and after the stat-prefetch=on tuning was applied):

[root@gprfc093 ~]# rpm -q glusterfs
glusterfs-3.4.0.35.1u2rhs-1.el6rhs.x86_64
[root@gprfc093 ~]# ssh gprfs047 rpm -q glusterfs
glusterfs-3.4.0.35.1u2rhs-1.el6rhs.x86_64
[root@gprfc093 ~]# ssh gprfs048 rpm -q glusterfs

[root@gprfc093 ~]# virsh create /home/kvm_images/gfapi.xml
Domain gfapi created from /home/kvm_images/gfapi.xml

[root@gprfs047 ~]# gluster v i 

Volume Name: benvol
Type: Replicate
Volume ID: acbd0a41-35ee-466a-99bb-07a0cfe7bc81
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gprfs047-10ge:/mnt/b1/brick
Brick2: gprfs048-10ge:/mnt/brick0/brick
Options Reconfigured:
cluster.self-heal-daemon: off
network.remote-dio: enable
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
server.allow-insecure: on
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
performance.stat-prefetch: on

[root@gprfc093 ~]# for n in 7 8 ; do ssh gprfs04$n 'echo 1 > /proc/sys/vm/drop_caches' ; done
[root@gprfc093 ~]# ssh gfapi 'echo 1 > /proc/sys/vm/drop_caches ; iozone -w -c -e -i 1 -+n -r 64k -s 16g -t 1 -F /mnt/vbd-b/f.tmp /mnt/vbd-c/f.tmp /mnt/vbd-d/f.tmp /mnt/vbd-e/f.tmp '

        Children see throughput for  1 readers          =  581969.38 KB/sec
        Parent sees throughput for  1 readers           =  581762.98 KB/sec
Comment 5 Anand Avati 2013-11-26 14:02:10 EST
COMMIT: http://review.gluster.org/6325 committed in release-3.4 by Anand Avati (avati@redhat.com) 
------
commit 24e4b5d12be5d92a4e5c3167372f88cd3dfa720a
Author: Anand Avati <avati@redhat.com>
Date:   Wed Sep 11 00:49:57 2013 -0700

    gfapi: use native STACK_WIND for read _async() calls
    
    There is little value in using synctask wrappers for async IO
    requests, as STACK_WIND is asynchronous by nature already.
    
    Skip going through synctask for read/write async calls.
    
    Change-Id: Ifde331d7c97e0f33426da6ef4377c5ba70dddc06
    BUG: 1009134
    Signed-off-by: Anand Avati <avati@redhat.com>
    Reviewed-on: http://review.gluster.org/6325
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
Comment 6 Niels de Vos 2014-04-17 09:14:32 EDT
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.4.3, please reopen this bug report.

glusterfs-3.4.3 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should already be available or will become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

The fix for this bug is likely to be included in all future GlusterFS releases, i.e. releases > 3.4.3. Along the same lines, the recent glusterfs-3.5.0 release [3] is likely to contain the fix as well. You can verify this by reading the comments in this bug report and checking for comments mentioning "committed in release-3.5".

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/5978
[2] http://news.gmane.org/gmane.comp.file-systems.gluster.user
[3] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
Comment 7 Niels de Vos 2015-05-26 08:34:12 EDT
This bug has been CLOSED, and there has been no response to the requested NEEDINFO in more than 4 weeks. The NEEDINFO flag is now being cleared to keep our Bugzilla housekeeping in order.
