Bug 908277 - Poor performance with gluster volume set command execution
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Pranith Kumar K
Reported: 2013-02-06 05:21 EST by Pranith Kumar K
Modified: 2014-11-11 03:23 EST (History)
Fixed In Version: glusterfs-3.6.0beta1
Doc Type: Bug Fix
Last Closed: 2014-11-11 03:23:31 EST
Type: Bug


Description Pranith Kumar K 2013-02-06 05:21:43 EST
Description of problem:
Comparing the execution time of the same command on master vs. v3.3.0 shows a drastic difference.

With master:
15:45:56 :( ⚡ time gluster volume set r2 performance.read-ahead off
volume set: success

real	0m1.259s
user	0m0.051s
sys	0m0.020s

root - ~ 
15:46:13 :) ⚡ time gluster volume set r2 performance.read-ahead off
volume set: success

real	0m1.310s
user	0m0.052s
sys	0m0.018s

root - ~ 
15:46:40 :) ⚡ time gluster volume set r2 performance.read-ahead off
volume set: success

real	0m1.250s
user	0m0.059s
sys	0m0.019s

With 3.3.0:
root - ~ 
15:49:16 :) ⚡ time gluster volume set r2 performance.read-ahead off
Set volume successful

real	0m0.081s
user	0m0.051s
sys	0m0.016s

root - ~ 
15:49:21 :) ⚡ time gluster volume set r2 performance.read-ahead off
Set volume successful

real	0m0.088s
user	0m0.062s
sys	0m0.016s

root - ~ 
15:49:24 :) ⚡ time gluster volume set r2 performance.read-ahead off
Set volume successful

real	0m0.081s
user	0m0.056s
sys	0m0.011s

Volume information:

root - ~ 
15:49:25 :) ⚡ gluster v i
 
Volume Name: r2
Type: Replicate
Volume ID: 44413bc7-04d3-43f1-a6fa-202a1629cd93
Status: Stopped
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: pranithk-laptop:/home/gfs/r2_0
Brick2: pranithk-laptop:/home/gfs/r2_1
Options Reconfigured:
performance.read-ahead: off
diagnostics.brick-log-level: DEBUG
diagnostics.client-log-level: DEBUG


Comment 1 Pranith Kumar K 2013-02-06 21:50:20 EST
The latency is caused by the O_SYNC writes and fsync calls in glusterd's store code, which are necessary for durability.
Comment 2 Anand Avati 2014-05-01 01:32:09 EDT
REVIEW: http://review.gluster.org/7370 (mgmt/gluster: Use fsync instead of O_SYNC) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)
Comment 3 Pranith Kumar K 2014-05-01 01:50:14 EDT
Performance results with this new change:
Without this change:
[root@localhost ~]# for i in {1..3}; do time gluster volume set r2 performance.read-ahead off; done
volume set: success

real	0m1.897s
user	0m0.233s
sys	0m0.088s
volume set: success

real	0m2.241s
user	0m0.228s
sys	0m0.081s
volume set: success

real	0m2.544s
user	0m0.130s
sys	0m0.047s


With the change:
[root@localhost ~]# for i in {1..3}; do time gluster volume set r2 performance.read-ahead off; done
volume set: success

real	0m0.569s
user	0m0.132s
sys	0m0.038s
volume set: success

real	0m0.485s
user	0m0.130s
sys	0m0.034s
volume set: success

real	0m0.840s
user	0m0.125s
sys	0m0.031s
Comment 4 Anand Avati 2014-05-02 00:08:01 EDT
REVIEW: http://review.gluster.org/7370 (mgmt/gluster: Use fsync instead of O_SYNC) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)
Comment 5 Anand Avati 2014-05-05 16:52:58 EDT
COMMIT: http://review.gluster.org/7370 committed in master by Anand Avati (avati@redhat.com) 
------
commit 3a35f975fceb89c5ae0e8e3e189545f6fceaf6e5
Author: Pranith Kumar K <pkarampu@redhat.com>
Date:   Thu May 1 10:29:54 2014 +0530

    mgmt/gluster: Use fsync instead of O_SYNC
    
    Glusterd opens a temp file with O_SYNC, writes to it, renames it over
    the actual file, and then performs fsync on the parent directory.
    Until the rename happens, syncing the writes to the file can be
    deferred. This patch removes the O_SYNC open of the temp file and
    instead performs fsync on the fd before the rename.
    
    Change-Id: Ie7da161b0daec845c7dcfab4154cc45c2f49d825
    BUG: 908277
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: http://review.gluster.org/7370
    Reviewed-by: Krishnan Parthasarathi <kparthas@redhat.com>
    Reviewed-by: Raghavendra Bhat <raghavendra@redhat.com>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Anand Avati <avati@redhat.com>
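The durable-replace pattern the commit describes (buffered writes plus a single fsync before the rename, rather than an O_SYNC open) can be sketched as follows. This is a minimal standalone illustration, not glusterd's actual code; the function and file names are hypothetical.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Durably replace `path` with `data`: write a temp file with normal
 * buffered I/O, fsync the fd once before the rename (instead of paying
 * an O_SYNC penalty on every write), rename over the target, then
 * fsync the parent directory so the rename itself is persisted. */
static int write_file_atomically(const char *dir, const char *path,
                                 const char *tmppath, const char *data)
{
    int fd = open(tmppath, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    ssize_t len = (ssize_t)strlen(data);
    /* One fsync at the end replaces per-write O_SYNC syncing. */
    if (write(fd, data, (size_t)len) != len || fsync(fd) != 0) {
        close(fd);
        unlink(tmppath);
        return -1;
    }
    close(fd);

    if (rename(tmppath, path) != 0)
        return -1;

    /* fsync the parent directory so the directory entry survives a crash. */
    int dfd = open(dir, O_RDONLY | O_DIRECTORY);
    if (dfd < 0)
        return -1;
    int ret = fsync(dfd);
    close(dfd);
    return ret;
}
```

Deferring the sync is safe here because readers only ever see the file through its final name: until the rename, the temp file's durability does not matter, so one fsync immediately before the rename gives the same crash-consistency guarantee at a fraction of the cost.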
Comment 6 Niels de Vos 2014-09-22 08:31:24 EDT
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify whether this release resolves the bug for you. If the glusterfs-3.6.0beta1 release does not resolve this issue, leave a comment on this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
Comment 7 Niels de Vos 2014-11-11 03:23:31 EST
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users
