Bug 1324004

Summary: arbiter volume write performance is bad.
Product: [Community] GlusterFS
Component: arbiter
Version: mainline
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Ravishankar N <ravishankar>
Assignee: Ravishankar N <ravishankar>
CC: bugs
Type: Bug
Doc Type: Bug Fix
Fixed In Version: glusterfs-3.8rc2
Cloned to: 1324809 1375125
Bug Blocks: 1324809
Last Closed: 2016-06-16 14:03:04 UTC

Description Ravishankar N 2016-04-05 10:33:12 UTC
Reported by Robert Rauch @ https://bugzilla.redhat.com/show_bug.cgi?id=1309462#c50 and Russel Purinton @ http://www.spinics.net/lists/gluster-users/msg26311.html

Replica-3:
0:root@vm2 glusterfs$ gluster v create testvol replica 3  127.0.0.2:/bricks/brick{1..3} force
volume create: testvol: success: please start the volume to access data
0:root@vm2 glusterfs$ gluster v start testvol
volume start: testvol: success
0:root@vm2 glusterfs$ mount -t glusterfs 127.0.0.2:testvol /mnt/fuse_mnt
0:root@vm2 glusterfs$ cd /mnt/fuse_mnt/
0:root@vm2 fuse_mnt$ dd if=/dev/zero of=file bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 1.87984 s, 55.8 MB/s


Arbiter:
0:root@vm2 ~$ gluster v create testvol replica 3  arbiter 1 127.0.0.2:/bricks/brick{1..3} force
volume create: testvol: success: please start the volume to access data
0:root@vm2 ~$ gluster v start testvol
volume start: testvol: success
0:root@vm2 ~$ mount -t glusterfs 127.0.0.2:testvol /mnt/fuse_mnt
0:root@vm2 ~$ cd /mnt/fuse_mnt/
0:root@vm2 fuse_mnt$ dd if=/dev/zero of=file bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 7.51857 s, 13.9 MB/s

Comment 1 Vijay Bellur 2016-04-05 10:51:37 UTC
REVIEW: http://review.gluster.org/13906 (arbiter: write performance improvement) posted (#1) for review on master by Ravishankar N (ravishankar)

Comment 2 Ravishankar N 2016-04-06 00:46:41 UTC
Note: With the patch applied, here is the throughput I get:

Arbiter:
0:root@vm2 ~$ gluster v create testvol replica 3 arbiter 1 127.0.0.2:/bricks/brick{1..3} force
volume create: testvol: success: please start the volume to access data
0:root@vm2 ~$ gluster v start testvol
volume start: testvol: success
0:root@vm2 ~$ mount -t glusterfs 127.0.0.2:testvol /mnt/fuse_mnt
0:root@vm2 ~$ dd if=/dev/zero of=/mnt/fuse_mnt/file bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 1.25445 s, 83.6 MB/s

Comment 3 Vijay Bellur 2016-04-07 10:38:07 UTC
REVIEW: http://review.gluster.org/13906 (arbiter: write performance improvement) posted (#2) for review on master by Ravishankar N (ravishankar)

Comment 4 Vijay Bellur 2016-04-09 06:12:24 UTC
REVIEW: http://review.gluster.org/13906 (arbiter: write performance improvement) posted (#3) for review on master by Ravishankar N (ravishankar)

Comment 5 Vijay Bellur 2016-04-09 06:27:26 UTC
REVIEW: http://review.gluster.org/13906 (arbiter: write performance improvement) posted (#4) for review on master by Ravishankar N (ravishankar)

Comment 6 Vijay Bellur 2016-04-11 11:32:45 UTC
COMMIT: http://review.gluster.org/13906 committed in master by Pranith Kumar Karampuri (pkarampu) 
------
commit e1004679563ef17c460f83098983baf105655712
Author: Ravishankar N <ravishankar>
Date:   Tue Apr 5 15:16:52 2016 +0530

    arbiter: write performance improvement
    
    Problem: The throughput for a 'dd' workload was much lower for the
    arbiter configuration when compared to a normal replica-3 volume.
    There were two issues:

    i) arbiter_writev was using the request dict as the response dict while
    unwinding, leading to incorrect GLUSTERFS_WRITE_IS_APPEND and
    GLUSTERFS_OPEN_FD_COUNT values (=4), leading to immediate post-ops
    because is_afr_delayed_changelog_post_op_needed() failed due to the
    afr_are_multiple_fds_opened() check.

    ii) The arbiter code in afr was setting local->transaction.{start and len} = 0
    to take full-file locks. This meant that even for simultaneous but
    non-overlapping writevs, afr_transaction_eager_lock_init() did not
    happen because afr_locals_overlap() always stays true. Consequently,
    is_afr_delayed_changelog_post_op_needed() failed due to
    local->delayed_post_op not being set.
    
    Fix:
    i) Send appropriate response dict values in arbiter_writev.
    ii) Modify the flock params instead of local->transaction.{start and len}
    to take full-file locks in the transaction.
    
    Also changed _fill_writev_xdata() in posix to fill rsp_xdata for
    whichever keys are requested.
    
    Change-Id: I1c5fc5e98aba49ade540bb441a022e65b753432a
    BUG: 1324004
    Signed-off-by: Ravishankar N <ravishankar>
    Reported-by: Robert Rauch <robert.rauch>
    Reported-by: Russel Purinton <russell.purinton>
    Reviewed-on: http://review.gluster.org/13906
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
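
For readers not familiar with the AFR transaction code, here is a minimal, self-contained sketch of why fix (ii) matters. This is not GlusterFS source; all struct and function names are hypothetical stand-ins. It models the overlap check the commit message describes: when the transaction range is zeroed to mean "whole file", every pair of writes looks overlapping, so eager locking and delayed post-ops never kick in. Widening only the lock range (the flock sent to the locks translator) keeps non-overlapping writes eligible for eager locking.

/*
 * Hypothetical sketch (not GlusterFS code): models the effect of zeroing
 * the transaction range versus widening only the lock range.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

struct txn { uint64_t start; uint64_t len; };          /* stand-in for local->transaction.{start,len} */
struct lock_range { uint64_t l_start; uint64_t l_len; }; /* stand-in for the flock handed to locks */

/* Two ranges overlap if neither ends before the other begins; len == 0 means "to EOF". */
static bool ranges_overlap(struct txn a, struct txn b)
{
    if (a.len == 0 || b.len == 0)
        return true;                    /* a whole-file range overlaps everything */
    return a.start < b.start + b.len && b.start < a.start + a.len;
}

int main(void)
{
    /* Two simultaneous but non-overlapping 1 MiB writes, as in the dd workload. */
    struct txn w1 = { .start = 0,       .len = 1 << 20 };
    struct txn w2 = { .start = 1 << 20, .len = 1 << 20 };

    /* Old arbiter behaviour: transaction range zeroed to get a full-file lock.
     * Every pair of writes now "overlaps", so eager lock / delayed post-op is skipped. */
    struct txn old1 = { 0, 0 }, old2 = { 0, 0 };
    printf("old (txn range zeroed): overlap = %d -> eager lock skipped\n",
           ranges_overlap(old1, old2));

    /* Fixed behaviour: keep the real transaction ranges and widen only the lock. */
    struct lock_range full_lock = { .l_start = 0, .l_len = 0 }; /* full-file lock still taken */
    (void)full_lock;
    printf("new (txn range kept):   overlap = %d -> eager lock possible\n",
           ranges_overlap(w1, w2));
    return 0;
}

Fix (i) is simpler: arbiter_writev now returns a properly filled response dict (with the real GLUSTERFS_OPEN_FD_COUNT and GLUSTERFS_WRITE_IS_APPEND values) instead of echoing the request dict back, so AFR no longer misreads those keys and triggers immediate post-ops.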

Comment 7 Niels de Vos 2016-06-16 14:03:04 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user