Bug 1473134

Summary: The rebal-throttle setting does not work as expected
Product: [Community] GlusterFS
Component: distribute
Version: 3.10
Hardware: x86_64
OS: Linux
Status: CLOSED CURRENTRELEASE
Severity: urgent
Priority: unspecified
Reporter: Susant Kumar Palai <spalai>
Assignee: Susant Kumar Palai <spalai>
CC: bugs
Fixed In Version: glusterfs-3.10.5
Clone Of: 1420166
Bug Depends On: 1420166
Last Closed: 2017-08-21 13:41:22 UTC
Type: Bug

Comment 1 Worker Ant 2017-07-20 06:40:28 UTC
REVIEW: https://review.gluster.org/17834 (cluster/dht: rebalance perf enhancement) posted (#1) for review on release-3.10 by Susant Palai (spalai)

Comment 2 Worker Ant 2017-07-20 10:09:45 UTC
REVIEW: https://review.gluster.org/17834 (cluster/dht: rebalance perf enhancement) posted (#2) for review on release-3.10 by Susant Palai (spalai)

Comment 3 Worker Ant 2017-08-11 17:13:34 UTC
REVIEW: https://review.gluster.org/17834 (cluster/dht: rebalance perf enhancement) posted (#3) for review on release-3.10 by Shyamsundar Ranganathan (srangana)

Comment 4 Worker Ant 2017-08-11 17:40:33 UTC
COMMIT: https://review.gluster.org/17834 committed in release-3.10 by Shyamsundar Ranganathan (srangana) 
------
commit 082868b81a0f4c9ed27f71b48708b2ddfd379150
Author: Susant Palai <spalai>
Date:   Tue Jan 10 16:11:50 2017 +0530

    cluster/dht: rebalance perf enhancement
    
    Problem: The throttle settings "normal" and "aggressive" for rebalance
    showed no performance difference.

    Normal mode spawns $(no. of cores - 4)/2 threads and aggressive mode
    spawns $(no. of cores - 4) threads. Even though aggressive mode has twice
    as many threads as normal mode, there was no performance gain when
    switching from normal to aggressive mode.
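
    The thread-count arithmetic above, as a minimal standalone C sketch
    (illustrative only; the function name and the clamping are assumptions,
    not the actual dht code). With the 24-core test machines described below
    it gives 10 "normal" and 20 "aggressive" threads:

        /* sketch: rebal-throttle thread counts, not the real GlusterFS code */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        static int rebal_thread_count(const char *throttle, long ncores)
        {
            long usable = ncores - 4;          /* leave some cores for other work */
            if (usable < 2)
                usable = 2;                    /* assumed lower bound for the sketch */
            if (strcmp(throttle, "aggressive") == 0)
                return (int)usable;            /* (no. of cores - 4) threads */
            return (int)(usable / 2);          /* "normal": half of aggressive */
        }

        int main(void)
        {
            long ncores = sysconf(_SC_NPROCESSORS_ONLN);
            printf("normal: %d, aggressive: %d\n",
                   rebal_thread_count("normal", ncores),
                   rebal_thread_count("aggressive", ncores));
            return 0;
        }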
    
    RCA:
    While debugging the above problem, we tried assigning the migration
    job to migration threads spawned by rebalance itself, rather than to
    synctasks (which carry more overhead for managing the task queue and
    threads). This gave a significant improvement over rebalance under
    synctasks. This patch does not really guarantee a clear performance
    difference between normal and aggressive mode, but it certainly
    maximized disk utilization for the 1 GB files run.
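
    A minimal sketch of the "dedicated migrator threads" idea described in
    the RCA, assuming a plain pthread worker pool fed from a job queue; the
    names (migrate_job, enqueue, do_migrate) and the simplified queue are
    illustrative, not the actual dht_rebalance code:

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        struct migrate_job {
            char                path[4096];
            struct migrate_job *next;
        };

        static struct migrate_job *queue_head;
        static pthread_mutex_t     queue_lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t      queue_cond = PTHREAD_COND_INITIALIZER;
        static int                 crawl_done;

        static void do_migrate(struct migrate_job *job)
        {
            printf("migrating %s\n", job->path);   /* placeholder for real migration */
        }

        static void *migrator(void *arg)
        {
            (void)arg;
            for (;;) {
                pthread_mutex_lock(&queue_lock);
                while (!queue_head && !crawl_done)
                    pthread_cond_wait(&queue_cond, &queue_lock);
                struct migrate_job *job = queue_head;
                if (job)
                    queue_head = job->next;
                pthread_mutex_unlock(&queue_lock);
                if (!job)
                    break;                         /* queue drained, crawl finished */
                do_migrate(job);
                free(job);
            }
            return NULL;
        }

        static void enqueue(const char *path)
        {
            struct migrate_job *job = calloc(1, sizeof(*job));
            snprintf(job->path, sizeof(job->path), "%s", path);
            pthread_mutex_lock(&queue_lock);
            job->next  = queue_head;
            queue_head = job;
            pthread_cond_signal(&queue_cond);
            pthread_mutex_unlock(&queue_lock);
        }

        int main(void)
        {
            /* the crawler enqueues jobs instead of wrapping each migration
             * in a synctask; a fixed pool of migrator threads drains them */
            int       nthreads = 4;
            pthread_t tids[4];

            for (int i = 0; i < nthreads; i++)
                pthread_create(&tids[i], NULL, migrator, NULL);

            enqueue("/brick/test1/1/file-0001");
            enqueue("/brick/test1/1/file-0002");

            pthread_mutex_lock(&queue_lock);
            crawl_done = 1;
            pthread_cond_broadcast(&queue_cond);
            pthread_mutex_unlock(&queue_lock);

            for (int i = 0; i < nthreads; i++)
                pthread_join(tids[i], NULL);
            return 0;
        }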
    
    Results:
    
    Test environment:
    Gluster Config:
    Number of Bricks: 2 (one brick per disk(RAID-6 12 disk))
    Bricks:
    Brick1: server1:/brick/test1/1
    Brick2: server2:/brick/test1/1
    Options Reconfigured:
    performance.readdir-ahead: on
    server.event-threads: 4
    client.event-threads: 4
    
    1000 files of 1 GB each were created/renamed such that every file had
    server1 as its cached subvolume and server2 as its hashed subvolume, so
    that all files would be migrated.
    
    Test machines had 24 cores each.
    
    Results with/without synctask-based migration:
    -----------------------------------------------

    mode                                normal (10 threads)    aggressive (20 threads)

    time taken with synctask            0:55:30 (h:m:s)        0:56:03 (h:m:s)
    time taken with migrator threads    0:38:03 (h:m:s)        0:23:41 (h:m:s)
    
    From the above table it can be seen that there is a clear ~2x perf gain
    between rebalance with synctasks and rebalance with migrator threads.
    
    Additionally, this patch modifies the code so that the caller gets the
    exact error number returned by dht_migrate_file (earlier the errno
    meaning was overloaded). This helps avoid scenarios where a migration
    failure due to ENOENT could result in rebalance abort/failure.
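
    A minimal sketch of the caller-side handling this enables, under the
    assumption of a simplified wrapper; the real dht_migrate_file() takes
    gluster internal types, so the stub and the wrapper name
    rebalance_one_file() are illustrative only:

        #include <errno.h>
        #include <stdio.h>

        /* stub standing in for the real dht_migrate_file(); the actual
         * function uses gluster internal types and is simplified here */
        static int dht_migrate_file(const char *path)
        {
            (void)path;
            errno = ENOENT;        /* pretend the file vanished mid-rebalance */
            return -1;
        }

        static int rebalance_one_file(const char *path)
        {
            if (dht_migrate_file(path) < 0) {
                if (errno == ENOENT) {
                    /* file disappeared between crawl and migration:
                     * skip it instead of aborting the whole rebalance */
                    fprintf(stderr, "skipping %s: no longer exists\n", path);
                    return 0;
                }
                return -1;         /* genuine failure: propagate */
            }
            return 0;
        }

        int main(void)
        {
            return rebalance_one_file("/brick/test1/1/file-0001") ? 1 : 0;
        }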
    
    > Change-Id: I8904e2fb147419d4a51c1267be11a08ffd52168e
    > BUG: 1420166
    > Signed-off-by: Susant Palai <spalai>
    > Reviewed-on: https://review.gluster.org/16427
    > Smoke: Gluster Build System <jenkins.org>
    > Reviewed-by: N Balachandran <nbalacha>
    > Reviewed-by: Raghavendra G <rgowdapp>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > Signed-off-by: Susant Palai <spalai>
    
    Change-Id: I8904e2fb147419d4a51c1267be11a08ffd52168e
    BUG: 1473134
    Signed-off-by: Susant Palai <spalai>
    Reviewed-on: https://review.gluster.org/17834
    CentOS-regression: Gluster Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana>

Comment 5 Shyamsundar 2017-08-21 13:41:22 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.5, please open a new bug report.

glusterfs-3.10.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-August/000079.html
[2] https://www.gluster.org/pipermail/gluster-users/