Bug 1473134 - The rebal-throttle setting does not work as expected
Product: GlusterFS
Classification: Community
Component: distribute
Platform: x86_64 Linux
Priority: unspecified
Severity: urgent
Assigned To: Susant Kumar Palai
Depends On: 1420166
Reported: 2017-07-20 01:54 EDT by Susant Kumar Palai
Modified: 2017-08-21 09:41 EDT (History)

Fixed In Version: glusterfs-3.10.5
Clone Of: 1420166
Last Closed: 2017-08-21 09:41:22 EDT
Type: Bug

Comment 1 Worker Ant 2017-07-20 02:40:28 EDT
REVIEW: https://review.gluster.org/17834 (cluster/dht: rebalance perf enhancement) posted (#1) for review on release-3.10 by Susant Palai (spalai@redhat.com)
Comment 2 Worker Ant 2017-07-20 06:09:45 EDT
REVIEW: https://review.gluster.org/17834 (cluster/dht: rebalance perf enhancement) posted (#2) for review on release-3.10 by Susant Palai (spalai@redhat.com)
Comment 3 Worker Ant 2017-08-11 13:13:34 EDT
REVIEW: https://review.gluster.org/17834 (cluster/dht: rebalance perf enhancement) posted (#3) for review on release-3.10 by Shyamsundar Ranganathan (srangana@redhat.com)
Comment 4 Worker Ant 2017-08-11 13:40:33 EDT
COMMIT: https://review.gluster.org/17834 committed in release-3.10 by Shyamsundar Ranganathan (srangana@redhat.com) 
commit 082868b81a0f4c9ed27f71b48708b2ddfd379150
Author: Susant Palai <spalai@redhat.com>
Date:   Tue Jan 10 16:11:50 2017 +0530

    cluster/dht: rebalance perf enhancement
    Problem: Throttle settings "normal" and "aggressive" for rebalance
    did not have performance difference.
    normal mode spawns $(no. of cores - 4)/2 threads and aggressive
    mode spawns $(no. of cores - 4) threads. Although aggressive mode has
    twice as many threads as normal mode, there was no performance gain
    when switching from normal to aggressive mode.
    While debugging this problem, we tried assigning the migration job to
    migration threads spawned by rebalance itself, rather than to synctasks
    (which carry extra overhead for managing the task queue and threads).
    This gave a significant improvement over rebalance under synctasks.
    This patch does not guarantee a clear performance difference between
    normal and aggressive mode, but it certainly maximized disk
    utilization for the 1GB-files run.
    Test environment:
    Gluster Config:
    Number of Bricks: 2 (one brick per disk(RAID-6 12 disk))
    Brick1: server1:/brick/test1/1
    Brick2: server2:/brick/test1/1
    Options Reconfigured:
    performance.readdir-ahead: on
    server.event-threads: 4
    client.event-threads: 4
    1000 files with 1GB each were created/renamed such that all files will have
    server1 as cached and server2 as hashed, so that all files will be migrated.
    Test machines had 24 cores each.
    Results with/without synctask-based migration:
    mode             normal (10 threads)    aggressive (20 threads)
    with synctask    0:55:30 (h:m:s)        0:56:03 (h:m:s)
    with migrator    0:38:03 (h:m:s)        0:23:41 (h:m:s)
    The table above shows a clear performance gain (roughly 2x in
    aggressive mode) for rebalance with migrator threads over rebalance
    with synctasks.
    Additionally, this patch modifies the code so that the caller receives
    the exact error number returned by dht_migrate_file (earlier the errno
    meaning was overloaded). This helps avoid scenarios where a migration
    failure due to ENOENT could result in rebalance abort/failure.
    > Change-Id: I8904e2fb147419d4a51c1267be11a08ffd52168e
    > BUG: 1420166
    > Signed-off-by: Susant Palai <spalai@redhat.com>
    > Reviewed-on: https://review.gluster.org/16427
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > Reviewed-by: N Balachandran <nbalacha@redhat.com>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    > Signed-off-by: Susant Palai <spalai@redhat.com>
    Change-Id: I8904e2fb147419d4a51c1267be11a08ffd52168e
    BUG: 1473134
    Signed-off-by: Susant Palai <spalai@redhat.com>
    Reviewed-on: https://review.gluster.org/17834
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana@redhat.com>
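
The thread counts in the commit description follow a simple formula per throttle mode. A minimal sketch of that arithmetic (the function name is hypothetical, the lazy-mode value of one thread and the clamping to a minimum of one thread are assumptions, not taken from the GlusterFS source):

```python
def rebalance_threads(mode: str, cores: int) -> int:
    """Approximate migration-thread count per rebal-throttle mode,
    per the commit description: aggressive spawns (cores - 4) threads,
    normal spawns half of that. Lazy = 1 thread is an assumption here."""
    if mode == "lazy":
        return 1
    base = max(cores - 4, 1)  # clamp assumed, to keep at least one thread
    if mode == "aggressive":
        return base
    if mode == "normal":
        return max(base // 2, 1)
    raise ValueError(f"unknown rebal-throttle mode: {mode}")

# On the 24-core test machines from the commit:
print(rebalance_threads("normal", 24))      # 10 threads, as in the table
print(rebalance_threads("aggressive", 24))  # 20 threads, as in the table
```

This matches the 10-thread/20-thread columns in the results table above, and shows why doubling the thread count alone (normal vs. aggressive under synctasks) gave no gain: the bottleneck was the synctask queue, not the thread count.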
Comment 5 Shyamsundar 2017-08-21 09:41:22 EDT
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed in glusterfs-3.10.5, please open a new bug report.

glusterfs-3.10.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on your distribution's update infrastructure.

[1] http://lists.gluster.org/pipermail/announce/2017-August/000079.html
[2] https://www.gluster.org/pipermail/gluster-users/
