Bug 1438370 - rebalance: Allow admin to change thread count for rebalance
Summary: rebalance: Allow admin to change thread count for rebalance
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Susant Kumar Palai
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1473136
Reported: 2017-04-03 09:08 UTC by Susant Kumar Palai
Modified: 2017-07-20 06:05 UTC
1 user

Fixed In Version: glusterfs-3.11.0
Doc Type: Enhancement
Doc Text:
Clone Of:
: 1473136 (view as bug list)
Environment:
Last Closed: 2017-05-30 18:48:52 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Susant Kumar Palai 2017-04-03 09:08:03 UTC
Description of problem:

The current rebalance throttle options (lazy/normal/aggressive) are not always sufficient for throttling. In a recent test we observed that, on certain setups, the normal and aggressive modes behaved similarly and both consumed the full disk bandwidth. In such cases the admin should be able to tune the thread count down (or up) as needed.
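
For illustration, a minimal CLI sketch of the tuning being requested (the volume name "test-vol" is a placeholder; the keyword form is the existing behaviour and the numeric form is the proposed enhancement):

    # Existing presets: choose one of the three keywords
    gluster volume set test-vol cluster.rebal-throttle lazy

    # Proposed: set an explicit migration thread count, bounded by the
    # number of cores on the node (4 here is just an example value)
    gluster volume set test-vol cluster.rebal-throttle 4

    # Inspect the current setting
    gluster volume get test-vol cluster.rebal-throttle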

Comment 1 Worker Ant 2017-04-03 09:19:52 UTC
REVIEW: https://review.gluster.org/16980 (cluster/dht: Make rebalance throttle option tuned by number) posted (#2) for review on master by Susant Palai (spalai)

Comment 2 Worker Ant 2017-04-27 09:28:38 UTC
REVIEW: https://review.gluster.org/16980 (cluster/dht: Make rebalance throttle option tuned by number) posted (#3) for review on master by Susant Palai (spalai)

Comment 3 Worker Ant 2017-04-27 10:38:47 UTC
REVIEW: https://review.gluster.org/16980 (cluster/dht: Make rebalance throttle option tuned by number) posted (#4) for review on master by Susant Palai (spalai)

Comment 4 Worker Ant 2017-04-27 10:43:03 UTC
REVIEW: https://review.gluster.org/16980 (cluster/dht: Make rebalance throttle option tuned by number) posted (#5) for review on master by Susant Palai (spalai)

Comment 5 Worker Ant 2017-04-27 14:06:23 UTC
REVIEW: https://review.gluster.org/16980 (cluster/dht: Make rebalance throttle option tuned by number) posted (#6) for review on master by Susant Palai (spalai)

Comment 6 Worker Ant 2017-04-29 14:29:38 UTC
COMMIT: https://review.gluster.org/16980 committed in master by Raghavendra G (rgowdapp) 
------
commit d51288540241d1f7785bb17bdc0702c0879087a9
Author: Susant Palai <spalai>
Date:   Wed Mar 22 17:14:25 2017 +0530

    cluster/dht: Make rebalance throttle option tuned by number
    
    The current rebalance throttle options (lazy/normal/aggressive) are not always
    sufficient for throttling. In a recent test we observed that, on certain
    setups, the normal and aggressive modes behaved similarly and both consumed
    the full disk bandwidth. In such cases the admin should be able to tune the
    thread count down (or up) as needed.

    In addition to the old throttle keywords, the thread count can now be set
    directly as a number, e.g. gluster v set vol-name cluster.rebal-throttle 5.

    The admin can tune it up or down between 0 and the number of cores available.

    Note: On heterogeneous clusters, validation will fail on older servers if a
    number is given for the throttle configuration.
    The message looks something like this:
    "volume set: failed: Staging failed on vm2. Error: cluster.rebal-throttle should be {lazy|normal|aggressive}"

    Test: Manual test by logging the active thread count after reconfiguring the throttle option.
    testcase: tests/basic/distribute/throttle-rebal.t
    
    Change-Id: I46e3cde546900307831028b344ecf601fd9b02c3
    BUG: 1438370
    Signed-off-by: Susant Palai <spalai>
    Reviewed-on: https://review.gluster.org/16980
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
    Reviewed-by: Raghavendra G <rgowdapp>
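
The test note in the commit above verifies the change by logging the active thread count after reconfiguring the throttle. A rough sketch of such a manual check, assuming a volume named "test-vol" and a glusterd pid file for the rebalance process under /var/lib/glusterd (both assumptions); the thread total also includes non-migration threads, so only the change after reconfiguring is meaningful:

    # Start a rebalance and record its process thread count
    gluster volume rebalance test-vol start
    REBAL_PID=$(cat /var/lib/glusterd/vols/test-vol/rebalance/*.pid)  # assumed pid-file path
    ps -o nlwp= -p "$REBAL_PID"   # thread count before reconfiguring

    # Raise the throttle to the core count and compare
    gluster volume set test-vol cluster.rebal-throttle "$(nproc)"
    ps -o nlwp= -p "$REBAL_PID"   # thread count should change accordingly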

Comment 7 Shyamsundar 2017-05-30 18:48:52 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/

