Bug 765262 - (GLUSTER-3530) preserve the 'rebalance' state.
Product: GlusterFS
Classification: Community
Component: glusterd
Hardware: x86_64 Linux
Priority: medium
Severity: low
Assigned To: shishir gowda
Depends On:
Blocks: 817967
Reported: 2011-09-09 05:22 EDT by Amar Tumballi
Modified: 2013-12-18 19:06 EST (History)
CC: 4 users

See Also:
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2013-07-24 14:01:35 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Amar Tumballi 2011-09-09 05:22:56 EDT
We need something similar to rbstate to preserve the 'rebalance' state.
Comment 1 Amar Tumballi 2011-10-13 03:28:33 EDT
With the patch http://review.gluster.com/551 going in, by not storing the 'rebalance' state we do not lose anything other than the ability to start rebalance/decommissioning automatically once glusterd comes back.

But the user always has the option to 'start' the process again; the only thing lost is the 'status' of the previous rebalance run. That is, I guess, more of a cosmetic improvement, and considering the possible races and corner cases involved in preserving the 'state' of rebalance, I would consider not having a persistent state for rebalance the cleaner bet.
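To illustrate the manual restart described above, this is what re-starting a rebalance by hand looks like with the gluster CLI (the volume name `testvol` is a placeholder; these commands need a live cluster):

```shell
# Re-start the rebalance manually after glusterd comes back up.
# 'testvol' is an illustrative volume name.
gluster volume rebalance testvol start

# The status of the *previous* run is lost, but the new run
# can be monitored as usual:
gluster volume rebalance testvol status
```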
Comment 2 shishir gowda 2012-02-23 06:31:42 EST
CHANGE: http://review.gluster.com/2540 (cluster/dht: Rebalance will be a new glusterfs process) merged in master by Vijay Bellur (vijay@gluster.com)

The above patch (mainline) saves the rebalance state so that rebalance can be restarted when glusterd starts up.
Comment 3 Kaushal 2012-05-29 06:23:00 EDT
Checked on the release-3.3 branch by killing all gluster processes while a rebalance was running. Restarting glusterd led to the rebalance also being resumed, showing that the state was saved.
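A rough sketch of the verification steps above, for anyone re-running the check (volume name `testvol` is a placeholder, and this obviously requires a live test cluster):

```shell
# Kick off a rebalance, then kill every gluster process while it runs.
gluster volume rebalance testvol start
pkill -f gluster   # takes down glusterd, glusterfsd and the rebalance process

# Restart glusterd; with the fix, the persisted state causes the
# rebalance process to be respawned automatically.
glusterd
gluster volume rebalance testvol status
```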
