Looking at the documentation, it currently appears that you can only migrate data from one brick to another brick that is not already in use. Please add functionality to migrate data off of a brick and spread it across the other nodes in the cluster. Requiring a new node for the data to go to may not always be practical.
Please update the status of this bug, as it has been more than 6 months since it was filed (bug id < 2000). Please resolve it with the proper resolution if it is no longer valid. If it is still valid but not critical, move it to 'enhancement' severity.
This is becoming more important, especially on Amazon, where you will want to expand and shrink your Gluster environment to meet performance spikes.
This is still valid. There is no way to migrate data off of a dht subvolume that you might wish to remove.
*** Bug 2457 has been marked as a duplicate of this bug. ***
Again, this is an important feature, necessary for maintenance in environments where people don't have extra servers lying around to use replace-brick. Is it really going into 3.3.0?
Patch submitted at http://review.gluster.com/118; it should hopefully make it into release 3.3.0.
*** Bug 3485 has been marked as a duplicate of this bug. ***
Hopefully this feature will be robust against one of the bricks in a cluster/replicate group being down, whether because of disk or machine failure. That is the main reason I would try to remove a brick: I would want to remove the whole cluster/replicate group, with the idea that the data would be available from the other members of the group and then be distributed to the other servers.
CHANGE: http://review.gluster.com/118 (to achieve this, we now create volume-file with) merged in master by Vijay Bellur (vijay)
commit log:
----
support for de-commissioning a node using 'remove-brick'

To achieve this, we now create the volume file with the 'decommissioned-nodes' option in the distribute volume, then just perform the rebalance set of operations (with the 'force' flag set).

From now on, the 'remove-brick' operation (with the 'start' option) tries to migrate data from the removed bricks to the existing bricks. 'remove-brick' also supports options similar to those of replace-brick:

* (no options) -> works as 'force'; keeps the current behavior of remove-brick, i.e., no data migration, only volume changes.
* start -> starts remove-brick with the data-migration/draining process, which takes care of migrating data and, once complete, commits the changes to the volume file.
* pause -> stops data migration, but keeps the volume file intact with whatever extra options are set.
* abort -> stops data migration and falls back to the old configuration.
* commit -> if the volume is stopped, commits the changes to the volume file.
* force -> stops the data migration and commits the changes to the volume file.

Change-Id: I3952bcfbe604a0952e68b6accace7014d5e401d3
BUG: 1952
Reviewed-on: http://review.gluster.com/118
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Vijay Bellur <vijay>
----
Will mark this resolved. Will open separate bugs for specific issues within this feature.
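For illustration, the workflow from the commit log can be sketched as a CLI session. The volume name (testvol) and brick path are hypothetical, and exact sub-command names may differ between releases, so check `gluster volume remove-brick help` on your version before relying on this:

```
# Start draining data from the brick onto the remaining bricks
gluster volume remove-brick testvol server2:/export/brick1 start

# ...monitor progress; once migration completes, make the change permanent:
gluster volume remove-brick testvol server2:/export/brick1 commit

# Or, to stop migration and fall back to the old configuration:
gluster volume remove-brick testvol server2:/export/brick1 abort
```

The key design point in the patch is that plain `remove-brick` (no option) keeps the old behavior of removing the brick without moving data, so only the `start`/`commit` path performs the draining described above.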
CHANGE: http://review.gluster.com/551 (currently if 'remove-brick <BRICKS> start' is given, after all) merged in master by Vijay Bellur (vijay)