+++ This bug was initially created as a clone of Bug #1318289 +++

For an existing "replica 2" setup to be converted into "replica 3 arbiter 1", it is currently necessary to create a new volume and copy the data to it. However, this is quite complex for a large production setup with terabytes of data. In a discussion held with GlusterFS developers on the IRC channel, it was stated that arbiter brick hotplug should be a feature that is easy to implement. It would be nice to see such a feature implemented for 3.7.x.

--- Additional comment from Vijay Bellur on 2016-04-30 04:29:19 EDT ---

REVIEW: http://review.gluster.org/14126 (cli/glusterd: Extend add-brick for arbiter volumes) posted (#1) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2016-04-30 06:15:51 EDT ---

REVIEW: http://review.gluster.org/14126 (cli/glusterd: Extend add-brick for arbiter volumes) posted (#2) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2016-05-08 13:49:37 EDT ---

REVIEW: http://review.gluster.org/14126 (cli/glusterd: add/remove brick fixes for arbiter volumes) posted (#3) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2016-05-11 01:05:20 EDT ---

REVIEW: http://review.gluster.org/14126 (cli/glusterd: add/remove brick fixes for arbiter volumes) posted (#4) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2016-05-17 05:40:26 EDT ---

REVIEW: http://review.gluster.org/14126 (cli/glusterd: add/remove brick fixes for arbiter volumes) posted (#5) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2016-05-17 05:47:04 EDT ---

REVIEW: http://review.gluster.org/14126 (cli/glusterd: add/remove brick fixes for arbiter volumes) posted (#6) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Vijay Bellur on 2016-05-19 00:30:33 EDT ---

REVIEW: http://review.gluster.org/14126 (cli/glusterd: add/remove brick fixes for arbiter volumes) posted (#7) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/14502 (cli/glusterd: add/remove brick fixes for arbiter volumes) posted (#1) for review on release-3.8 by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/14502 committed in release-3.8 by Niels de Vos (ndevos)

------

commit ade1d726e035eea4540894b03c82b84304bba2ae
Author: Ravishankar N <ravishankar>
Date:   Fri Apr 29 17:41:18 2016 +0530

cli/glusterd: add/remove brick fixes for arbiter volumes

Backport of: http://review.gluster.org/14126

1. Provide a command to convert replica 2 volumes to arbiter volumes. The existing self-heal logic will automatically heal the file hierarchy into the arbiter brick; the progress can be monitored using the heal info command.
   Syntax: gluster volume add-brick <VOLNAME> replica 3 arbiter 1 <HOST:arbiter-brick-path>

2. Add checks when removing bricks from arbiter volumes:
   - When converting from an arbiter to a replica 2 volume, allow only the arbiter brick to be removed.
   - When converting from an arbiter to a plain distribute volume, allow removal only if the arbiter is one of the bricks being removed.

3. Some clean-up:
   - Use GD_MSG_DICT_GET_SUCCESS instead of GD_MSG_DICT_GET_FAILED to log messages that are not failures.
   - Remove unused variable `brick_list`.
   - Move 'brickinfo->group' related functions to glusterd-utils.

Change-Id: Ifa75d137c67ffddde7dcb8e0df0873163e713119
BUG: 1337387
Signed-off-by: Ravishankar N <ravishankar>
Reviewed-on: http://review.gluster.org/14502
Smoke: Gluster Build System <jenkins.com>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.com>
Reviewed-by: Atin Mukherjee <amukherj>
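The remove-brick checks in point 2 above amount to a simple validation rule. The actual fix lives in the cli/glusterd C code; the following is only an illustrative Python sketch of that rule, with hypothetical function and brick names chosen for the example:

```python
# Illustrative sketch (not the actual glusterd code) of the remove-brick
# checks for arbiter volumes described in the commit message.

def remove_bricks_allowed(removed_bricks, arbiter_brick, new_replica_count):
    """Return True if the proposed remove-brick operation is permitted.

    removed_bricks:    list of "host:/path" bricks being removed
    arbiter_brick:     the "host:/path" of the volume's arbiter brick
    new_replica_count: replica count after the removal
    """
    if new_replica_count == 2:
        # arbiter -> replica 2: only the arbiter brick may be removed
        return removed_bricks == [arbiter_brick]
    if new_replica_count == 1:
        # arbiter -> plain distribute: the arbiter must be among the
        # bricks being removed
        return arbiter_brick in removed_bricks
    # Other transitions are outside the scope of these checks
    return True


# Converting to replica 2 by removing only the arbiter: allowed
print(remove_bricks_allowed(["h3:/arb"], "h3:/arb", 2))            # True
# Trying to keep the arbiter while dropping a data brick: rejected
print(remove_bricks_allowed(["h1:/b1"], "h3:/arb", 2))             # False
# Converting to plain distribute, arbiter included in removal: allowed
print(remove_bricks_allowed(["h2:/b2", "h3:/arb"], "h3:/arb", 1))  # True
```

Without these checks, removing a data brick while keeping the arbiter would leave a "replica 2" volume in which one of the two copies holds only metadata, so the rule ensures the arbiter never survives a shrink as a data carrier.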
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user