Description of problem:
A client can create a replica 2 volume without knowing that they can end up in a split-brain situation. Arbiter is one of the ways to avoid that. So warn the user on the CLI while creating a replica 2 volume, and spread the message that arbiter is the preferred option over plain replica 2.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
Creating a replica 2 volume succeeds without any warning that it can lead to split-brain situations.

Expected results:
The client should be warned about ending up in split-brain and asked whether they still want to continue or not.

Additional info:
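As a sketch of the scenario, the commands below show a plain replica 2 create (the case this bug wants a warning for) next to the recommended alternatives. The volume name, host names, and brick paths are hypothetical placeholders; the `replica` and `arbiter` count syntax is standard gluster CLI.

```shell
# Plain replica 2: susceptible to split-brain; after this fix the CLI
# should warn and ask for confirmation before proceeding.
gluster volume create testvol replica 2 \
    server1:/bricks/brick1 server2:/bricks/brick1

# Preferred: replica 3 with an arbiter brick, which holds only file
# metadata on the third node but still provides quorum.
gluster volume create testvol replica 3 arbiter 1 \
    server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/arbiter

# Or a full replica 3 volume.
gluster volume create testvol replica 3 \
    server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/brick1
```

These are illustrative invocations only; the exact wording of the warning prompt is whatever the committed patch prints.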
REVIEW: https://review.gluster.org/16899 (cli: Adding warning message while creating a replica 2 volume) posted (#1, #2, #3) for review on master by Karthik U S (ksubrahm)
COMMIT: https://review.gluster.org/16899 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit db0e5582b118d5bb0c8bb491f46fa2ae0dcfa97e
Author: karthik-us <ksubrahm>
Date:   Tue Mar 14 13:17:11 2017 +0530

    cli: Adding warning message while creating a replica 2 volume

    Warn the CLI user about ending up in a split-brain situation with a
    replica 2 volume. Display that arbiter and replica 3 are the
    recommended options to avoid this, and point to the document on
    split-brain and ways to deal with it.

    Change-Id: I7f31f3c74818d440a684b3130bc5ccdc72258f01
    BUG: 1431963
    Signed-off-by: karthik-us <ksubrahm>
    Reviewed-on: https://review.gluster.org/16899
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Pranith Kumar Karampuri <pkarampu>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/