+++ This bug was initially created as a clone of Bug #1579758 +++

Description of problem:
Converting a volume to replica 2 does not throw a warning.

Version-Release number of selected component (if applicable):
glusterfs-3.12.2-10.el7rhgs.x86_64

How reproducible:
Always

The following scenarios did not throw a warning while converting to replica 2:
1) converting a distribute-only volume to x2
2) removing bricks from x3 to convert to x2
3) adding/removing bricks in x2 configurations

Actual results:
No warning is shown in any of the above scenarios.

Expected results:
A warning like the following should be thrown:
"Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. Do you still want to continue?
 (y/n) y"

Additional info:
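As an illustration (not part of the original report), a minimal sketch of the first scenario, converting a single-brick distribute volume to replica 2 with add-brick; the volume name "testvol", host names, and brick paths are made up for the example, and the warning text is the one quoted in the expected results above:

  # gluster volume add-brick testvol replica 2 server2:/bricks/brick1
  Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. Do you still want to continue?
   (y/n) y
  volume add-brick: success

With the affected builds the command completes without printing the warning or prompting at all.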
REVIEW: https://review.gluster.org/21136 (cli: Add warning message while converting to replica 2 configuration) posted (#1) for review on master by Karthik U S
COMMIT: https://review.gluster.org/21136 committed in master by "Atin Mukherjee" <amukherj> with a commit message:

cli: Add warning message while converting to replica 2 configuration

Currently, while creating a replica 2 volume we display a warning message about ending up in split-brain. But while converting an existing volume from another configuration to replica 2 by add-brick or remove-brick operations, we do not show any such message. With this fix, the same warning message is displayed and confirmation is prompted in the add-brick and remove-brick cases as well, whenever the configuration changes to replica 2.

Change-Id: Ifc4ed6994a087d2403894f4e743c4eb41633276b
fixes: bz#1627044
Signed-off-by: karthik-us <ksubrahm>
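For completeness, a sketch of the second scenario after this fix, reducing a 1x3 replicate volume to replica 2 with remove-brick; the volume name and brick path are again assumptions for illustration, and any additional confirmation prompts the CLI normally shows for remove-brick may appear as well:

  # gluster volume remove-brick testvol replica 2 server3:/bricks/brick1 force
  Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. Do you still want to continue?
   (y/n) y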
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/