Description of problem: Using the 'force' option with 'peer detach' can leave the cluster in an inconsistent state, which causes further problems.
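For context, this is the command shape in question; the hostname below is a placeholder:

```shell
# Before this change, 'force' skipped the brick-ownership checks and could
# detach a peer that still hosted bricks, leaving stale volume state behind.
gluster peer detach peer-host force
```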
REVIEW: http://review.gluster.org/5325 (cli,glusterd: Remove 'force' option for 'peer detach' and improve detach check) posted (#1) for review on master by Kaushal M (kaushal)
REVIEW: http://review.gluster.org/5325 (cli,glusterd: Remove 'force' option for 'peer detach' and improve detach check) posted (#2) for review on master by Kaushal M (kaushal)
REVIEW: http://review.gluster.org/5325 (cli,glusterd: Improve detach check validation) posted (#3) for review on master by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/5325 (cli,glusterd: Improve detach check validation) posted (#4) for review on master by Atin Mukherjee (amukherj)
REVIEW: http://review.gluster.org/5325 (cli,glusterd: Improve detach check validation) posted (#5) for review on master by Atin Mukherjee (amukherj)
COMMIT: http://review.gluster.org/5325 committed in master by Vijay Bellur (vbellur) ------

commit 0e7f8af0db8201ee892979713ac86d5548f5ec73
Author: Kaushal M <kaushal>
Date: Thu Jul 11 19:42:16 2013 +0530

cli,glusterd: Improve detach check validation

This patch improves the validation for the 'peer detach' command. A check for whether volumes exist with some bricks on the peer being detached is added to the peer detach code flow (even 'force' performs this validation). This patch also guarantees that 'peer detach' does not fail for a volume with all of its bricks on the peer being detached, when there are no other bricks on this peer.

The following steps need to be followed for removing a downed and unrecoverable peer:
* If a replacement system is available:
  - add it to the cluster
  - use replace-brick to migrate the bricks of the downed peer to the new peer (since data cannot be recovered anyway, use the 'replace-brick commit force' command)
* If no replacement system is available:
  - remove the bricks of the downed peer using 'remove-brick'

Change-Id: Ie85ac5b66e87bec365fdedd8352b645bb25e1c33
BUG: 983590
Signed-off-by: Kaushal M <kaushal>
Signed-off-by: Atin Mukherjee <amukherj>
Reviewed-on: http://review.gluster.org/5325
Reviewed-by: Krishnan Parthasarathi <kparthas>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Vijay Bellur <vbellur>
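The recovery steps above can be sketched with the gluster CLI. Hostnames (down-peer, new-peer), the volume name (myvol), and brick paths are placeholders; the exact remove-brick form depends on the volume's type and replica count.

```shell
# Case 1: a replacement system is available.
gluster peer probe new-peer        # add the replacement to the cluster
# Migrate the downed peer's brick; data is unrecoverable, so force the commit.
gluster volume replace-brick myvol down-peer:/bricks/b1 new-peer:/bricks/b1 commit force

# Case 2: no replacement system is available.
# Drop the downed peer's brick (for a replicated volume, lower the replica
# count to match the remaining bricks).
gluster volume remove-brick myvol replica 1 down-peer:/bricks/b1 force

# Either way, the peer can then be detached without tripping the new check.
gluster peer detach down-peer
```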
A beta release for GlusterFS 3.6.0 has been released [1]. Please verify whether the release solves this bug report for you. In case the glusterfs-3.6.0beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED. Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution. [1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018836.html [2] http://supercolony.gluster.org/pipermail/gluster-users/
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report. glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html [2] http://supercolony.gluster.org/mailman/listinfo/gluster-users