REVIEW: https://review.gluster.org/17128 (glusterd: Make reset-brick work correctly if brick-mux is on) posted (#1) for review on master by Samikshan Bairagya (samikshan)
REVIEW: https://review.gluster.org/17128 (glusterd: Make reset-brick work correctly if brick-mux is on) posted (#2) for review on master by Samikshan Bairagya (samikshan)
REVIEW: https://review.gluster.org/17128 (glusterd: Make reset-brick work correctly if brick-mux is on) posted (#3) for review on master by Samikshan Bairagya (samikshan)
REVIEW: https://review.gluster.org/17128 (glusterd: Make reset-brick work correctly if brick-mux is on) posted (#4) for review on master by Samikshan Bairagya (samikshan)
REVIEW: https://review.gluster.org/17128 (glusterd: Make reset-brick work correctly if brick-mux is on) posted (#5) for review on master by Samikshan Bairagya (samikshan)
COMMIT: https://review.gluster.org/17128 committed in master by Jeff Darcy (jeff.us)
------
commit 74383e3ec6f8244b3de9bf14016452498c1ddcf0
Author: Samikshan Bairagya <samikshan>
Date:   Mon Apr 24 22:00:17 2017 +0530

    glusterd: Make reset-brick work correctly if brick-mux is on

    Reset brick currently kills off the corresponding brick process.
    However, with brick multiplexing enabled, stopping the brick
    process would render all bricks attached to it unavailable. To
    handle this correctly, we need to make sure that the brick process
    is terminated only if brick multiplexing is disabled. Otherwise, we
    should send the GLUSTERD_BRICK_TERMINATE RPC to the respective
    brick process to detach the brick that is to be reset.

    Change-Id: I69002d66ffe6ec36ef48af09b66c522c6d35ac58
    BUG: 1446172
    Signed-off-by: Samikshan Bairagya <samikshan>
    Reviewed-on: https://review.gluster.org/17128
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/