REVIEW: https://review.gluster.org/17101 (glusterd: cli is not showing correct status after restart glusted while mux is on) posted (#1) for review on master by MOHIT AGRAWAL (moagrawa)
REVIEW: https://review.gluster.org/17101 (glusterd: cli is not showing correct status after restart glusted while mux is on) posted (#25) for review on master by MOHIT AGRAWAL (moagrawa)
REVIEW: https://review.gluster.org/17101 (glusterd(WIP): cli is not showing correct status after restart glusted while mux is on) posted (#27) for review on master by Atin Mukherjee (amukherj)
REVIEW: https://review.gluster.org/17168 (glusterd: cleanup pidfile on pmap signout) posted (#1) for review on master by Atin Mukherjee (amukherj)
REVIEW: https://review.gluster.org/17101 (glusterd: socketfile & pidfile related fixes for brick-multiplexing) posted (#28) for review on master by Atin Mukherjee (amukherj)
REVIEW: https://review.gluster.org/17101 (glusterd: socketfile & pidfile related fixes for brick-multiplexing) posted (#29) for review on master by Atin Mukherjee (amukherj)
REVIEW: https://review.gluster.org/17168 (glusterd: cleanup pidfile on pmap signout) posted (#2) for review on master by Atin Mukherjee (amukherj)
REVIEW: https://review.gluster.org/17101 (glusterd: socketfile & pidfile related fixes for brick-multiplexing) posted (#30) for review on master by Atin Mukherjee (amukherj)
REVIEW: https://review.gluster.org/17168 (glusterd: cleanup pidfile on pmap signout) posted (#3) for review on master by Atin Mukherjee (amukherj)
REVIEW: https://review.gluster.org/17101 (glusterd: socketfile & pidfile related fixes for brick-multiplexing feature) posted (#31) for review on master by Atin Mukherjee (amukherj)
REVIEW: https://review.gluster.org/17168 (glusterd: cleanup pidfile on pmap signout) posted (#4) for review on master by Atin Mukherjee (amukherj)
REVIEW: https://review.gluster.org/17101 (glusterd: socketfile & pidfile related fixes for brick multiplexing feature) posted (#32) for review on master by Atin Mukherjee (amukherj)
COMMIT: https://review.gluster.org/17168 committed in master by Jeff Darcy (jeff.us) ------

commit 3d35e21ffb15713237116d85711e9cd1dda1688a
Author: Atin Mukherjee <amukherj>
Date: Wed May 3 12:17:30 2017 +0530

    glusterd: cleanup pidfile on pmap signout

    This patch ensures that:
    1. the brick pidfile is cleaned up on pmap signout, and
    2. a pmap signout event is sent for all the bricks when a brick
       process shuts down.

    Change-Id: I7606a60775b484651d4b9743b6037b40323931a2
    BUG: 1444596
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: https://review.gluster.org/17168
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jeff.us>
REVIEW: https://review.gluster.org/17101 (glusterd: socketfile & pidfile related fixes for brick multiplexing feature) posted (#33) for review on master by MOHIT AGRAWAL (moagrawa)
REVIEW: https://review.gluster.org/17101 (glusterd: socketfile & pidfile related fixes for brick multiplexing feature) posted (#34) for review on master by MOHIT AGRAWAL (moagrawa)
COMMIT: https://review.gluster.org/17101 committed in master by Atin Mukherjee (amukherj) ------

commit 21c7f7baccfaf644805e63682e5a7d2a9864a1e6
Author: Mohit Agrawal <moagrawa>
Date: Mon May 8 19:29:22 2017 +0530

    glusterd: socketfile & pidfile related fixes for brick multiplexing feature

    Problem: With brick multiplexing on, the CLI does not show the pid of
    all brick processes in all volumes after glusterd is restarted.

    Solution: While brick multiplexing is on, all local brick processes
    communicate over a single UNIX socket, but the current code
    (glusterd_brick_start) tries to communicate over a separate UNIX
    socket for each brick, with a path derived from the brick name and
    volume name. Because the multiplexing design opens only one UNIX
    socket, this produces poller errors and the CLI cannot fetch the
    correct status of the brick processes. To resolve the problem,
    introduce a new function, glusterd_set_socket_filepath_for_mux,
    called from glusterd_brick_start to validate the existence of the
    socketpath. The socket_connect code is also updated to avoid
    continuous EPOLLERR errors in the logs.

    Test: To reproduce the issue:
    1) Create two distributed volumes (dist1 and dist2)
    2) Set cluster.brick-multiplex on
    3) Kill glusterd
    4) Run: gluster v status
    After applying the patch, the correct pid is shown for all volumes.

    BUG: 1444596
    Change-Id: I5d10af69dea0d0ca19511f43870f34295a54a4d2
    Signed-off-by: Mohit Agrawal <moagrawa>
    Reviewed-on: https://review.gluster.org/17101
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Prashanth Pai <ppai>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Atin Mukherjee <amukherj>
REVIEW: https://review.gluster.org/17208 (posix: Send SIGKILL in 2nd attempt) posted (#1) for review on master by Atin Mukherjee (amukherj)
REVIEW: https://review.gluster.org/17208 (posix: Send SIGKILL in 2nd attempt) posted (#2) for review on master by Atin Mukherjee (amukherj)
COMMIT: https://review.gluster.org/17208 committed in master by Atin Mukherjee (amukherj) ------

commit 4f4ad03e0c4739d3fe1b0640ab8b4e1ffc985374
Author: Atin Mukherjee <amukherj>
Date: Tue May 9 07:05:18 2017 +0530

    posix: Send SIGKILL in 2nd attempt

    Commit 21c7f7ba inadvertently changed the signal used in the 2nd
    attempt to terminate the brick process from SIGKILL to SIGTERM, so
    a brick that survives the first SIGTERM is never forcefully killed.
    This patch restores SIGKILL for the 2nd attempt.

    Change-Id: I856df607b7109a215f2a2a4827ba3ea42d8a9729
    BUG: 1444596
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: https://review.gluster.org/17208
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Smoke: Gluster Build System <jenkins.org>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.12.0, please open a new bug report. glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html [2] https://www.gluster.org/pipermail/gluster-users/