Description of problem: One of the common problems we encounter is frequent connects/disconnects. A disconnect can be either:
1. voluntary, where the process calls shutdown(2)/close(2) on an otherwise healthy socket connection, or
2. involuntary, where we get a POLLERR event from the network.
While debugging this class of issues, it would help if we could identify which of the two categories above a particular disconnect falls into (a rough sketch of the two paths follows below). We need to add enough log messages to help us classify them.
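To make the distinction concrete, here is a minimal sketch in plain POSIX C of the two disconnect paths and the kind of log line each could emit. This is illustrative only, not GlusterFS code; the function names teardown_voluntary() and handle_pollerr() are made up for this example.

/* Illustrative sketch: the two disconnect classes described above,
 * with a log message at each path. Not GlusterFS code. */
#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Voluntary: the process decides to tear down an otherwise healthy socket. */
static void teardown_voluntary(int sock)
{
        fprintf(stderr, "disconnect (voluntary): shutting down sock=%d\n", sock);
        shutdown(sock, SHUT_RDWR);
        close(sock);
}

/* Involuntary: the event loop is told about a network error via POLLERR. */
static void handle_pollerr(int sock)
{
        struct pollfd pfd = { .fd = sock, .events = POLLIN };

        if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLERR)) {
                int       err = 0;
                socklen_t len = sizeof(err);

                getsockopt(sock, SOL_SOCKET, SO_ERROR, &err, &len);
                fprintf(stderr, "disconnect (involuntary): sock=%d error=%s\n",
                        sock, strerror(err));
                close(sock);
        }
}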
REVIEW: https://review.gluster.org/16732 (rpc: log more about socket disconnects) posted (#1) for review on master by Milind Changire (mchangir)
REVIEW: https://review.gluster.org/16732 (rpc: log more about socket disconnects) posted (#2) for review on master by Milind Changire (mchangir)
COMMIT: https://review.gluster.org/16732 committed in master by Jeff Darcy (jdarcy)
------
commit 67a35ac54bfd61a920c1919fbde588a04ac3358a
Author: Milind Changire <mchangir>
Date:   Thu Feb 23 17:58:46 2017 +0530

    rpc: log more about socket disconnects

    Log more about the different paths leading to socket disconnect for
    ease of debugging. Log via gf_log_callingfn() in __socket_disconnect()
    at loglevel TRACE if socket connection is being torn down.

    Change-Id: I1e551c2d685784b5ec747f481179f64d524c0461
    BUG: 1426125
    Signed-off-by: Milind Changire <mchangir>
    Reviewed-on: https://review.gluster.org/16732
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jdarcy>
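For readers unfamiliar with gf_log_callingfn(), the point of the change is to log not just that a disconnect happened but which code path asked for it. Below is a rough, self-contained sketch of that mechanism using glibc's backtrace facilities; the names trace_disconnect() and LOG_DISCONNECT are hypothetical and this is not the merged patch.

/* Sketch of a "callingfn"-style log: record which code path requested
 * the disconnect. trace_disconnect()/LOG_DISCONNECT are hypothetical
 * names, not GlusterFS APIs. */
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

static void trace_disconnect(const char *caller, int sock)
{
        void  *frames[8];
        int    n = backtrace(frames, 8);
        char **symbols = backtrace_symbols(frames, n);

        fprintf(stderr, "TRACE: %s: tearing down socket connection, sock=%d\n",
                caller, sock);
        if (symbols) {
                for (int i = 0; i < n; i++)
                        fprintf(stderr, "TRACE:   frame %d: %s\n", i, symbols[i]);
                free(symbols);
        }
}

/* Use at every disconnect site so the caller shows up in the trace log. */
#define LOG_DISCONNECT(sock) trace_disconnect(__func__, (sock))

Linking with -rdynamic makes the symbol names in the backtrace readable.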
REVIEW: https://review.gluster.org/17321 (rpc: log more about socket disconnects) posted (#1) for review on release-3.10 by Milind Changire (mchangir)
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.0, please open a new bug report. glusterfs-3.11.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html [2] https://www.gluster.org/pipermail/gluster-users/