REVIEW: https://review.gluster.org/17326 (rpc: avoid logging success on failure) posted (#1) for review on release-3.10 by Milind Changire (mchangir)
Description of problem: The log message reports the error code as "Success" even when the RPC connection fails:

[2016-10-20 14:27:14.474001] E [MSGID: 104024] [glfs-mgmt.c:735:mgmt_rpc_notify] 0-glfs-mgmt: failed to connect with remote-host: 10.70.46.212 (Success)
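For context, the stray "(Success)" comes from translating an errno that was never set for this failure: on glibc, strerror(0) returns "Success". A minimal standalone illustration of that behavior (not GlusterFS code; the host string and failure flag below are made up):

/*
 * Illustration only: formatting a failure message with strerror(errno)
 * when errno was never set produces "(Success)" on glibc.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    errno = 0;              /* the RPC layer failed without setting errno */
    int connect_failed = 1; /* hypothetical failure flag */

    if (connect_failed)
        fprintf(stderr,
                "failed to connect with remote-host: %s (%s)\n",
                "10.70.46.212", strerror(errno)); /* prints "(Success)" */
    return 0;
}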
COMMIT: https://review.gluster.org/17326 committed in release-3.10 by Raghavendra Talur (rtalur)
------
commit 6df94e06cc4dcb60756ac49bd751c4cf95999703
Author: Milind Changire <mchangir>
Date: Sun Mar 5 21:39:20 2017 +0530

    rpc: avoid logging success on failure

    Avoid logging Success in the event of failure, especially when errno
    has no meaningful value w.r.t. the failure. In this case errno is set
    to zero when there's indeed a failure at the RPC level.

    mainline:
    > BUG: 1426032
    > Signed-off-by: Milind Changire <mchangir>
    > Reviewed-on: https://review.gluster.org/16730
    > Smoke: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > Reviewed-by: N Balachandran <nbalacha>
    > Reviewed-by: Jeff Darcy <jdarcy>
    (cherry picked from commit 89c6bedc1c2e978f67ca29f212a357984cd8a2dd)

    Change-Id: If2cc81aa1e590023ed22892dacbef7cac213e591
    BUG: 1451995
    Signed-off-by: Milind Changire <mchangir>
    Reviewed-on: https://review.gluster.org/17326
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Raghavendra Talur <rtalur>
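The commit message describes skipping the errno translation when errno carries no information about the RPC-level failure. A hedged sketch of that pattern follows; the function name and messages are purely illustrative, and the actual change is in the Gerrit reviews linked above, not reproduced here:

/*
 * Sketch of the "don't log Success on failure" pattern, under the
 * assumption that the caller passes the errno it saw (possibly 0).
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>

static void log_connect_failure(const char *host, int saved_errno)
{
    if (saved_errno != 0) {
        /* errno is meaningful for this failure: translate it as usual */
        fprintf(stderr, "failed to connect with remote-host: %s (%s)\n",
                host, strerror(saved_errno));
    } else {
        /* errno was never set for this RPC-level failure; printing
         * strerror(0) would say "Success", so omit the translation */
        fprintf(stderr, "failed to connect with remote-host: %s\n", host);
    }
}

int main(void)
{
    log_connect_failure("10.70.46.212", 0);            /* RPC failed, errno unset */
    log_connect_failure("10.70.46.212", ECONNREFUSED); /* errno is meaningful */
    return 0;
}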
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.3, please open a new bug report. glusterfs-3.10.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/gluster-users/2017-June/031399.html [2] https://www.gluster.org/pipermail/gluster-users/