Description of problem: An oddity was found when analyzing the code in rpc-clnt.c, function rpc_clnt_notify:

    (..)
    rpc_clnt_ref (clnt);
    conn->reconnect = gf_timer_call_after (clnt->ctx, ts,
                                           rpc_clnt_reconnect, conn);
    if (conn->reconnect == NULL) {
        gf_log (conn->name, GF_LOG_WARNING,
                "Cannot create rpc_clnt_reconnect timer");
        unref_clnt = _gf_true;
    }
    (..)
    if (unref_clnt)
        rpc_clnt_ref (clnt);
    (..)

If we were not able to create the reconnect timer task, the RPC client's reference count should be decreased, not increased.
REVIEW: http://review.gluster.org/15969 (rpc: fix obvious typo in cleanup code in rpc_clnt_notify) posted (#1) for review on master by Anonymous Coward (mateusz.slupny)
REVIEW: http://review.gluster.org/15969 (rpc: fix obvious typo in cleanup code in rpc_clnt_notify) posted (#2) for review on master by Mateusz Slupny (mateusz.slupny)
REVIEW: http://review.gluster.org/15969 (rpc: fix obvious typo in cleanup code in rpc_clnt_notify) posted (#3) for review on master by Mateusz Slupny (mateusz.slupny)
COMMIT: https://review.gluster.org/15969 committed in master by Vijay Bellur (vbellur)

    commit e30af139739e3a6e587d77a9af999035fe20dc37
    Author: Mateusz Slupny <mateusz.slupny>
    Date:   Tue Nov 29 12:09:49 2016 +0100

        rpc: fix obvious typo in cleanup code in rpc_clnt_notify

        Change-Id: I003e38b238704d3345d46688355bcf3702455ba1
        BUG: 1399593
        Signed-off-by: Mateusz Slupny <mateusz.slupny>
        [ndevos: rebased after I8ff5d1a32 moved the code around]

    Reviewed-on: https://review.gluster.org/15969
    Reviewed-by: Niels de Vos <ndevos>
    Tested-by: Niels de Vos <ndevos>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Prashanth Pai <ppai>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Vijay Bellur <vbellur>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/