Description of problem:
If a service is not connected to glusterd for some reason, and a message then needs to be sent to that service over RPC, the RPC submission fails and the whole glusterd process crashes.

Version-Release number of selected component (if applicable):

How reproducible: 100%

Steps to Reproduce:
1.
2.
3.

Actual results:
Connection failed. Please check if gluster daemon is operational.

Expected results:
glusterd should report an error without crashing.

Additional info:
REVIEW: http://review.gluster.org/13854 (glusterd/syncop: double free of frame stack) posted (#1) for review on master by mohammed rafi kc (rkavunga)
REVIEW: http://review.gluster.org/13854 (glusterd/syncop: double free of frame stack) posted (#2) for review on master by mohammed rafi kc (rkavunga)
COMMIT: http://review.gluster.org/13854 committed in master by Jeff Darcy (jdarcy)
------
commit 8dfbb6751b2f421fb179ecf6abf803fbe983350e
Author: Mohammed Rafi KC <rkavunga>
Date: Wed Mar 30 17:42:44 2016 +0530

    glusterd/syncop: double free of frame stack

    If an RPC message from glusterd during the brick-op phase fails
    before being sent, the frame was freed in both the caller function
    and the callback function, resulting in a double free.

    Change-Id: I63cb3be30074e9a074f6895faa25b3d091f5b6a5
    BUG: 1322262
    Signed-off-by: Mohammed Rafi KC <rkavunga>
    Reviewed-on: http://review.gluster.org/13854
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Jeff Darcy <jdarcy>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user