REVIEW: http://review.gluster.org/5392 (fuse: fix memory leak in fuse_getxattr()) posted (#1) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/5393 (afr: check for non-zero call_count before doing a stack wind) posted (#2) for review on master by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/5393 (afr: check for non-zero call_count before doing a stack wind) posted (#3) for review on master by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/5392 committed in master by Anand Avati (avati) ------ commit b777fc478d74b2582671fef7cb2c55206432c2bb Author: Ravishankar N <ravishankar> Date: Wed Jul 24 18:44:42 2013 +0000 fuse: fix memory leak in fuse_getxattr() The fuse_getxattr() function was not freeing fuse_state_t, resulting in a memory leak. As a result, when continuous writes (running dd in a loop) were done from a FUSE mount point, the OOM killer killed the client process (glusterfs). Change-Id: I6ded1a4c25d26ceab0cb3b89ac81066cb51343ec BUG: 988182 Signed-off-by: Ravishankar N <ravishankar> Reviewed-on: http://review.gluster.org/5392 Reviewed-by: Pranith Kumar Karampuri <pkarampu> Tested-by: Gluster Build System <jenkins.com> Reviewed-by: Anand Avati <avati>
COMMIT: http://review.gluster.org/5393 committed in master by Anand Avati (avati) ------ commit 0f77e30c903e6f71f30dfd6165914a43998a164f Author: Ravishankar N <ravishankar> Date: Wed Jul 24 19:11:49 2013 +0000 afr: check for non-zero call_count before doing a stack wind When one of the bricks of a 1x2 replicate volume is down, writes to the volume cause a race between afr_flush_wrapper() and afr_flush_cbk(). The latter frees the call_frame's local variables in the unwind, while the former accesses them in the for loop and sends a stack wind a second time. This causes the FUSE mount process (glusterfs) to receive a SIGSEGV when the corresponding unwind is hit. This patch adds back the call_count check which was removed when afr_flush_wrapper() was introduced in commit 29619b4e. Change-Id: I87d12ef39ea61cc4c8244c7f895b7492b90a7042 BUG: 988182 Signed-off-by: Ravishankar N <ravishankar> Reviewed-on: http://review.gluster.org/5393 Tested-by: Gluster Build System <jenkins.com> Reviewed-by: Pranith Kumar Karampuri <pkarampu> Reviewed-by: Anand Avati <avati>
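The race here is the classic fan-out problem: the wrapper winds to each up child in a loop, but the last reply (which can arrive synchronously) frees the frame's local data, so any loop iteration after the final wind touches freed memory. The fix pattern is to keep call_count in a stack variable and break out of the loop as soon as the last wind has been issued. Below is a minimal self-contained sketch of that pattern; fake_local_t, flush_wrapper, and wind_to_child are hypothetical simplifications of afr_local_t, afr_flush_wrapper(), and STACK_WIND, not the real AFR code.

```c
#include <assert.h>

#define CHILD_COUNT 2 /* a 1x2 replicate volume, as in the bug report */

/* Hypothetical miniature of AFR's per-frame state. */
typedef struct {
    int child_up[CHILD_COUNT]; /* which bricks are up */
    int pending_calls;         /* replies still outstanding */
    int freed;                 /* set when the final reply frees the frame */
} fake_local_t;

static int touched_after_free = 0; /* records the use-after-free, if any */

/* Simulates a wind whose reply arrives immediately: the last reply
 * "frees" local, as afr_flush_cbk() does in its unwind. */
static void
wind_to_child(fake_local_t *local)
{
    if (--local->pending_calls == 0)
        local->freed = 1; /* real code would STACK_UNWIND and free */
}

/* Flush wrapper. The fix keeps call_count in a stack copy and breaks
 * after the last wind, so `local` is never read once the final
 * synchronous reply may have freed it. */
static void
flush_wrapper(fake_local_t *local)
{
    int call_count = local->pending_calls; /* copy before winding */
    for (int i = 0; i < CHILD_COUNT; i++) {
        if (local->freed)
            touched_after_free = 1; /* the SIGSEGV the patch prevents */
        if (!local->child_up[i])
            continue;
        wind_to_child(local);
        if (--call_count == 0) /* restored non-zero call_count check */
            break;             /* do not touch local again */
    }
}
```

With one brick down (child_up = {1, 0}, pending_calls = 1) the single wind completes and frees the frame immediately; without the break, the next loop iteration would read the freed local, which is the crash described in the commit message.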
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report. glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137 [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user