+++ This bug was initially created as a clone of Bug #1145471 +++

Description of problem:

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Anand Avati on 2014-09-23 09:23:09 EDT ---

REVIEW: http://review.gluster.org/8821 (cluster/afr: Fixed mem leaks in self-heal code path.) posted (#1) for review on master by Anuradha Talur (atalur)

--- Additional comment from Anand Avati on 2014-09-24 00:12:21 EDT ---

REVIEW: http://review.gluster.org/8821 (cluster/afr: Fixed mem leaks in self-heal code path.) posted (#2) for review on master by Anuradha Talur (atalur)

--- Additional comment from Anand Avati on 2014-09-24 02:16:47 EDT ---

COMMIT: http://review.gluster.org/8821 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit 3b871bee4a0ad3bc8b393ba23bfcf3ad6886cf42
Author: Anuradha <atalur>
Date:   Tue Sep 23 18:24:09 2014 +0530

    cluster/afr: Fixed mem leaks in self-heal code path.

    AFR_STACK_RESET previously didn't clean up afr_local_t, leading
    to memory leaks. With this patch, cleanup is done.

    All credit goes to Pranith Kumar Karampuri.

    Change-Id: I3c727ff4bb323dccb81da4b3168ac69bb340d17d
    BUG: 1145471
    Signed-off-by: Anuradha <atalur>
    Reviewed-on: http://review.gluster.org/8821
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Pranith Kumar Karampuri <pkarampu>
REVIEW: http://review.gluster.org/8831 (cluster/afr: Fixed mem leaks in self-heal code path.) posted (#1) for review on release-3.6 by Anuradha Talur (atalur)
COMMIT: http://review.gluster.org/8831 committed in release-3.6 by Vijay Bellur (vbellur)
------
commit a8fe2d3f41c66131dd11dd506b4068ff9fb68db1
Author: Anuradha <atalur>
Date:   Tue Sep 23 18:24:09 2014 +0530

    cluster/afr: Fixed mem leaks in self-heal code path.

    Backport of: http://review.gluster.org/8821

    AFR_STACK_RESET previously didn't clean up afr_local_t, leading
    to memory leaks. With this patch, cleanup is done.

    All credit goes to Pranith Kumar Karampuri.

    Change-Id: I26506dfd9273b917eff5127c3e0cf9421e60f228
    BUG: 1145914
    Reviewed-on: http://review.gluster.org/8831
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Reviewed-by: Vijay Bellur <vbellur>
    Tested-by: Gluster Build System <jenkins.com>
A beta release for GlusterFS 3.6.0 has been made available [1]. Please verify whether this release resolves the issue reported in this bug. If the glusterfs-3.6.0beta2 release does not resolve it, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-September/018883.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
REVIEW: http://review.gluster.org/8876 (cluster/afr: Fix inode leak) posted (#1) for review on release-3.6 by Krutika Dhananjay (kdhananj)
COMMIT: http://review.gluster.org/8876 committed in release-3.6 by Vijay Bellur (vbellur)
------
commit 369f59a91e2aee13a6e12ef78e7188f29a819ff7
Author: Krutika Dhananjay <kdhananj>
Date:   Mon Sep 29 08:48:40 2014 +0530

    cluster/afr: Fix inode leak

    Backport of: http://review.gluster.org/8875

    Change-Id: Ib000be1238d38f8d63ff25b3873bb813bf72beec
    BUG: 1145914
    Signed-off-by: Krutika Dhananjay <kdhananj>
    Reviewed-on: http://review.gluster.org/8876
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users