Backport of https://bugzilla.redhat.com/show_bug.cgi?id=1352854 to 3.7.
REVIEW: http://review.gluster.org/15081 (glusterd: Fix memory leak in glusterd (un)lock RPCs) posted (#1) for review on release-3.7 by Oleksandr Natalenko (oleksandr)
Hi, for 3.7 a bugzilla (https://bugzilla.redhat.com/show_bug.cgi?id=1329335) is already opened. Thanks & Regards, Mohit Agrawal
COMMIT: http://review.gluster.org/15081 committed in release-3.7 by Atin Mukherjee (amukherj)
------
commit 058ccf9520794280f3fc254de00e3f604e3cfbb7
Author: root <root.eng.blr.redhat.com>
Date: Tue Jul 5 14:33:15 2016 +0530

glusterd: Fix memory leak in glusterd (un)lock RPCs

Problem: Running the "gluster volume profile <vol> info" command leaks memory in glusterd.

Solution: Modify the code to prevent the memory leak in glusterd.

Fix:
1) Unref the dict and free the dict_val buffer in glusterd_mgmt_v3_lock_peer and glusterd_mgmt_v3_unlock_peers.

Test: To verify the patch, run the loop below to generate I/O traffic:

for (( i=0 ; i<=1000000 ; i++ )); do echo "hi Start Line " > file$i; cat file$i >> /dev/null; done

To verify the improvement in the glusterd memory leak, run:

cnt=0; while [ $cnt -le 1000 ]; do pmap -x <glusterd-pid> | grep total; gluster volume profile distributed info > /dev/null; cnt=`expr $cnt + 1`; done

After applying this patch, the leak is reduced significantly.

> Reviewed-on: http://review.gluster.org/14862
> Smoke: Gluster Build System <jenkins.org>
> CentOS-regression: Gluster Build System <jenkins.org>
> NetBSD-regression: NetBSD Build System <jenkins.org>
> Reviewed-by: Atin Mukherjee <amukherj>
> Reviewed-by: Prashanth Pai <ppai>

BUG: 1363747
Change-Id: I52a0ca47adb20bfe4b1848a11df23e5e37c5cea9
Signed-off-by: Mohit Agrawal <moagrawa>
Signed-off-by: Oleksandr Natalenko <oleksandr>
Reviewed-on: http://review.gluster.org/15081
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.15, please open a new bug report. glusterfs-3.7.15 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] https://www.gluster.org/pipermail/gluster-devel/2016-September/050714.html [2] https://www.gluster.org/pipermail/gluster-users/