Bug 1352854
| Summary: | GlusterFS - Memory Leak - High Memory Utilization | | |
| --- | --- | --- | --- |
| Product: | [Community] GlusterFS | Reporter: | Mohit Agrawal <moagrawa> |
| Component: | glusterd | Assignee: | Mohit Agrawal <moagrawa> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | urgent | | |
| Version: | mainline | CC: | amukherj, bugs, kaushal, moagrawa, rkavunga, uganit |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.9.0 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1329335 | Environment: | |
| Last Closed: | 2017-03-27 18:17:28 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1329335 | | |
| Bug Blocks: | | | |
Description
Mohit Agrawal
2016-07-05 09:29:34 UTC
REVIEW: http://review.gluster.org/14862 (Memory leak in gluster volume profile command) posted (#2) for review on master by MOHIT AGRAWAL (moagrawa)

REVIEW: http://review.gluster.org/14862 (cli: Modify the code to cleanup memory leak in gluster volume profile command) posted (#3) for review on master by MOHIT AGRAWAL (moagrawa)

REVIEW: http://review.gluster.org/14862 (cli: Modify the code to cleanup memory leak in gluster volume profile command) posted (#4) for review on master by Niels de Vos (ndevos)

REVIEW: http://review.gluster.org/14862 (cli: Modify the code to cleanup memory leak in gluster volume profile command) posted (#5) for review on master by MOHIT AGRAWAL (moagrawa)

REVIEW: http://review.gluster.org/14862 (cli: Modify the code to cleanup memory leak in glusterd and cli) posted (#6) for review on master by MOHIT AGRAWAL (moagrawa)

REVIEW: http://review.gluster.org/14862 (glusterd: Modify the code to cleanup memory leak in glusterd) posted (#7) for review on master by MOHIT AGRAWAL (moagrawa)

REVIEW: http://review.gluster.org/14862 (glusterd: Fix memory leak in glusterd (un)lock RPCs) posted (#8) for review on master by MOHIT AGRAWAL (moagrawa)

COMMIT: http://review.gluster.org/14862 committed in master by Atin Mukherjee (amukherj)

------

commit 07b95cf8104da42d783d053d0fbb8497399f7d00
Author: root <root.eng.blr.redhat.com>
Date: Tue Jul 5 14:33:15 2016 +0530

glusterd: Fix memory leak in glusterd (un)lock RPCs

Problem: Running the "gluster volume profile <vol> info" command leaks memory in glusterd.

Solution: Modify the code to prevent the memory leak in glusterd.

Fix:
1) Unref the dict and free the dict_val buffer in glusterd_mgmt_v3_lock_peer and glusterd_mgmt_v3_unlock_peers (a sketch of this cleanup pattern appears at the end of this report).

Test: To verify the patch, run the loop below to generate I/O traffic:

```
for (( i=0; i<=1000000; i++ )); do echo "hi Start Line " > file$i; cat file$i >> /dev/null; done
```

To verify the improvement in glusterd's memory use, run the command below:

```
cnt=0; while [ $cnt -le 1000 ]; do pmap -x <glusterd-pid> | grep total; gluster volume profile distributed info > /dev/null; cnt=`expr $cnt + 1`; done
```

After applying this patch, the leak is reduced significantly.

Change-Id: I52a0ca47adb20bfe4b1848a11df23e5e37c5cea9
BUG: 1352854
Signed-off-by: Mohit Agrawal <moagrawa>
Reviewed-on: http://review.gluster.org/14862
Reviewed-by: Atin Mukherjee <amukherj>
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
Reviewed-by: Prashanth Pai <ppai>
CentOS-regression: Gluster Build System <jenkins.org>

This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/
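For illustration, here is a minimal C sketch of the cleanup pattern the fix describes: unref the dict and free the serialized dict_val buffer on every exit path. It assumes the libglusterfs dict API (dict_allocate_and_serialize, dict_unref, GF_FREE); the wrapper function name and control flow are hypothetical, not the actual glusterd source.

```c
/*
 * Hypothetical sketch of the leak fix, assuming the libglusterfs dict API.
 * Only dict_allocate_and_serialize, dict_unref, and GF_FREE are real
 * library calls; everything else is illustrative.
 */
#include <glusterfs/dict.h>     /* dict_t, dict_unref, dict_allocate_and_serialize */
#include <glusterfs/mem-pool.h> /* GF_FREE */

static int
mgmt_v3_lock_send_sketch(dict_t *dict) /* hypothetical; caller hands over one dict ref */
{
    int ret = -1;
    char *dict_val = NULL;
    u_int dict_len = 0;

    /* Serialization allocates dict_val on the heap. */
    ret = dict_allocate_and_serialize(dict, &dict_val, &dict_len);
    if (ret)
        goto out;

    /* ... hand dict_val/dict_len to the RPC layer here ... */

    ret = 0;
out:
    /*
     * The fix: release both resources on every exit path. Without this,
     * each "gluster volume profile <vol> info" run leaked the serialized
     * buffer and a dict reference in glusterd.
     */
    if (dict_val)
        GF_FREE(dict_val);
    if (dict)
        dict_unref(dict);
    return ret;
}
```

The key design point is the single out: label: every early return funnels through it, so neither the serialized buffer nor the dict reference can be leaked by a forgotten error path.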