Bug 1635480 - glusterd memory leak when "gluster volume status volume_name --detail" is run continuously (cli)
Summary: glusterd memory leak when "gluster volume status volume...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: cli
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1635100
Blocks:
 
Reported: 2018-10-03 04:17 UTC by Atin Mukherjee
Modified: 2019-03-25 16:31 UTC
CC List: 8 users

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1635100
Environment:
Last Closed: 2019-03-25 16:31:10 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:




Links
System              ID     Private  Priority  Status  Summary  Last Updated
Gluster.org Gerrit  21316  0        None      None    None     2018-10-03 04:17:37 UTC

Description Atin Mukherjee 2018-10-03 04:17:37 UTC
+++ This bug was initially created as a clone of Bug #1635100 +++

Description of problem:
Our product runs "gluster volume status volume_name --detail" every 15 seconds to check the status of the glusterfsd processes, and this causes glusterd's memory usage to grow continuously.
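
For reference, a minimal stand-in for that polling loop could look like the sketch below; the volume name, the 15-second interval, and the use of system() are placeholders for whatever the product actually does.

/* Reproduction sketch: poll "gluster volume status ... --detail" every
 * 15 seconds, the pattern that makes glusterd's memory grow.  Watch the
 * daemon in parallel, e.g. with "ps -o rss -p <glusterd pid>" or with
 * periodic statedumps. */
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    for (;;) {
        /* output is discarded; only the request/response cycle matters */
        system("gluster volume status volume_name --detail > /dev/null 2>&1");
        sleep(15);
    }
}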

After 20 days glusterd's memory usage is at 6%, whereas it was only 0.9% in the beginning; based on statedumps, glusterd grows by roughly 3 MB per day.

  USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND

root      2078  0.4  6.2 662084 128564 ?       Ssl  Sep11 124:01 /usr/sbin/glusterd --vol

 

Could you check my patch? I have already tested it: with the patch applied, the glusterd memory leak stops and memory usage has stayed around that value for 12 hours.

 

root      2132  0.3  1.2 662080 25152 ?        Ssl  16:05   0:24 /usr/sbin/glusterd --volfile=/opt/nokia/libexec/StorageUtils/etc/glusterd/glusterd.vol -p /run/glusterd.pid

root      2132  0.3  1.2 662080 25152 ?        Ssl  Oct01   3:00 /usr/sbin/glusterd --volfile=/opt/nokia/libexec/StorageUtils/etc/glusterd/glusterd.vol -p /run/glusterd.pid

root      2132  0.3  1.2 662080 25140 ?        Ssl  Oct01   3:05 /usr/sbin/glusterd --volfile=/opt/nokia/libexec/StorageUtils/etc/glusterd/glusterd.vol -p /run/glusterd.pid

root cause:

In the "gluster v status" code path, the response buffer carrying glusterd's reply should be freed by the cli, but the cli does not free it, so memory keeps increasing with every status query.

gf_cli_status_cbk

--- a/cli/src/cli-rpc-ops.c
+++ b/cli/src/cli-rpc-ops.c
@@ -8436,6 +8436,7 @@
         ret = rsp.op_ret;
 
 out:
+        FREE(rsp.dict.dict_val);
         if (dict)
                 dict_unref (dict);
         GF_FREE (status.brick);
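
To make the ownership rule behind that one-line change concrete: once the reply has been decoded, the callback owns the heap-allocated buffer (rsp.dict.dict_val) and has to release it in its cleanup path, in addition to unreffing the dict built from it. The sketch below is not the real cli-rpc-ops.c code; decode_reply() and handle_status_reply() are made-up stand-ins for the XDR decode step and for gf_cli_status_cbk(), using plain malloc/free to model the same contract.

/* Illustration only: models the cli status callback's cleanup path.
 * The real code lives in cli/src/cli-rpc-ops.c and uses the gluster
 * dict and XDR helpers; here plain malloc/free model the same rule:
 * the callback owns the decoded reply buffer and must free it before
 * returning, otherwise every status query leaks one copy. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct reply {
    char  *dict_val;   /* heap buffer filled in by the decode step */
    size_t dict_len;
};

/* Stand-in for the XDR decode: hands ownership of dict_val to the caller. */
static int decode_reply(struct reply *rsp)
{
    const char *payload = "volume1.status=Started";

    rsp->dict_len = strlen(payload) + 1;
    rsp->dict_val = malloc(rsp->dict_len);
    if (!rsp->dict_val)
        return -1;
    memcpy(rsp->dict_val, payload, rsp->dict_len);
    return 0;
}

/* Stand-in for gf_cli_status_cbk(): unpack the reply, then clean up. */
static int handle_status_reply(void)
{
    struct reply rsp = {0};
    char *parsed = NULL;
    int ret = -1;

    if (decode_reply(&rsp) != 0)
        goto out;

    /* Stand-in for building a dict from the raw buffer. */
    parsed = strdup(rsp.dict_val);
    if (!parsed)
        goto out;

    printf("status reply: %s\n", parsed);
    ret = 0;

out:
    free(rsp.dict_val);   /* the fix: without this, the decoded buffer leaks */
    free(parsed);         /* analogous to dict_unref() on the built dict */
    return ret;
}

int main(void)
{
    /* Each iteration models one "gluster volume status --detail" poll. */
    for (int i = 0; i < 3; i++)
        handle_status_reply();
    return 0;
}

In the actual callback the same rule is expressed by the added FREE(rsp.dict.dict_val) sitting next to the existing dict_unref (dict) and GF_FREE (status.brick) calls shown in the diff above.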

--- Additional comment from Atin Mukherjee on 2018-10-02 09:35:05 EDT ---

upstream patch : https://review.gluster.org/#/c/21316/

Comment 1 Worker Ant 2018-10-03 04:53:04 UTC
REVIEW: https://review.gluster.org/21316 (cli: fix glusterd memory leak cause by \"gluster v status volume_name\") posted (#2) for review on master by Atin Mukherjee

Comment 2 Worker Ant 2018-10-03 08:44:10 UTC
REVIEW: https://review.gluster.org/21316 (cli: fix glusterd memory leak cause by \"gluster v status volume_name\") posted (#2) for review on master by Atin Mukherjee

Comment 3 Atin Mukherjee 2018-10-05 02:17:25 UTC
Interestingly the bot didn't move this BZ to modified even though the patch is merged.

Comment 4 Shyamsundar 2019-03-25 16:31:10 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

