Bug 1635100 - Fix glusterd memory leak caused by running "gluster volume status volume_name --detail" continuously (cli)
Summary: Fix glusterd memory leak caused by running "gluster volume status volume_name --detail" continuously
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: cli
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.4.z Batch Update 2
Assignee: Sanju
QA Contact: Upasana
URL:
Whiteboard:
Depends On:
Blocks: 1635480
 
Reported: 2018-10-02 06:34 UTC by Yaniv Kaul
Modified: 2018-12-17 17:07 UTC (History)
8 users

Fixed In Version: glusterfs-3.12.2-27
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1635480
Environment:
Last Closed: 2018-12-17 17:07:04 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Gluster.org Gerrit 21316 0 None None None 2018-10-02 13:43:00 UTC
Red Hat Bugzilla 1651915 0 low CLOSED On running "gluster volume status <vol_name> detail" continuously in the background, seeing glusterd memory leak 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHBA-2018:3827 0 None None None 2018-12-17 17:07:16 UTC

Internal Links: 1651915

Description Yaniv Kaul 2018-10-02 06:34:22 UTC
Description of problem:
We use "gluster volume status volume_name --detail" every 15 seconds in our product to check the glusterfsd process status, and this causes glusterd's memory usage to increase continuously.

After 20 days, memory usage is at 6%, while in the beginning it was just 0.9%; from the statedump, glusterd grows by about 3 MB a day.

  USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND

root      2078  0.4  6.2 662084 128564 ?       Ssl  Sep11 124:01 /usr/sbin/glusterd --vol

 

Could you check my patch? I have already tested it: the glusterd memory leak stops growing, and memory usage stays around that value for 12 hours.

 

root      2132  0.3  1.2 662080 25152 ?        Ssl  16:05   0:24 /usr/sbin/glusterd --volfile=/opt/nokia/libexec/StorageUtils/etc/glusterd/glusterd.vol -p /run/glusterd.pid

root      2132  0.3  1.2 662080 25152 ?        Ssl  Oct01   3:00 /usr/sbin/glusterd --volfile=/opt/nokia/libexec/StorageUtils/etc/glusterd/glusterd.vol -p /run/glusterd.pid

root      2132  0.3  1.2 662080 25140 ?        Ssl  Oct01   3:05 /usr/sbin/glusterd --volfile=/opt/nokia/libexec/StorageUtils/etc/glusterd/glusterd.vol -p /run/glusterd.pid
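
For reference, a loop along these lines reproduces the pattern described above (a minimal sketch: the volume name is a placeholder, and the RSS sampling via ps is added here purely for illustration, not part of the product check):

  #!/bin/bash
  # Poll volume status every 15 s, as the product does, and record
  # glusterd resident memory so growth is visible over time.
  VOL=volume_name   # placeholder: substitute a real volume name
  while true; do
      gluster volume status "$VOL" detail > /dev/null
      echo "$(date '+%F %T') glusterd RSS (kB): $(ps -o rss= -C glusterd)"
      sleep 15
  done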

root cause:

In the "gluster v status" code path, the CLI response is allocated by glusterd and should be freed by the CLI, but the CLI does not free it, so glusterd's memory keeps increasing.

gf_cli_status_cbk

--- a/cli/src/cli-rpc-ops.c
+++ b/cli/src/cli-rpc-ops.c
@@ -8436,6 +8436,7 @@
         ret = rsp.op_ret;

 out:
+        FREE(rsp.dict.dict_val);
         if (dict)
                 dict_unref (dict);
         GF_FREE (status.brick);
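
(Presumably FREE() rather than GF_FREE() is appropriate here because rsp.dict.dict_val is filled in by the XDR decoding routines via plain malloc, not through gluster's memory-accounting GF_MALLOC/GF_CALLOC wrappers that GF_FREE pairs with.)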

Comment 1 Atin Mukherjee 2018-10-02 13:35:05 UTC
upstream patch : https://review.gluster.org/#/c/21316/

Comment 11 Sanju 2018-11-21 06:03:24 UTC
Upasana,

This bug was reported by an upstream user (Yaniv raised this bug to track it). In the description, the user mentions a growth of about 3 MB per day.

Upstream, perhaps because of the many Coverity fixes, we may have fixed a number of resource-leak issues, which would explain why the memory leak is smaller there. (Coverity fixes are not backported downstream.)

To test this, please follow the steps below (see the script sketch after the list):

1. Run "gluster v status detail" in a loop for 12 hours or so on RHGS-3.4.0
   or RHGS-3.4.1.
2. Note glusterd's memory usage before and after running the above command
   in the loop.
3. Now repeat step 1 on the RHGS-3.4.2 build.
4. The memory leakage should be no more than what we have seen on RHGS-3.4.0
   or RHGS-3.4.1.
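
A sketch of steps 1 and 2 as a script (assuming a bash shell; the 15-second interval follows the original report, and the output redirect just keeps the terminal quiet):

  #!/bin/bash
  # Run the status command in a loop for 12 hours and record glusterd
  # RSS before and after, for comparison across RHGS builds.
  echo "glusterd RSS before (kB): $(ps -o rss= -C glusterd)"
  end=$((SECONDS + 12*60*60))
  while [ "$SECONDS" -lt "$end" ]; do
      gluster v status detail > /dev/null
      sleep 15
  done
  echo "glusterd RSS after (kB):  $(ps -o rss= -C glusterd)"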

Thanks,
Sanju

Comment 17 errata-xmlrpc 2018-12-17 17:07:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3827

