Bug 1635100
| Summary: | Correction for glusterd memory leak caused by running "gluster volume status volume_name --detail" continuously (cli) | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Yaniv Kaul <ykaul> |
| Component: | cli | Assignee: | Sanju <srakonde> |
| Status: | CLOSED ERRATA | QA Contact: | Upasana <ubansal> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | unspecified | CC: | amukherj, rhs-bugs, sanandpa, sankarshan, sheggodu, srakonde, storage-qa-internal, ubansal |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.4.z Batch Update 2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.12.2-27 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1635480 | Environment: | |
| Last Closed: | 2018-12-17 17:07:04 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1635480 | | |
upstream patch : https://review.gluster.org/#/c/21316/

Upasana,

This bug was reported by an upstream user (Yaniv raised this bug to track it). In the description, the user mentions a growth of about 3 MB per day. Upstream may be showing a smaller leak because many Coverity resource-leak fixes have landed there, and those Coverity fixes are not backported into downstream.

To test this, please follow the steps below (a monitoring sketch follows this comment):

1. Run the "gluster v status detail" command in a loop for 12 hours or so on RHGS-3.4.0 or RHGS-3.4.1.
2. Keep a note of glusterd's memory usage before and after running the above command in a loop.
3. Repeat step 1 on the RHGS-3.4.2 build.
4. The memory leakage should not be more than what was seen on RHGS-3.4.0 or RHGS-3.4.1.

Thanks,
Sanju

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3827
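For steps 1 and 2 above, a minimal shell sketch of the measurement loop (the volume name "testvol", the 15-second interval, and the log path are hypothetical choices for illustration, not part of the bug report):

```
# Run the status command in a loop and log glusterd's RSS on each
# iteration, so memory usage before and after the window can be compared.
# Assumes pidof finds a running glusterd and a volume named "testvol".
GLUSTERD_PID=$(pidof glusterd)
while true; do
    gluster volume status testvol detail > /dev/null
    # VmRSS (resident set size, in kB) from the kernel's per-process status
    echo "$(date +%F_%T) $(awk '/VmRSS/ {print $2}' /proc/${GLUSTERD_PID}/status) kB" \
        >> /var/tmp/glusterd-rss.log
    sleep 15
done
```

Comparing the first and last entries of the log after ~12 hours gives the before/after numbers asked for in step 2.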
Description of problem:

We use "gluster volume status volume_name --detail" every 15 s in our product to check the glusterfsd process status, and this causes glusterd's memory usage to increase continuously. After 20 days, memory usage is at 6%; at the beginning it was just 0.9%. From statedumps, glusterd grows by about 3 MB a day.

```
USER  PID   %CPU %MEM VSZ    RSS    TTY STAT START TIME   COMMAND
root  2078  0.4  6.2  662084 128564 ?   Ssl  Sep11 124:01 /usr/sbin/glusterd --vol
```

Could you check my patch? I have already tested it: the glusterd memory leak stops increasing, and memory usage stays around the same value for 12 hours.

```
root  2132  0.3  1.2  662080 25152 ?  Ssl  16:05 0:24 /usr/sbin/glusterd --volfile=/opt/nokia/libexec/StorageUtils/etc/glusterd/glusterd.vol -p /run/glusterd.pid
root  2132  0.3  1.2  662080 25152 ?  Ssl  Oct01 3:00 /usr/sbin/glusterd --volfile=/opt/nokia/libexec/StorageUtils/etc/glusterd/glusterd.vol -p /run/glusterd.pid
root  2132  0.3  1.2  662080 25140 ?  Ssl  Oct01 3:05 /usr/sbin/glusterd --volfile=/opt/nokia/libexec/StorageUtils/etc/glusterd/glusterd.vol -p /run/glusterd.pid
```

Root cause: in the "gluster v status" code path, the CLI response is allocated by glusterd and should be freed by the CLI, but the CLI does not free it, so glusterd's memory keeps increasing. The fix is in gf_cli_status_cbk:

```
--- a/cli/src/cli-rpc-ops.c
+++ b/cli/src/cli-rpc-ops.c
@@ -8436,6 +8436,7 @@
         ret = rsp.op_ret;

 out:
+        FREE(rsp.dict.dict_val);
         if (dict)
                 dict_unref (dict);
         GF_FREE (status.brick);
```
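The "3 MB a day" figure above comes from comparing glusterd statedumps taken over time. A minimal sketch of that measurement, assuming the standard glusterfs behavior of writing a statedump to /var/run/gluster when the process receives SIGUSR1 (the dump directory and file-name pattern can vary by build):

```
# Take a statedump before and after the test window, then compare the
# two newest dump files; growth in the memory-accounting sections
# indicates a leak.
kill -USR1 "$(pidof glusterd)"
# ... run the status loop for the test window (e.g. 12-24 hours) ...
kill -USR1 "$(pidof glusterd)"
ls -t /var/run/gluster/glusterdump.* | head -n 2
```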