Bug 1686255 - glusterd leaking memory when issued gluster vol status all tasks continuously
Summary: glusterd leaking memory when issued gluster vol status all tasks continuously
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.4
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.0
Assignee: Sanju
QA Contact: Kshithij Iyer
URL:
Whiteboard:
Depends On: 1691164 1694610 1694612
Blocks: 1696807
 
Reported: 2019-03-07 05:48 UTC by Bala Konda Reddy M
Modified: 2019-10-30 12:20 UTC
CC: 11 users

Fixed In Version: glusterfs-6.0-2
Doc Type: Bug Fix
Doc Text:
A small memory leak that occurred when viewing the status of all volumes has been fixed.
Clone Of:
Clones: 1691164
Environment:
Last Closed: 2019-10-30 12:20:22 UTC
Embargoed:


Attachments (Terms of Use)
Top output of glusterd for all six nodes of the cluster (304.13 KB, image/png)
2019-03-07 05:48 UTC, Bala Konda Reddy M


Links
Red Hat Product Errata RHEA-2019:3249 (last updated 2019-10-30 12:20:46 UTC)

Description Bala Konda Reddy M 2019-03-07 05:48:42 UTC
Created attachment 1541678 [details]
Top output of glusterd for all six nodes of the cluster

Description of problem:
glusterd leaks memory when "gluster vol status all tasks" is issued continuously for 12 hours. Memory usage grew from 250 MB to 1.1 GB, an increase of roughly 850 MB.


Version-Release number of selected component (if applicable):
glusterfs-3.12.2-45.el7rhgs.x86_64

How reproducible:
1/1

Steps to Reproduce:
1. On a six-node cluster, enable brick multiplexing.
2. Create 150 disperse volumes and 250 replica volumes and start them.
3. Take a memory footprint of glusterd from all the nodes (see the monitoring sketch below).
4. Run "while true; do gluster volume status all tasks; sleep 2; done" so that the command is issued every 2 seconds.
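
A minimal sketch of the footprint capture in step 3, assuming passwordless ssh to each node; the node names, interval, and log file are illustrative and not from this report:

#!/bin/bash
# Sample glusterd resident memory (RSS, in KB) on every node once a minute.
# NODES, INTERVAL, and the log path are assumptions for illustration.
NODES="node1 node2 node3 node4 node5 node6"
INTERVAL=60

while true; do
    for node in $NODES; do
        # RSS of the glusterd process as reported by ps, in kilobytes
        rss=$(ssh "$node" "ps -o rss= -C glusterd")
        echo "$(date -u +%FT%TZ) $node glusterd_rss_kb=$rss" >> glusterd-rss.log
    done
    sleep "$INTERVAL"
done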

Actual results:
glusterd memory usage on node N1 increased from 260 MB to 1.1 GB.

Expected results:
glusterd should not leak memory.

Additional info:
Attaching a screenshot of the top output before and after the command was executed.

The setup is kept in the same state for further debugging.

Comment 9 Atin Mukherjee 2019-03-12 09:11:54 UTC
Sanju,

Looks like there's a leak on the remote glusterd, i.e. in the op-sm framework, based on the periodic statedumps I captured while testing this.

The impacted data types are:

gf_common_mt_gf_timer_t
gf_common_mt_asprintf
gf_common_mt_strdup
gf_common_mt_char
gf_common_mt_txn_opinfo_obj_t

Please check whether we fail to clean up txn_opinfo somewhere in this transaction; fixing that might implicitly fix the other leaks too.
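
A hedged sketch of how the periodic statedumps mentioned above can be captured and inspected, assuming the default dump location of /var/run/gluster:

# SIGUSR1 asks a glusterfs process to write a statedump
# (by default under /var/run/gluster).
kill -USR1 "$(pidof glusterd)"

# Each memory type has a memusage section in the dump; if num_allocs
# keeps growing across periodic dumps, that type is leaking.
grep -A 5 "gf_common_mt_txn_opinfo_obj_t" /var/run/gluster/glusterdump.*

If num_allocs for gf_common_mt_txn_opinfo_obj_t rises monotonically while the status loop runs, the txn_opinfo cleanup path is the likely culprit, as suspected above.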

Comment 29 errata-xmlrpc 2019-10-30 12:20:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249

