Description

Bala Konda Reddy M
2019-03-07 05:48:42 UTC
Created attachment 1541678 [details]
Top output of glusterd for all six nodes of the cluster
Description of problem:
glusterd leaks memory when "gluster vol status tasks" is issued continuously for 12 hours. The memory footprint grew from 250 MB to 1.1 GB, an increase of about 750 MB.
Version-Release number of selected component (if applicable):
glusterfs-3.12.2-45.el7rhgs.x86_64
How reproducible:
1/1
Steps to Reproduce:
1. Set up a six-node cluster with brick multiplexing enabled.
2. Created 150 disperse volumes and 250 replica volumes and started them.
3. Took a memory footprint of glusterd on all the nodes.
4. Ran "while true; do gluster volume status all tasks; sleep 2; done", i.e. the command in a loop with a 2-second interval.
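The memory-footprint capture in steps 3-4 can be sketched as a small helper script (the script name and arguments are assumptions for illustration, not part of the original report): it samples a process's resident set size at a fixed interval so the before/after growth can be compared.

```shell
#!/bin/sh
# rss_watch.sh (hypothetical helper): periodically print the RSS of a process,
# read from /proc/<pid>/status, so memory growth can be tracked over time.

rss_kb() {
    # VmRSS in /proc/<pid>/status is the resident set size in kB
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

pid=${1:-$$}        # target pid; defaults to this shell for a dry run
interval=${2:-1}    # seconds between samples
samples=${3:-2}     # number of samples to take

i=0
while [ "$i" -lt "$samples" ]; do
    printf '%s rss_kb=%s\n' "$(date +%s)" "$(rss_kb "$pid")"
    i=$((i + 1))
    [ "$i" -lt "$samples" ] && sleep "$interval"
done
```

On the cluster this would be run alongside the status loop, e.g. `./rss_watch.sh "$(pidof glusterd)" 60 720` to sample glusterd every minute for 12 hours.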
Actual results:
Observed glusterd memory on node N1 grow from 260 MB to 1.1 GB.
Expected results:
glusterd memory shouldn't leak
Additional info:
Attaching screenshots of the top output from before and after the command was executed.
The setup is being kept in the same state for further debugging.
Sanju,
Looks like there's a leak on the remote glusterd, i.e. in the op-sm framework, based on the periodic statedumps I captured while testing this.
The impacted data types are:
gf_common_mt_gf_timer_t
gf_common_mt_asprintf
gf_common_mt_strdup
gf_common_mt_char
gf_common_mt_txn_opinfo_obj_t
Please check whether we're failing to clean up txn_opinfo somewhere in this transaction; fixing that might implicitly fix the other leaks too.
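Comparing allocation counts for the impacted data types across periodic statedumps can be sketched like this (the helper name and the sample section below are assumptions for illustration; glusterd statedumps are typically triggered with `kill -USR1 $(pidof glusterd)` and land under /var/run/gluster):

```shell
#!/bin/sh
# num_allocs (hypothetical helper): extract the num_allocs counter for one
# memusage type from a glusterd statedump file. Diffing the value between two
# dumps taken minutes apart shows whether that type is leaking.

num_allocs() {   # $1 = usage-type name, $2 = statedump file
    awk -v type="$1" '
        # a memusage section header looks like:
        # [mgmt/glusterd - usage-type gf_common_mt_txn_opinfo_obj_t memusage]
        $0 ~ "usage-type " type " memusage" { in_section = 1; next }
        in_section && /^num_allocs=/ { sub(/^num_allocs=/, ""); print; exit }
    ' "$2"
}
```

Running `num_allocs gf_common_mt_txn_opinfo_obj_t dump1` against successive dumps while the status loop is active should show a steadily growing count if txn_opinfo objects are never freed.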
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHEA-2019:3249