Bug 1601423 - memory leak in get-state when geo-replication session is configured
Summary: memory leak in get-state when geo-replication session is configured
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Sanju
QA Contact:
URL:
Whiteboard:
Depends On: 1599362
Blocks:
 
Reported: 2018-07-16 11:47 UTC by Sanju
Modified: 2018-10-23 15:14 UTC
CC List: 10 users

Fixed In Version: glusterfs-5.0
Clone Of: 1599362
Environment:
Last Closed: 2018-10-23 15:14:36 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Worker Ant 2018-07-16 11:50:07 UTC
REVIEW: https://review.gluster.org/20521 (glusterd: memory leak in get-state) posted (#1) for review on master by Sanju Rakonde

Comment 2 Worker Ant 2018-07-18 14:08:58 UTC
COMMIT: https://review.gluster.org/20521 committed in master by "Atin Mukherjee" <amukherj> with the commit message: glusterd: memory leak in get-state

Problem: The gluster get-state command leaks memory when a
geo-replication session is configured.

Cause: In glusterd_print_gsync_status(), we fetch references to the
keys of gsync_dict and store them in status_vals[i]. But each
status_vals[i] has already been allocated a block of size
gf_gsync_status_t, so overwriting the pointer with the dict
reference leaks that block.

Solution: There is no need for an array of pointers (status_vals);
a single pointer holding the reference to a key of gsync_dict is
sufficient.
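
A minimal, self-contained sketch of the two patterns (stand-in types
and a stub in place of dict_get_bin(); an illustration of the leak
described above, not the actual glusterd code):

#include <stdlib.h>

typedef struct { char master[256]; char slave[256]; } gsync_status;

/* Stand-in for dict_get_bin(): hands back a pointer to storage
 * owned by the dictionary, not a fresh allocation. */
static void get_bin_stub(gsync_status *dict_owned, void **out)
{
        *out = dict_owned;
}

int main(void)
{
        gsync_status dict_entry = {0};  /* owned by the "dict" */
        gsync_status *status_vals[4];
        gsync_status *sts_val = NULL;
        int i;

        for (i = 0; i < 4; i++) {
                /* Leaky pattern: allocate a block ... */
                status_vals[i] = calloc(1, sizeof(gsync_status));
                /* ... then overwrite the only pointer to it with the
                 * dict reference. The calloc'd block is unreachable:
                 * one leak per iteration. */
                get_bin_stub(&dict_entry, (void **)&status_vals[i]);
        }

        /* Fixed pattern: a single plain pointer receives the
         * reference; nothing is allocated, so nothing can leak. */
        get_bin_stub(&dict_entry, (void **)&sts_val);

        return 0;
}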

Followed the below steps for testing:
1. Configured a geo-rep session.
2. Ran the gluster get-state command 1000 times.

Without this patch, glusterd's memory usage grew significantly
(around 22000 KB per 1000 runs); with this patch the growth dropped
to about 1500 KB per 1000 runs.

fixes: bz#1601423
Change-Id: I361f5525d71f821bb345419ccfdc20ca288ca292
Signed-off-by: Sanju Rakonde <srakonde>

Comment 3 Shyamsundar 2018-10-23 15:14:36 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/

