Bug 1659676 - Memory leak: dict_t leak in rda_opendir
Summary: Memory leak: dict_t leak in rda_opendir
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact:
URL:
Whiteboard:
Depends On: 1659432
Blocks: 1659439
 
Reported: 2018-12-15 03:24 UTC by Nithya Balachandran
Modified: 2019-10-22 02:49 UTC
CC: 4 users

Fixed In Version: glusterfs-5.3
Clone Of: 1659432
Environment:
Last Closed: 2019-01-22 14:08:49 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
Gluster.org Gerrit 21870 (Merged): performance/rda: Fixed dict_t memory leak (last updated 2018-12-26 16:39:10 UTC)

Description Nithya Balachandran 2018-12-15 03:24:47 UTC
+++ This bug was initially created as a clone of Bug #1659432 +++

Description of problem:


rda_opendir creates a dict_t, xdata_from_req, and never releases it, leaking one dict_t on every opendir call.
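
For context, this follows the usual xlator reference-counting convention: dict_new() hands the caller one reference, and anything downstream that needs the dict past the wind takes its own reference, so the creator must drop its reference after winding. The sketch below illustrates the leak and the fix pattern. It is not the verbatim patch from the Gerrit links in this report; apart from the names rda_opendir and xdata_from_req (which come from this bug), the body is assumed.

<code>

/* Illustrative sketch only, not the actual patch. Assumes standard
 * GlusterFS conventions: dict_new() returns a dict holding one
 * reference owned by the caller, and callees that keep the dict past
 * the wind take their own reference. */
int32_t
rda_opendir(call_frame_t *frame, xlator_t *this, loc_t *loc, fd_t *fd,
            dict_t *xdata)
{
        dict_t *xdata_from_req = dict_new(); /* caller owns one ref */

        /* ... populate xdata_from_req with readdir-ahead keys ... */

        STACK_WIND(frame, rda_opendir_cbk, FIRST_CHILD(this),
                   FIRST_CHILD(this)->fops->opendir, loc, fd,
                   xdata_from_req);

        /* The fix pattern: drop the local reference once the fop is
         * wound. Without this unref, every opendir leaks one dict_t. */
        dict_unref(xdata_from_req);
        return 0;
}

</code>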

Version-Release number of selected component (if applicable):


How reproducible:

Consistently.


Steps to Reproduce:
1. Create a 1 brick volume and fuse mount it.
2. Create a directory mydir in the volume root.
3. Compile the following test code and run it on the volume in a loop, while watching the RES column in top for the fuse mount process.


<code>

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <dirent.h>

/* Opens and immediately closes the given directory. Each run performs
 * one opendir/closedir pair on the mount, which is enough to trigger
 * the leak when executed in a loop. */
int main(int argc, char *argv[])
{
        DIR *dir = NULL;

        if (argc != 2) {
                printf("Usage: %s <dirpath>\n", argv[0]);
                exit(1);
        }

        dir = opendir(argv[1]);
        if (!dir) {
                perror("opendir");
                exit(1);
        }

        closedir(dir);
        return 0;
}

</code>
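
Build the reproducer with, for example (the source file name here is arbitrary; gcc's default output name a.out is what the loop below invokes):

<code>
gcc opendir-test.c
</code>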


cd /mnt/gluster-mnt
Run:
while true; do ./a.out ./mydir; done
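
To observe the leak, watch the RES column of the fuse client process while the loop runs. One way to do this, assuming the fuse client is the oldest process named glusterfs on the machine (pgrep -o picks the oldest exact match):

<code>
top -p $(pgrep -o -x glusterfs)
</code>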



Actual results:
Memory use rises constantly.

Expected results:
Memory usage should stay constant after the initial run.

Additional info:

--- Additional comment from Nithya Balachandran on 2018-12-14 11:20:16 UTC ---

https://review.gluster.org/#/c/glusterfs/+/21859/

--- Additional comment from Worker Ant on 2018-12-14 11:40:17 UTC ---

REVIEW: https://review.gluster.org/21859 (performance/rda:  Fixed dict_t memory leak) posted (#1) for review on master by N Balachandran

--- Additional comment from Worker Ant on 2018-12-14 15:22:54 UTC ---

REVIEW: https://review.gluster.org/21859 (performance/rda:  Fixed dict_t memory leak) posted (#2) for review on master by Xavi Hernandez

--- Additional comment from Yan on 2018-12-14 22:48:32 UTC ---

The fuse RSS memory increase looks similar to the issue below. Could you confirm?

https://bugzilla.redhat.com/show_bug.cgi?id=1623107

Comment 1 Nithya Balachandran 2018-12-15 03:51:34 UTC
https://review.gluster.org/#/c/glusterfs/+/21870/

Comment 2 Worker Ant 2018-12-15 03:51:57 UTC
REVIEW: https://review.gluster.org/21870 (performance/rda:  Fixed dict_t memory leak) posted (#2) for review on release-5 by N Balachandran

Comment 3 Worker Ant 2018-12-26 16:39:08 UTC
REVIEW: https://review.gluster.org/21870 (performance/rda:  Fixed dict_t memory leak) posted (#5) for review on release-5 by Shyamsundar Ranganathan

Comment 4 Rajendra 2018-12-27 16:38:53 UTC
When will this fix be available to users? Which bug-fix release will include it (R5.1-2, target date Jan 10, 2019)?

Comment 5 Shyamsundar 2019-01-22 14:08:49 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-5.3, please open a new bug report.

glusterfs-5.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-January/000118.html
[2] https://www.gluster.org/pipermail/gluster-users/

