Bug 1659432 - Memory leak: dict_t leak in rda_opendir
Summary: Memory leak: dict_t leak in rda_opendir
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1659439 1659676
 
Reported: 2018-12-14 11:08 UTC by Nithya Balachandran
Modified: 2020-07-16 13:52 UTC
CC: 3 users

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1659439 1659676
Environment:
Last Closed: 2019-03-25 16:32:41 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:
y.zhao: needinfo-


Links:
Gluster.org Gerrit 21859 (Merged): performance/rda: Fixed dict_t memory leak. Last updated 2018-12-14 15:22:54 UTC

Internal Links: 1738878

Description Nithya Balachandran 2018-12-14 11:08:01 UTC
Description of problem:


rda_opendir creates a new dict_t (xdata_from_req) and never drops its reference, so the dict leaks on every opendir call; see the sketch below.
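
For context, this is the usual create-without-unref pattern in an xlator fop: dict_new() returns a dict with one reference owned by the caller, and that reference was never dropped after winding the call. The sketch below is an illustration only, modelled on the readdir-ahead fop structure rather than copied from the upstream source (the actual change is Gerrit 21859, linked above); it shows where the missing dict_unref() belongs.

<code>

/* Simplified sketch; the real implementation lives in
 * xlators/performance/readdir-ahead/src/readdir-ahead.c. */

int32_t
rda_opendir (call_frame_t *frame, xlator_t *this, loc_t *loc, fd_t *fd,
             dict_t *xdata)
{
        /* dict_new() returns the dict with one reference held by the
         * caller (this function). Error handling omitted for brevity. */
        dict_t *xdata_from_req = dict_new ();

        /* ... keys requesting readdirp data are set on
         * xdata_from_req here ... */

        STACK_WIND (frame, rda_opendir_cbk, FIRST_CHILD (this),
                    FIRST_CHILD (this)->fops->opendir, loc, fd,
                    xdata_from_req);

        /* Any xlator below that needs to keep the dict takes its own
         * reference, so the reference obtained from dict_new() must be
         * dropped once the call has been wound. Omitting this unref is
         * the leak reported in this bug. */
        dict_unref (xdata_from_req);

        return 0;
}

</code>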

Version-Release number of selected component (if applicable):


How reproducible:

Consistently.


Steps to Reproduce:
1. Create a 1-brick volume and FUSE mount it.
2. Create a directory named mydir in the volume root.
3. Compile the test program below and run it against mydir in a loop (an example compile-and-run loop is given after the program), while checking the RES column in top output for the FUSE mount process.


<code>

/* Opens and immediately closes the directory given on the command
 * line; run in a loop against a directory on the FUSE mount to
 * reproduce the leak. */

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <dirent.h>

int main (int argc, char *argv[])
{
        DIR *fd = NULL;
        char *name = NULL;

        if (argc != 2) {
                printf ("Usage: %s <dirpath>\n", argv[0]);
                exit (1);
        }
        name = argv[1];

        fd = opendir (name);
        if (!fd) {
                exit (1);
        }

        closedir (fd);
        return 0;
}

</code>
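
For step 3, one way to drive the test (the binary name and mount path here are examples, not from the original report): compile with `gcc -o opendir_test opendir_test.c`, then run `while true; do ./opendir_test /path/to/fuse-mount/mydir; done` while watching the RES column for the glusterfs client process in top.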





Actual results:
The RES value of the glusterfs FUSE client process rises continuously while the test runs in a loop.

Expected results:
Memory usage should stay roughly constant after the initial run.

Additional info:

Comment 1 Nithya Balachandran 2018-12-14 11:20:16 UTC
https://review.gluster.org/#/c/glusterfs/+/21859/

Comment 2 Worker Ant 2018-12-14 11:40:17 UTC
REVIEW: https://review.gluster.org/21859 (performance/rda:  Fixed dict_t memory leak) posted (#1) for review on master by N Balachandran

Comment 3 Worker Ant 2018-12-14 11:40:57 UTC
REVIEW: https://review.gluster.org/21859 (performance/rda:  Fixed dict_t memory leak) posted (#1) for review on master by N Balachandran

Comment 4 Worker Ant 2018-12-14 15:22:54 UTC
REVIEW: https://review.gluster.org/21859 (performance/rda:  Fixed dict_t memory leak) posted (#2) for review on master by Xavi Hernandez

Comment 5 Yan 2018-12-14 22:48:32 UTC
The FUSE RSS memory increase issue looks similar to the one below. Could you confirm?

https://bugzilla.redhat.com/show_bug.cgi?id=1623107

Comment 6 Nithya Balachandran 2018-12-17 02:49:21 UTC
(In reply to Yan from comment #5)
> The fuse RSS memory increase issue is similar to below one. Could you
> confirm? 
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1623107

I have updated BZ 1623107.

Comment 7 Yan 2018-12-27 17:38:56 UTC
We've seen reduced memory leakage with the readdir-ahead flag disabled, but memory leakage still exists.

We assume this fix will be included in the next release (5.3, due 1/10/2019); please confirm.

Comment 8 Nithya Balachandran 2018-12-28 02:50:30 UTC
(In reply to Yan from comment #7)
> We've seen reduced memory leakage with the readdir-ahead flag disabled, but
> memory leakage still exists.

That is likely to have a different cause: Gluster FUSE clients do not currently invalidate inodes, so the number of inodes in the cache keeps growing. A patch for this has been merged upstream (https://review.gluster.org/#/c/glusterfs/+/19778/) but it is not part of release-5 yet.

Did you start seeing this in recent releases or was this always there?

Can you send additional statedumps with readdir-ahead enabled to check if this is the case? A test script to reproduce the leak would also help. 


> 
> We assume this fix will be included in the next release (5.3, due 1/10/2019);
> please confirm.

This particular fix should be part of the next 5.x release (See https://review.gluster.org/#/c/glusterfs/+/21870/).

Comment 9 Shyamsundar 2019-03-25 16:32:41 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

