Bug 1659439 - Memory leak: dict_t leak in rda_opendir
Summary: Memory leak: dict_t leak in rda_opendir
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: readdir-ahead
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.4.z Batch Update 3
Assignee: Nithya Balachandran
QA Contact: Sayalee
URL:
Whiteboard:
Depends On: 1659432 1659676
Blocks:
 
Reported: 2018-12-14 11:32 UTC by Nithya Balachandran
Modified: 2019-02-06 07:09 UTC (History)
7 users

Fixed In Version: glusterfs-3.12.2-33
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1659432
Environment:
Last Closed: 2019-02-04 07:41:44 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2019:0263 (Last Updated: 2019-02-04 07:41:53 UTC)

Description Nithya Balachandran 2018-12-14 11:32:58 UTC
+++ This bug was initially created as a clone of Bug #1659432 +++

Description of problem:


rda_opendir creates a dict_t (xdata_from_req) on every opendir call and never releases it, leaking one dict_t per call.

Version-Release number of selected component (if applicable):


How reproducible:

Consistently.


Steps to Reproduce:
1. Create a 1 brick volume and fuse mount it. Enable readdir-ahead.
2. Create a directory mydir in the volume root.
3. Compile the following test code and run it on the volume in a loop, while checking RES in top output for the fuse mount process.


<code>

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <dirent.h>


int main (int argc, char *argv[])
{
        DIR *fd = NULL;
        char *name = NULL;

        if (argc != 2) {
                printf ("Usage: %s <dirpath>\n", argv[0]);
                exit (1);
        }
        name = argv[1];
        fd = opendir (name);

        if (!fd) {
                exit (1);
        }

        closedir (fd);
        return 0;
}

</code>





Actual results:
Memory use rises constantly.

Expected results:
Memory usage should stay constant after the initial run.

Additional info:

--- Additional comment from Nithya Balachandran on 2018-12-14 11:20:16 UTC ---

https://review.gluster.org/#/c/glusterfs/+/21859/

Comment 11 Sayalee 2019-01-08 09:26:00 UTC
Moving this bug to verified as:
* Tested the issue on the 3.12.2-36 build: there was no rise in memory usage after the initial run.
* Tested the issue on the BU2 build (3.12.2-32): RES in the top command output increased from 3.6g to 4.2g, i.e. memory usage rose continuously.
--> Together, these observations confirm the issue is fixed in the 3.12.2-36 build.
--> Other tests around this bug will be covered in regression; if any issue is hit, a bug will be reported accordingly.

Comment 13 errata-xmlrpc 2019-02-04 07:41:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0263

