Bug 1659439

Summary: Memory leak: dict_t leak in rda_opendir
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Nithya Balachandran <nbalacha>
Component: readdir-ahead
Assignee: Nithya Balachandran <nbalacha>
Status: CLOSED ERRATA
QA Contact: Sayalee <saraut>
Severity: high
Priority: high
Version: rhgs-3.4
CC: bugs, nbalacha, rgowdapp, rhs-bugs, sanandpa, sankarshan, saraut
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.4.z Batch Update 3
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.12.2-33
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1659432
Environment:
Last Closed: 2019-02-04 07:41:44 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1659432, 1659676
Bug Blocks:

Description Nithya Balachandran 2018-12-14 11:32:58 UTC
+++ This bug was initially created as a clone of Bug #1659432 +++

Description of problem:


rda_opendir creates a dict_t (xdata_from_req) that is never freed, leaking one dict_t on every opendir call.
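The general pattern behind this class of leak can be sketched with a minimal, self-contained stand-in for glusterfs' refcounted dict_t (the names dict_new/dict_unref mirror the real libglusterfs API, but this structure and the opendir_* helpers are illustrative, not the actual xlator code):

<code>

#include <stdio.h>
#include <stdlib.h>

/* Minimal stand-in for glusterfs' refcounted dict_t. */
typedef struct {
        int refcount;
} dict_t;

static int live_dicts = 0;      /* allocations not yet freed */

static dict_t *dict_new (void)
{
        dict_t *d = calloc (1, sizeof (*d));
        d->refcount = 1;
        live_dicts++;
        return d;
}

static void dict_unref (dict_t *d)
{
        if (d && --d->refcount == 0) {
                free (d);
                live_dicts--;
        }
}

/* Leaky pattern: a dict created for the request is never unrefed. */
static void opendir_leaky (void)
{
        dict_t *xdata_from_req = dict_new ();
        (void) xdata_from_req;  /* passed down the call, then forgotten */
}

/* Fixed pattern: the creator drops its reference when done. */
static void opendir_fixed (void)
{
        dict_t *xdata_from_req = dict_new ();
        dict_unref (xdata_from_req);
}

int main (void)
{
        int i;

        for (i = 0; i < 100; i++)
                opendir_fixed ();
        printf ("live after fixed loop: %d\n", live_dicts);   /* 0 */

        for (i = 0; i < 100; i++)
                opendir_leaky ();
        printf ("live after leaky loop: %d\n", live_dicts);   /* 100 */
        return 0;
}

</code>

The leaky variant grows live_dicts on every call, which is what shows up as steadily rising RES in the fuse client.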

Version-Release number of selected component (if applicable):


How reproducible:

Consistently.


Steps to Reproduce:
1. Create a 1 brick volume and fuse mount it. Enable readdir-ahead.
2. Create a directory mydir in the volume root.
3. Compile the following test code and run it on the volume in a loop, while monitoring the RES column in the top output for the fuse mount process.


<code>

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <dirent.h>


int main (int argc, char *argv[])
{
        DIR *fd = NULL;
        char *name = NULL;

        if (argc != 2) {
                printf ("Usage: %s <dirpath>\n", argv[0]);
                exit (1);
        }
        name = argv[1];

        /* A bare opendir/closedir pair is enough to hit rda_opendir. */
        fd = opendir (name);
        if (!fd) {
                exit (1);
        }

        closedir (fd);
        return 0;
}

</code>
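Step 3 can be driven with a loop like the following (the mount point, source file name, and iteration count are illustrative; adjust them for your setup):

<code>

#!/bin/sh
# Illustrative driver for the reproducer above.
MOUNT=/mnt/glusterfs
FUSE_PID=$(pgrep -f "glusterfs.*$MOUNT" | head -n1)

gcc -o opendir_test opendir_test.c

i=0
while [ "$i" -lt 10000 ]; do
        ./opendir_test "$MOUNT/mydir"
        i=$((i + 1))
done

# RSS (the RES column in top) of the fuse client, in kB;
# compare this value before and after the loop.
ps -o rss= -p "$FUSE_PID"

</code>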





Actual results:
Memory usage of the fuse mount process rises with every run of the test program.

Expected results:
Memory usage should stay constant after the initial run.

Additional info:

--- Additional comment from Nithya Balachandran on 2018-12-14 11:20:16 UTC ---

https://review.gluster.org/#/c/glusterfs/+/21859/

Comment 11 Sayalee 2019-01-08 09:26:00 UTC
Moving this bug to VERIFIED:
* Tested the issue on the 3.12.2-36 build; there was no rise in memory usage after the initial run.
* Tested the issue on the BU2 build (3.12.2-32); RES in the top command output increased from 3.6g to 4.2g, i.e. memory usage rose continuously.
--> Together these observations confirm that the issue has been fixed in the 3.12.2-36 build.
--> Other tests around this bug will be covered in regression; if any issue is hit, a bug will be reported accordingly.

Comment 13 errata-xmlrpc 2019-02-04 07:41:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0263