Description of problem:
rda_opendir creates and leaks a dict_t (xdata_from_req).

Version-Release number of selected component (if applicable):

How reproducible:
Consistently.

Steps to Reproduce:
1. Create a 1-brick volume and FUSE-mount it.
2. Create a directory mydir in the volume root.
3. Compile the following test code and run it on the volume in a loop, while watching the RES column in top output for the FUSE mount process.

<code>
/* Minimal reproducer: open and close a directory once, then exit.
 * Run in a loop against a directory on the mount while watching the
 * client process's resident memory (RES in top). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <dirent.h>

int
main (int argc, char *argv[])
{
        DIR *fd = NULL;
        char *name = NULL;

        if (argc != 2) {
                printf ("Usage: %s <dirpath>\n", argv[0]);
                exit (1);
        }

        name = argv[1];

        fd = opendir (name);
        if (!fd) {
                exit (1);
        }

        closedir (fd);

        return 0;
}
</code>

Actual results:
Memory use rises constantly.

Expected results:
Memory usage should stay constant after the initial run.

Additional info:
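For context on the leak itself: in libglusterfs, dict_new() returns a dict_t carrying one reference owned by the caller, which must eventually be dropped with dict_unref(); rda_opendir allocated xdata_from_req this way and never released it. The following self-contained sketch reproduces the pattern with a toy refcounted dict standing in for the real dict_t (the dict_new/dict_ref/dict_unref names mirror the libglusterfs API, but the implementation below is a simplification, not Gluster code):

<code>
/* Toy model of the rda_opendir leak: a refcounted "dict" that is
 * created but whose caller-owned reference is never dropped. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
        int refcount;
} dict_t;

static dict_t *
dict_new (void)
{
        dict_t *d = calloc (1, sizeof (*d));
        if (d)
                d->refcount = 1;   /* caller owns this reference */
        return d;
}

static void
dict_ref (dict_t *d)
{
        d->refcount++;
}

static void
dict_unref (dict_t *d)
{
        if (--d->refcount == 0)
                free (d);
}

/* Stands in for winding the opendir fop down the graph: the callee
 * takes and releases its own reference, as the real stack does. */
static void
wind_opendir (dict_t *xdata_from_req)
{
        dict_ref (xdata_from_req);
        /* ... xdata would be consumed here ... */
        dict_unref (xdata_from_req);
}

int
main (void)
{
        dict_t *xdata_from_req = dict_new ();

        wind_opendir (xdata_from_req);

        /* BUG (the rda_opendir pattern): the caller never drops its own
         * reference, so the refcount never reaches zero and the dict is
         * leaked on every opendir. The fix is to release the caller's
         * reference once the fop no longer needs the dict:
         *
         *     dict_unref (xdata_from_req);
         */
        printf ("refcount at exit: %d (anything > 0 is a leak)\n",
                xdata_from_req->refcount);

        return 0;
}
</code>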
https://review.gluster.org/#/c/glusterfs/+/21859/
REVIEW: https://review.gluster.org/21859 (performance/rda: Fixed dict_t memory leak) posted (#1) for review on master by N Balachandran
REVIEW: https://review.gluster.org/21859 (performance/rda: Fixed dict_t memory leak) posted (#2) for review on master by Xavi Hernandez
The FUSE RSS memory increase looks similar to the issue below. Could you confirm?

https://bugzilla.redhat.com/show_bug.cgi?id=1623107
(In reply to Yan from comment #5)
> The FUSE RSS memory increase looks similar to the issue below. Could you
> confirm?
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1623107

I have updated BZ 1623107.
We've seen reduced memory leakage with the readdir-ahead option disabled, but the leak still exists. We assume this fix will be included in the next release (5.3, due 1/10/2019); please confirm.
(In reply to Yan from comment #7)
> We've seen reduced memory leakage with the readdir-ahead option disabled,
> but the leak still exists.

That is likely a different cause: Gluster FUSE clients do not invalidate inodes at present, so the number of inodes in the cache keeps growing. A patch has been merged upstream for this (https://review.gluster.org/#/c/glusterfs/+/19778/) but it is not part of release-5 yet.

Did you start seeing this in recent releases, or was it always there? Can you send additional statedumps with readdir-ahead enabled so we can check whether this is the case? A test script that reproduces the leak would also help.

> We assume this fix will be included in the next release (5.3, due
> 1/10/2019); please confirm.

This particular fix should be part of the next 5.x release (see https://review.gluster.org/#/c/glusterfs/+/21870/).
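For anyone gathering the requested statedumps: a glusterfs client process writes a statedump when it receives SIGUSR1 (by default under /var/run/gluster, though the path can vary by distribution). The usual one-liner is kill -USR1 <pid of the glusterfs mount process>; the same thing as a minimal C sketch:

<code>
/* Minimal sketch: trigger a glusterfs client statedump by sending
 * SIGUSR1 to the mount process. The dump lands under /var/run/gluster
 * by default (path may vary by build/distribution).
 * Usage: ./statedump <pid-of-glusterfs-client> */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

int
main (int argc, char *argv[])
{
        pid_t pid;

        if (argc != 2) {
                fprintf (stderr, "Usage: %s <glusterfs-pid>\n", argv[0]);
                exit (1);
        }

        pid = (pid_t) strtol (argv[1], NULL, 10);
        if (pid <= 0) {
                fprintf (stderr, "invalid pid: %s\n", argv[1]);
                exit (1);
        }

        if (kill (pid, SIGUSR1) != 0) {
                perror ("kill");
                exit (1);
        }

        return 0;
}
</code>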
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/