Description of problem:
While running perf-test.sh [1], I observed that client memory usage was quite high. A statedump revealed inode leaks: deleted inodes with nlookup=0 were present in the active list of the inode table.

xlator.mount.fuse.itable.active_size=600003
xlator.mount.fuse.itable.lru_size=1

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
1. touch files from a fuse mount point
2. list the files using ls
3. delete the files from the mount point
4. check memory usage and inode table usage
5. if there are no leaks, the inode table active size should return to its pre-test value

Actual results:
Memory and inode leaks are observed.

Expected results:
No memory or inode leaks should be found.
REVIEW: http://review.gluster.org/13689 (mount/fuse: cleanup an additional inode_ref()) posted (#3) for review on master by Vijay Bellur (vbellur)
COMMIT: http://review.gluster.org/13689 committed in master by Niels de Vos (ndevos)
------
commit 8fda324df01b6de9c58a1395263ce9755465b26d
Author: Vijay Bellur <vbellur>
Date: Sun Mar 13 10:44:12 2016 -0400

    mount/fuse: cleanup an additional inode_ref()

    commit ca515db0127 introduced a check in fuse_resolve_inode_simple().
    This results in an additional ref being held on inodes which were
    obtained through readdirp. As a result, the inode table keeps growing
    and entries remain in the active list even after deletion of such
    inodes.

    Change-Id: I780ec5513990d6ef00ea051ec57ff20e4428081e
    BUG: 1317948
    Signed-off-by: Vijay Bellur <vbellur>
    Reviewed-on: http://review.gluster.org/13689
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Smoke: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user