+++ This bug was initially created as a clone of Bug #1779055 +++

Description of problem:
When testing with the ior tool, a memory leak was found in the glusterfs client process. No other operations are running; only I/O is done through the gluster client process. The glusterfs client process consumes more and more memory.

Version-Release number of selected component (if applicable):
# glusterfs -V
glusterfs 7.0
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public License, version 3 or any later version (LGPLv3 or later), or the GNU General Public License, version 2 (GPLv2), in all cases as published by the Free Software Foundation.

How reproducible:

Steps to Reproduce:
1. Start I/O with the ior tool: python /opt/tool/ior -mfx -s 1000 -n 100 -t 10 /mnt/testvol/testdir
2. Take a statedump of the glusterfs client process.
3. The statedumps show that the glusterfs client process memory keeps increasing, even after the test is stopped and all created files are deleted.

Actual results:
Memory usage keeps growing and never comes back down, even after all created files are removed.

Expected results:
Memory usage returns to normal.

Additional info:
In the statedump, xlator.mount.fuse.itable.active_size appears to keep increasing; the enclosed statedump shows this.

--- Additional comment from zhou lin on 2019-12-03 08:57:23 CET ---

# gluster v info config

Volume Name: config
Type: Replicate
Volume ID: e4690308-7345-4e32-8d31-b13e10e87112
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 169.254.0.30:/mnt/bricks/config/brick
Brick2: 169.254.0.28:/mnt/bricks/config/brick
Options Reconfigured:
performance.client-io-threads: off
server.allow-insecure: on
network.frame-timeout: 180
network.ping-timeout: 42
cluster.consistent-metadata: off
cluster.favorite-child-policy: mtime
cluster.server-quorum-type: none
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
cluster.server-quorum-ratio: 51

--- Additional comment from zhou lin on 2019-12-04 03:54:53 CET ---

From the statedump it is quite obvious that xlator.mount.fuse.itable.active_size keeps growing; more and more sections like the following appear in the statedump (a small statedump-parsing sketch follows this comment):

[xlator.mount.fuse.itable.active.1]
gfid=924e4dde-79a5-471b-9a6e-7d769f0bae61
nlookup=0
fd-count=0
active-fd-count=0
ref=100
invalidate-sent=0
ia_type=2
ref_by_xl:.hsjvol-client-0=1
ref_by_xl:.hsjvol-readdir-ahead=99
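To make the active-inode growth easier to quantify between two dumps, here is a minimal parsing sketch. It is not part of the original report: the script name, the dump path shown in the usage line, and the assumption that dumps are produced by sending SIGUSR1 to the glusterfs client process (landing under /var/run/gluster by default) are illustrative; the key=value layout it parses matches the statedump excerpt above.

#!/usr/bin/env python
# summarize_statedump.py (hypothetical helper, not shipped with GlusterFS):
# count [xlator.mount.fuse.itable.active.N] sections in a client statedump
# and sum the per-xlator ref counts so a leaking xlator stands out.
import sys
from collections import Counter

def summarize(path):
    active_sections = 0
    refs_by_xl = Counter()
    in_active = False
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if line.startswith('['):
                # Section headers look like [xlator.mount.fuse.itable.active.N]
                in_active = '.itable.active.' in line
                if in_active:
                    active_sections += 1
            elif in_active and line.startswith('ref_by_xl:'):
                # e.g. ref_by_xl:.hsjvol-readdir-ahead=99
                key, _, val = line.partition('=')
                refs_by_xl[key[len('ref_by_xl:'):]] += int(val)
    print('active inode sections: %d' % active_sections)
    for xl, refs in refs_by_xl.most_common():
        print('%-40s %d refs' % (xl, refs))

if __name__ == '__main__':
    summarize(sys.argv[1])

Usage (file name is the default glusterdump pattern, an assumption here):
python summarize_statedump.py /var/run/gluster/glusterdump.<pid>.dump.<timestamp>
Running it on dumps taken before and after the ior run should show the readdir-ahead ref count growing if the leak described here is present.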
REVIEW: https://review.gluster.org/23988 (To fix readdir-ahead memory leak) posted (#1) for review on release-6 by Krutika Dhananjay
REVIEW: https://review.gluster.org/23988 (To fix readdir-ahead memory leak) merged (#2) on release-6 by hari gowtham