Description of problem:

There is a leak of inodes on the brick process.

[root@unused ~]# gluster volume info

Volume Name: ra
Type: Distribute
Volume ID: 258a8e92-678b-41db-ba8e-b273a360297d
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: booradley:/home/export-2/ra
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet

Script:
[root@unused mnt]# for i in {1..150}; do echo $i; cp -rf /etc . && rm -rf *; done

After the script completes, active inodes can still be seen in the brick itable:

[root@unused ~]# grep ra.active /var/run/gluster/home-export-2-ra.19609.dump.1465647069
conn.0.bound_xl./home/export-2/ra.active_size=149
[root@unused ~]# grep ra.active /var/run/gluster/home-export-2-ra.19609.dump.1465647069 | wc -l
150

But the fuse mount on the client doesn't hold any of these inodes; only the root inode is active:

[root@unused ~]# grep active /var/run/gluster/glusterdump.20612.dump.1465629006 | grep itable
xlator.mount.fuse.itable.active_size=1
[xlator.mount.fuse.itable.active.1]

I've not done a detailed RCA yet, but the initial impression is that one inode is leaked per iteration of the loop, and the leaked inode most likely corresponds to /mnt/etc.

Version-Release number of selected component (if applicable):
Bug seen on upstream master.

How reproducible:
Quite consistently
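For convenience, the reproducer and the statedump check can be rolled into one script. This is only a sketch based on the commands above; it assumes the volume is named "ra", the fuse mount is at /mnt, the brick and the mount are on the same host (as in the single-node setup above), and statedumps land in the default /var/run/gluster directory — adjust paths for your setup.

#!/bin/bash
# Reproduce the leak: copy and remove /etc repeatedly on the fuse mount.
cd /mnt || exit 1
for i in {1..150}; do
    echo "$i"
    cp -rf /etc . && rm -rf ./*
done

# Ask the volume's brick process to write a statedump
# (default location: /var/run/gluster).
gluster volume statedump ra

# Give the brick a moment to finish writing, then look at the active
# inode count of the brick itable in the newest brick dump file.
sleep 2
latest=$(ls -t /var/run/gluster/home-export-2-ra.*.dump.* | head -1)
grep 'ra.active_size' "$latest"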
REVIEW: http://review.gluster.org/14704 (libglusterfs/client_t: Dump the 0th client too) posted (#1) for review on master by Raghavendra G (rgowdapp)
RCA is not complete and we've not found the leak yet. Hence moving the bug back to ASSIGNED.
REVIEW: http://review.gluster.org/14704 (libglusterfs/client_t: Dump the 0th client too) posted (#2) for review on master by Raghavendra G (rgowdapp)
REVIEW: http://review.gluster.org/14739 (storage/posix: fix inode leaks) posted (#1) for review on master by Raghavendra G (rgowdapp)
COMMIT: http://review.gluster.org/14704 committed in master by Jeff Darcy (jdarcy)
------
commit 60cc8ddaf6105b89e5ce3222c5c5a014deda6a15
Author: Raghavendra G <rgowdapp>
Date:   Sun Jun 12 13:02:05 2016 +0530

    libglusterfs/client_t: Dump the 0th client too

    Change-Id: I565e81944b6670d26ed1962689dcfd147181b61e
    BUG: 1344885
    Signed-off-by: Raghavendra G <rgowdapp>
    Reviewed-on: http://review.gluster.org/14704
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jdarcy>
COMMIT: http://review.gluster.org/14739 committed in master by Jeff Darcy (jdarcy)
------
commit 8680261cbb7cacdc565feb578d6afd3fac50cec4
Author: Raghavendra G <rgowdapp>
Date:   Thu Jun 16 12:03:19 2016 +0530

    storage/posix: fix inode leaks

    Change-Id: Ibd221ba62af4db17bea5c52d37f5c0ba30b60a7d
    BUG: 1344885
    Signed-off-by: Raghavendra G <rgowdapp>
    Reviewed-on: http://review.gluster.org/14739
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: N Balachandran <nbalacha>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Reviewed-by: Krutika Dhananjay <kdhananj>
    NetBSD-regression: NetBSD Build System <jenkins.org>
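For anyone verifying this fix, the reproducer and statedump grep from the bug description can be reused as-is. A sketch, assuming the single-brick "ra" volume from the description and a build containing the commit above; if the leak is gone, the brick itable's active_size should stay at its small baseline instead of growing by roughly one per loop iteration:

cd /mnt && for i in {1..150}; do cp -rf /etc . && rm -rf ./*; done
gluster volume statedump ra
sleep 2
# Expect a small, constant active_size here (not ~150) once the leak is fixed.
grep 'ra.active_size' "$(ls -t /var/run/gluster/home-export-2-ra.*.dump.* | head -1)"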
Hi,

I did some more testing with respect to the inode leak in bricks. I have attached the statedump and fusedump. Here is what I did:

Created a 2x2 volume and mounted it with the fuse dump option enabled:

# glusterfs --volfile-server=<IP ADDR> --dump-fuse=/home/dump.fdump --volfile-id=/vol /mnt/mount
# cd /mnt/mount
# for i in {1..50}; do mkdir healthy$i; cd healthy$i; echo dsfdsafsadfsad >> healthy$i; cd ../; done
# find .
# rm -rf ./*
# gluster volume statedump vol
# for i in {1..50}; do mkdir healthy$i; cd healthy$i; done
# cd /mnt/mount
# find .
# rm -rf ./*
# gluster volume statedump vol

brick/4 shows inode leaks when directories are created recursively (nested inside each other) on the mount, but not when they are created at the same level. The fusedump was taken and parsed with https://github.com/csabahenk/parsefuse to make it human readable. The fusedumps are attached.
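For completeness, the nested-directory case (the one that leaks) can be scripted end to end so that the brick itables can be inspected right after the cleanup. This is only a sketch of the steps above; it assumes the volume is named "vol", it is mounted at /mnt/mount, all four bricks of the 2x2 volume run on this node, and statedumps go to the default /var/run/gluster directory:

#!/bin/bash
cd /mnt/mount || exit 1

# Nested case: each new directory is created inside the previous one.
for i in {1..50}; do mkdir healthy$i; cd healthy$i; done
cd /mnt/mount
find . > /dev/null
rm -rf ./*

# Take a statedump and look at the active inode count of each brick itable;
# anything still active after the rm -rf is a leak suspect.
gluster volume statedump vol
sleep 2
for f in $(ls -t /var/run/gluster/*.dump.* | head -4); do
    echo "== $f"
    grep 'active_size' "$f"
done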
Created attachment 1202236 [details]
Inode leak statedump and fusedump
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.9.0, please open a new bug report.

glusterfs-3.9.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html
[2] https://www.gluster.org/pipermail/gluster-users/