+++ This bug was initially created as a clone of Bug #1344885 +++
+++ This bug was initially created as a clone of Bug #1344843 +++

Description of problem:

There is a leak of inodes on the brick process.

[root@unused ~]# gluster volume info
Volume Name: ra
Type: Distribute
Volume ID: 258a8e92-678b-41db-ba8e-b273a360297d
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: booradley:/home/export-2/ra
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet

Script:

[root@unused mnt]# for i in {1..150}; do echo $i; cp -rf /etc . && rm -rf *; done

After completion of the script, I can see active inodes in the brick itable:

[root@unused ~]# grep ra.active /var/run/gluster/home-export-2-ra.19609.dump.1465647069
conn.0.bound_xl./home/export-2/ra.active_size=149

[root@unused ~]# grep ra.active /var/run/gluster/home-export-2-ra.19609.dump.1465647069 | wc -l
150

But the client fuse mount doesn't have any inodes:

[root@unused ~]# grep active /var/run/gluster/glusterdump.20612.dump.1465629006 | grep itable
xlator.mount.fuse.itable.active_size=1
[xlator.mount.fuse.itable.active.1]

I've not done a detailed RCA, but my initial gut feeling is that there is one inode leak for every iteration of the loop. The leaked inode mostly corresponds to /mnt/etc.

Version-Release number of selected component (if applicable):
RHGS-3.1.3 git repo. Bug seen on upstream master too.

How reproducible:
Quite consistently

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Raghavendra G on 2016-06-12 09:31:30 CEST ---

Description of problem:

There is a leak of inodes on the brick process.

[root@unused ~]# gluster volume info
Volume Name: ra
Type: Distribute
Volume ID: 258a8e92-678b-41db-ba8e-b273a360297d
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: booradley:/home/export-2/ra
Options Reconfigured:
diagnostics.brick-log-level: DEBUG
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet

Script:

[root@unused mnt]# for i in {1..150}; do echo $i; cp -rf /etc . && rm -rf *; done

After completion of the script, I can see active inodes in the brick itable:

[root@unused ~]# grep ra.active /var/run/gluster/home-export-2-ra.19609.dump.1465647069
conn.0.bound_xl./home/export-2/ra.active_size=149

[root@unused ~]# grep ra.active /var/run/gluster/home-export-2-ra.19609.dump.1465647069 | wc -l
150

But the client fuse mount doesn't have any inodes:

[root@unused ~]# grep active /var/run/gluster/glusterdump.20612.dump.1465629006 | grep itable
xlator.mount.fuse.itable.active_size=1
[xlator.mount.fuse.itable.active.1]

I've not done a detailed RCA, but my initial gut feeling is that there is one inode leak for every iteration of the loop. The leaked inode mostly corresponds to /mnt/etc.

Version-Release number of selected component (if applicable):
Bug seen on upstream master.

How reproducible:
Quite consistently

--- Additional comment from Vijay Bellur on 2016-06-12 09:42:06 CEST ---

REVIEW: http://review.gluster.org/14704 (libglusterfs/client_t: Dump the 0th client too) posted (#1) for review on master by Raghavendra G (rgowdapp)

--- Additional comment from Raghavendra G on 2016-06-14 06:11:48 CEST ---

RCA is not complete and we've not found the leak. Hence moving the bug back to ASSIGNED.
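The `grep ra.active` check above can be wrapped in a small helper when the statedump has to be inspected repeatedly. This is an illustrative sketch: only the `active_size=` key format comes from the dumps quoted in this report; the helper name and the synthetic sample file are my own assumptions (a real brick dump lives under /var/run/gluster/).

```shell
# Sum the itable active_size counters found in a glusterfs statedump.
# Key format taken from the dumps quoted in this report; the helper name
# and the synthetic sample file are illustrative assumptions.
count_active_inodes() {
    grep -o 'active_size=[0-9]*' "$1" | awk -F= '{ sum += $2 } END { print sum + 0 }'
}

# Synthetic sample standing in for a real statedump:
cat > /tmp/sample.dump <<'EOF'
conn.0.bound_xl./home/export-2/ra.active_size=149
EOF

count_active_inodes /tmp/sample.dump
```

On the real dump from the report this would print 149, matching the leaked-inode count after the 150-iteration loop (one inode stays referenced per iteration, minus the final rm).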
--- Additional comment from Vijay Bellur on 2016-06-16 08:34:30 CEST ---

REVIEW: http://review.gluster.org/14704 (libglusterfs/client_t: Dump the 0th client too) posted (#2) for review on master by Raghavendra G (rgowdapp)

--- Additional comment from Vijay Bellur on 2016-06-16 08:34:33 CEST ---

REVIEW: http://review.gluster.org/14739 (storage/posix: fix inode leaks) posted (#1) for review on master by Raghavendra G (rgowdapp)

--- Additional comment from Vijay Bellur on 2016-06-28 22:58:22 CEST ---

COMMIT: http://review.gluster.org/14704 committed in master by Jeff Darcy (jdarcy)
------
commit 60cc8ddaf6105b89e5ce3222c5c5a014deda6a15
Author: Raghavendra G <rgowdapp>
Date:   Sun Jun 12 13:02:05 2016 +0530

    libglusterfs/client_t: Dump the 0th client too

    Change-Id: I565e81944b6670d26ed1962689dcfd147181b61e
    BUG: 1344885
    Signed-off-by: Raghavendra G <rgowdapp>
    Reviewed-on: http://review.gluster.org/14704
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jdarcy>

--- Additional comment from Vijay Bellur on 2016-07-05 14:46:10 CEST ---

COMMIT: http://review.gluster.org/14739 committed in master by Jeff Darcy (jdarcy)
------
commit 8680261cbb7cacdc565feb578d6afd3fac50cec4
Author: Raghavendra G <rgowdapp>
Date:   Thu Jun 16 12:03:19 2016 +0530

    storage/posix: fix inode leaks

    Change-Id: Ibd221ba62af4db17bea5c52d37f5c0ba30b60a7d
    BUG: 1344885
    Signed-off-by: Raghavendra G <rgowdapp>
    Reviewed-on: http://review.gluster.org/14739
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: N Balachandran <nbalacha>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Reviewed-by: Krutika Dhananjay <kdhananj>
    NetBSD-regression: NetBSD Build System <jenkins.org>
All 3.8.x bugs are now reported against version 3.8 (without .x). For more information, see http://www.gluster.org/pipermail/gluster-devel/2016-September/050859.html
Hi,

I did some more tests with respect to the inode leak in bricks. I have attached the statedump and fusedump.

Here is what I did: created a 2x2 volume and mounted it with the fuse dump option enabled:

# glusterfs --volfile-server=<IP ADDR> --dump-fuse=/home/dump.fdump --volfile-id=/vol /mnt/mount
# cd /mnt/mount
# for i in {1..50}; do mkdir healthy$i; cd healthy$i; echo dsfdsafsadfsad >> healthy$i; cd ../; done
# find .
# rm -rf ./*
# gluster volume statedump vol
# for i in {1..50}; do mkdir healthy$i; cd healthy$i; done
# cd /mnt/mount
# find .
# rm -rf ./*
# gluster volume statedump vol

brick/4 shows inode leaks when directories are created recursively (each inside the previous one), but not when they are all created at the same level. The fusedump was taken and parsed with https://github.com/csabahenk/parsefuse to make it human readable. The fusedumps are attached.
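Comparing the itable active_size across the two statedumps taken above is what exposes the leak. The sketch below is illustrative: the `active_size=` key format comes from the dumps in this report, but the helper name, file names, and values are assumptions standing in for two real dumps taken around the `rm -rf` step.

```shell
# Read the first itable active_size value from a statedump file.
active_size() {
    grep -o 'active_size=[0-9]*' "$1" | head -n1 | cut -d= -f2
}

# Synthetic before/after dumps; names and values are assumptions.
printf 'conn.0.bound_xl./vol.active_size=1\n'  > /tmp/before.dump
printf 'conn.0.bound_xl./vol.active_size=51\n' > /tmp/after.dump

before=$(active_size /tmp/before.dump)
after=$(active_size /tmp/after.dump)
if [ "$after" -gt "$before" ]; then
    echo "possible inode leak: active_size $before -> $after"
fi
```

After `rm -rf ./*` the active_size on the brick should drop back to roughly the pre-test value; a count that stays elevated, as in the recursive-mkdir case here, indicates inodes the brick never released.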
Created attachment 1202237 [details] Inode leak statedump and fusedump
You also might be interested in applying http://review.gluster.org/13736 for additional debugging. Leave a comment in the gerrit review for positive or negative feedback to get the change improved or merged.
(In reply to Niels de Vos from comment #4)
> You also might be interested in applying http://review.gluster.org/13736 for
> additional debugging. Leave a comment in the gerrit review for positive or
> negative feedback to get the change improved or merged.

Thanks for pointing to this patch. I have added my comments in the patch.
Created attachment 1204687 [details] Inode leak statedump and fusedump with xlator name (from http://review.gluster.org/13736)
This bug is getting closed because the 3.8 version is marked End-Of-Life. There will be no further updates to this version. Please open a new bug against a version that still receives bugfixes if you are still facing this issue in a more current release.