+++ This bug was initially created as a clone of Bug #1435779 +++

Description of problem:
When using the handles API of libgfapi, invoking glfs_h_anonymous_write() and/or glfs_h_anonymous_read() on an opened object handle leaks inode references.

Version-Release number of selected component (if applicable):
GlusterFS 3.10.0

How reproducible:
See Steps to Reproduce below.

Steps to Reproduce:
1. glfs_new()
2. glfs_h_open()
3. glfs_h_anonymous_write()
4. glfs_h_close()
5. glfs_fini()
6. Read the log

Actual results:
The following warning appears in the logs:

[2017-03-21 14:06:34.349446] W [inode.c:1845:inode_table_destroy] (-->/usr/lib64/libgfapi.so.0(glfs_fini+0x499) [0x7f97c4bf7679] -->/usr/lib64/libglusterfs.so.0(inode_table_destroy_all+0x51) [0x7f97c450e221] -->/usr/lib64/libglusterfs.so.0(inode_table_destroy+0xd5) [0x7f97c450e125] ) 0-gfapi: Active inode(0x7f9810552a80) with refcount(2) found during cleanup

Expected results:
No such message.

Additional info:

--- Additional comment from Red Hat Bugzilla Rules Engine on 2017-03-24 14:12:22 EDT ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.3.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Niels de Vos on 2017-03-25 20:01:07 EDT ---

We do not support GlusterFS 3.10 in Red Hat Gluster Storage, so I assume this is a bug against the community version of Gluster and am relocating this bug.
I can reproduce the problem on one of my test systems with CentOS 7 and glusterfs-3.8.10.

--- Additional comment from Niels de Vos on 2017-03-25 20:07 EDT ---

Compile this program with (assuming that the file is saved as bug-1435779.c):

  $ make CFLAGS="$(pkg-config --cflags --libs glusterfs-api)" bug-1435779

Run it with:

  $ ./bug-1435779 <hostname> <volname>
  # <hostname> is a gluster server
  # <volname> is the name of a volume

After running, the bug-1435779.log file will contain the warning:

[2017-03-25 23:59:50.253520] W [inode.c:1809:inode_table_destroy] (-->/lib64/libgfapi.so.0(glfs_fini+0x40d) [0x7f958a1f794d] -->/lib64/libglusterfs.so.0(inode_table_destroy_all+0x51) [0x7f9589f37111] -->/lib64/libglusterfs.so.0(inode_table_destroy+0xd7) [0x7f9589f37017] ) 0-gfapi: Active inode(0x7f95693d1128) with refcount(1) found during cleanup

--- Additional comment from Worker Ant on 2017-04-04 06:24:21 EDT ---

REVIEW: https://review.gluster.org/16989 (gfapi: Fix inode ref leak in anonymous fd I/O APIs) posted (#1) for review on master by soumya k (skoduri)
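The reproducer program itself is an attachment to the bug and is not reproduced here. As a rough illustration, a minimal reproducer following the Steps to Reproduce might look like the sketch below. This is not the actual attachment: the file name bug-1435779.tmp is an arbitrary choice, glfs_h_creat() stands in for the lookup/open that obtains the object handle, and the program needs a reachable Gluster server, so it is illustrative only.

```c
/* Hypothetical sketch of a reproducer for this bug; not the actual
 * attachment. Requires glusterfs-api and a running Gluster server. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <glusterfs/api/glfs.h>
#include <glusterfs/api/glfs-handles.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <hostname> <volname>\n", argv[0]);
        return EXIT_FAILURE;
    }

    struct glfs *fs = glfs_new(argv[2]);
    glfs_set_volfile_server(fs, "tcp", argv[1], 24007);
    glfs_set_logging(fs, "bug-1435779.log", 7);
    if (glfs_init(fs)) {
        fprintf(stderr, "glfs_init failed\n");
        return EXIT_FAILURE;
    }

    /* Obtain an object handle; glfs_h_creat() is used here so the
     * sketch does not depend on a pre-existing file. */
    struct stat sb;
    struct glfs_object *obj = glfs_h_creat(fs, NULL, "bug-1435779.tmp",
                                           O_CREAT | O_RDWR, 0644, &sb);
    if (!obj) {
        fprintf(stderr, "glfs_h_creat failed\n");
        return EXIT_FAILURE;
    }

    /* Anonymous-fd write on the object handle: the call that leaked
     * an inode reference before the fix. */
    const char buf[] = "hello";
    glfs_h_anonymous_write(fs, obj, buf, sizeof(buf), 0);

    glfs_h_close(obj);

    /* With the leak present, this logs "Active inode ... with
     * refcount(N) found during cleanup" into bug-1435779.log. */
    glfs_fini(fs);
    return EXIT_SUCCESS;
}
```

Compiled with the make invocation shown above, running this against a test volume should reproduce the inode_table_destroy warning on affected releases.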
REVIEW: https://review.gluster.org/16989 (gfapi: Fix inode ref leak in anonymous fd I/O APIs) posted (#2) for review on master by soumya k (skoduri)
COMMIT: https://review.gluster.org/16989 committed in master by Niels de Vos (ndevos)

------

commit 761e2dc0432d3723e0f8cbb1cf192ad386addb08
Author: Soumya Koduri <skoduri>
Date:   Tue Apr 4 15:50:29 2017 +0530

    gfapi: Fix inode ref leak in anonymous fd I/O APIs

    In the APIs that do I/O using an anonymous fd, a reference is taken
    on the inode but never released after the operation, resulting in
    the leak.

    Change-Id: I75ea952a6b2df58c385f4f53398e5562f255248d
    BUG: 1438738
    Signed-off-by: Soumya Koduri <skoduri>
    Reviewed-on: https://review.gluster.org/16989
    Reviewed-by: Prashanth Pai <ppai>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: jiffin tony Thottan <jthottan>
    Reviewed-by: Niels de Vos <ndevos>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
This bug is getting closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/