Description of problem:

Pavel Cernohorsky sent email to gluster-users:

... colleague of mine found out the problem is this line:

  itable = inode_table_new (131072, new_subvol);

in glfs-master.c (graph_setup function). That hard-coded number is huge! And looking at the history of Gluster sources, it seems that this number used to be a number of bytes, but it became number of inodes, but someone forgot to change this hard-coded value! ...

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Additional info can be found in the original mailing list post (http://www.gluster.org/pipermail/gluster-users/2016-October/028818.html).

Short summary: Using libgfapi in an application that operates on a lot of different files consumes an extreme amount of memory. Just opening a file, reading a few bytes, and closing it can easily cause the application to consume hundreds of MBs that are never freed. The value should be either lower or configurable during library initialization.
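A minimal reproducer along the lines of the report might look like the following. This is a sketch only: it assumes a reachable volume named "testvol" on host "server1" containing files /file-0 through /file-9999 (all of these names are placeholders). The glfs_* calls are the standard libgfapi entry points; build with `gcc repro.c -o repro -lgfapi`.

```c
/* repro.c -- open/read/close many distinct files through libgfapi and
 * watch RSS grow, per the mailing-list report.  Volume name, host and
 * file paths below are example values, not from the original report. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <glusterfs/api/glfs.h>

int
main (void)
{
        glfs_t *fs = glfs_new ("testvol");
        if (!fs)
                return EXIT_FAILURE;

        glfs_set_volfile_server (fs, "tcp", "server1", 24007);
        if (glfs_init (fs) != 0) {
                glfs_fini (fs);
                return EXIT_FAILURE;
        }

        char path[64], buf[16];
        for (int i = 0; i < 10000; i++) {
                snprintf (path, sizeof (path), "/file-%d", i);
                glfs_fd_t *fd = glfs_open (fs, path, O_RDONLY);
                if (!fd)
                        continue;
                glfs_read (fd, buf, sizeof (buf), 0);  /* read a few bytes */
                glfs_close (fd);
                /* Each distinct file pins an entry in the per-graph inode
                 * table; with an LRU limit of 131072 inodes, none of them
                 * are evicted, so memory use keeps climbing. */
        }

        glfs_fini (fs);
        return EXIT_SUCCESS;
}
```

Running this against a volume with many files and watching the process RSS should show the growth described above, since the inode table caches up to 131072 entries per graph.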
We have different use-cases for libgfapi. Both Samba and NFS-Ganesha would benefit from a large inode-table, but QEMU just needs a few inodes. Tuning the default will be difficult, so I guess it would be best to add a function in libgfapi so that the application can decide how big the inode-table needs to be. Shyam, care to add your thoughts?
(In reply to Niels de Vos from comment #2)
> We have different use-cases for libgfapi. Both Samba and NFS-Ganesha would
> benefit from a large inode-table, but QEMU just needs a few inodes. Tuning
> the default will be difficult, so I guess it would be best to add a function
> in libgfapi so that the application can decide how big the inode-table needs
> to be.

Agreed. As for the API, my thought would be to have a generic config API taking key/value pairs, so that we do not get an API explosion for other such parameters that need to be controlled per instance.
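The generic key/value API proposed above had not been designed at this point. Purely as an illustration of the idea, such an interface might look like the sketch below; glfs_set_config() and the "inode-table-size" key are hypothetical names invented here, not actual libgfapi symbols.

```c
/* Hypothetical sketch of a generic per-instance config API.  Neither
 * glfs_set_config() nor the "inode-table-size" key exist in libgfapi;
 * they only illustrate the key/value style proposed in the comment.
 * Volume and host names are example values. */
#include <glusterfs/api/glfs.h>

int
setup (void)
{
        glfs_t *fs = glfs_new ("testvol");
        glfs_set_volfile_server (fs, "tcp", "server1", 24007);

        /* One generic entry point instead of one function per tunable: */
        glfs_set_config (fs, "inode-table-size", "1024");   /* small table for QEMU-like users */
        glfs_set_config (fs, "some-future-knob", "value");  /* later tunables reuse the same API */

        return glfs_init (fs);
}
```

The design benefit is that adding a new per-instance tunable only requires recognizing a new key, not adding and maintaining a new exported function.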
Discussion about the design for the API can be found at http://lists.gluster.org/pipermail/gluster-devel/2017-March/052228.html
Migrated to github: https://github.com/gluster/glusterfs/issues/603 Please follow the github issue for further updates on this bug.