[Migrated from RT] - ticket 712 [http://support.gluster.com/rt/Ticket/Display.html?id=712]

Mon Jan 12 03:13:37 2009 raghavendra - Ticket created

This high memory usage can be attributed to the kernel sending far fewer inode forgets than the number of lookups being performed. Hence, on a setup with a very large number of files, glusterfs tends to use large amounts of memory. Note that in fuse-bridge.c, an inode is not freed until the outstanding lookups on it drop to zero.

Here is a rough calculation of the memory held due to the missing forgets, taken when glusterfs was consuming around 454MB:

[root@brick7 raghu]# grep "activating inode" glusterfs.log | wc -l
3222831
[root@brick7 raghu]# grep "destroy inode" glusterfs.log | wc -l
1185639

Assuming sizeof(inode) = 144 bytes (excluding the dentries), the memory consumed is:

>>> ((3222831 - 1185639) * 144)/(1024 * 1024)
279

So 279MB (excluding dentries) is held up in the inode table, while the total memory usage of glusterfs is 454MB.

--------------------------------------------------------------------------------
# Mon Mar 23 16:53:44 2009 gowda - Correspondence added

Is there something that glusterfs can do about this kind of high memory usage? Should this ticket stay open or be closed? -- gowda
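To make the lookup/forget accounting concrete, here is a minimal sketch of the mechanism described above. The struct layout and function names are assumptions for illustration, not the actual fuse-bridge.c code:

/* Minimal sketch of FUSE lookup/forget accounting; names are assumed. */
#include <stdint.h>
#include <stdlib.h>

struct inode {
        uint64_t nlookup;       /* outstanding kernel lookups */
        /* ... gfid, dentries, etc. ... */
};

/* Every successful LOOKUP reply pins the inode once more
   ("activating inode" in the log). */
static void
inode_lookup (struct inode *ino)
{
        ino->nlookup++;
}

/* A FORGET from the kernel releases nlookup references.  The inode
   can only be destroyed ("destroy inode" in the log) once the count
   reaches zero, so if the kernel sends far fewer forgets than
   lookups, inodes pile up in the inode table. */
static void
inode_forget (struct inode *ino, uint64_t nlookup)
{
        ino->nlookup -= nlookup;
        if (ino->nlookup == 0)
                free (ino);
}

With 3222831 lookups against only 1185639 forgets, roughly two million inodes stay pinned this way, which is where the 279MB computed above comes from.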
I will confirm we see this additional memory usage with 2.0.3. The memory usage seems stable, however.
Internal discussion related to fuse:

> > 2. add an api to use the latest cache invalidation features (or did
> > I miss it?)
>
> It's easy to export the invalidation interface as such from
> fuse-bridge (and add an interface to set cache timeouts [I guess it
> would be fine on a per-session basis]); the question is, how to make
> use of it? Should we have a dedicated translator? Or which component
> would invoke the invalidation calls and could keep track of what's
> cached and what's outdated? The client xlator? What mechanism would
> be used for being notified of changes? Inotify here too? Implementing
> cache invalidation requires more thought/work on glusterfs in general
> than in fuse-bridge...

Correct. We will be adding a new entry into xlator->cbks{} to notify xlators of cache invalidation. Each xlator will do its job of "cleaning up" whatever is necessary given the invalidation. This way, the mount/fuse xlator's cbk will issue this new invalidation call to the kernel.

With the above feature, we should be able to control the size of the kernel cache and hence the memory usage of glusterfs.

(In reply to comment #1)
> I will confirm we see this additional memory usage with 2.0.3. The memory usage
> seems stable, however.
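As a rough sketch of the mount/fuse side of such a cbk, using libfuse's low-level notify interface (available since libfuse 2.8): the invalidate hook and the fuse_private layout here are assumptions for illustration, not existing GlusterFS interfaces.

/* Sketch only: a hypothetical invalidation cbk in mount/fuse. */
#define FUSE_USE_VERSION 28
#include <fuse_lowlevel.h>

struct fuse_private {
        struct fuse_chan *ch;   /* channel to /dev/fuse */
};

/* Called when an upper xlator signals that an inode's cache is stale.
   Forwarding it to the kernel drops the page/attribute cache for that
   inode, which also lets the kernel send FORGETs sooner and bounds
   glusterfs memory usage. */
static int
fuse_invalidate (struct fuse_private *priv, fuse_ino_t ino)
{
        /* off = 0, len = 0 invalidates the whole cached range. */
        return fuse_lowlevel_notify_inval_inode (priv->ch, ino, 0, 0);
}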
Closing this ticket, as we have fixed several memory-leak bugs in GlusterFS since that time. Please reopen if the issue is seen again.