Bug 761825 (GLUSTER-93) - High memory usage of glusterfs - around 700 MB for a run of around 24hrs
Summary: High memory usage of glusterfs - around 700 MB for a run of around 24hrs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-93
Product: GlusterFS
Classification: Community
Component: core
Version: pre-2.0
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-06-25 13:47 UTC by Basavanagowda Kanur
Modified: 2010-01-29 13:41 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: RTNR
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Basavanagowda Kanur 2009-06-25 13:47:27 UTC
[Migrated from RT] - ticket 712 [http://support.gluster.com/rt/Ticket/Display.html?id=712]

Mon Jan 12 03:13:37 2009  	 raghavendra - Ticket created  

This high usage can be attributed to the kernel sending fewer
inode forgets than the number of lookups being performed.
Hence, on a setup consisting of a very large number of files, glusterfs
tends to use large amounts of memory.

Note that in fuse-bridge.c, an inode is not freed until the lookup
count on it drops to zero.
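
For context, a minimal sketch of the lookup/forget accounting described
above (the struct and function names here are illustrative, not the
actual fuse-bridge.c code):

#include <stdint.h>
#include <stdlib.h>

struct demo_inode {
        uint64_t nlookup;   /* successful lookups minus forgets */
        /* ... inode state, dentries, etc. ... */
};

/* Every successful LOOKUP reply bumps the count. */
static void on_lookup_reply (struct demo_inode *inode)
{
        inode->nlookup++;
}

/* The kernel's FORGET carries the number of lookups to drop.
 * The inode can only be freed once the count reaches zero, so
 * if forgets lag far behind lookups, inodes (and their memory)
 * accumulate -- exactly the behaviour reported in this bug. */
static void on_forget (struct demo_inode *inode, uint64_t nlookup)
{
        if (nlookup >= inode->nlookup)
                inode->nlookup = 0;
        else
                inode->nlookup -= nlookup;

        if (inode->nlookup == 0)
                free (inode);
}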

Here is a rough calculation of the amount of memory attributable to
the shortfall of forgets, taken when glusterfs was consuming around
454MB of memory.

[root@brick7 raghu]# grep "activating inode" glusterfs.log | wc -l
3222831
[root@brick7 raghu]# grep "destroy inode" glusterfs.log | wc -l
1185639

Assuming sizeof(inode) = 144 bytes (excluding the dentries), the memory consumed is:
>>> ((3222831 - 1185639) * 144)/(1024 * 1024)
279

So 279MB (excluding dentries) is held up in the inode_table when the
total memory usage of glusterfs is 454MB.

--------------------------------------------------------------------------------
#   	Mon Mar 23 16:53:44 2009 	gowda - Correspondence added

Is there something that glusterfs can do about this kind of high memory
usage? Should this ticket remain open or be closed?
-- 
gowda

Comment 1 Jonathan Steffan 2009-07-09 16:12:39 UTC
I can confirm that we see this additional memory usage with 2.0.3. The memory usage seems stable, however.

Comment 2 Raghavendra G 2009-07-28 15:25:30 UTC
Internal discussion related to FUSE:

> > 2. add an API to use the latest cache invalidation features (or did
> > I miss it?)
>
> It's easy to export the invalidation interface as such from
> fuse-bridge (and add an interface to set cache timeouts [I guess it
> would be fine on a per-session basis]); the question is, how to make
> use of it? Should we have a dedicated translator? Or which component
> would invoke the invalidation calls and could keep track of what's
> cached and what's outdated? The client xlator? What mechanism would be
> used for being notified of changes? Inotify here too? Implementing
> cache invalidation requires more thought/work on glusterfs in general
> than in fuse-bridge...

Correct, we will be adding a new entry into xlator->cbks{} to notify
xlators of cache invalidation. Each xlator will do its job of "cleaning up"
whatever is necessary in response to the cache invalidation. This way, the
mount/fuse xlator's cbk will forward the new invalidation call to
the kernel.

With the above feature, we should be able to control the size of the kernel cache and hence the memory usage of glusterfs.
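
A minimal sketch of what such a callback-table entry might look like;
the invalidate member and its signature are hypothetical, drawn from the
proposal above rather than from any shipped GlusterFS API:

typedef struct _xlator xlator_t;
typedef struct _inode inode_t;

typedef int (*cbk_forget_t) (xlator_t *this, inode_t *inode);
typedef int (*cbk_invalidate_t) (xlator_t *this, inode_t *inode);

/* The real xlator_cbks table already carries callbacks such as
 * forget/release; 'invalidate' below is the proposed new entry. */
struct xlator_cbks {
        cbk_forget_t     forget;      /* existing: inode dropped */
        cbk_invalidate_t invalidate;  /* proposed: cache entry invalidated */
};

/* Each xlator would clean up its own cached state here; the
 * mount/fuse xlator's callback would additionally forward the
 * invalidation to the kernel (in libfuse terms, something like
 * fuse_lowlevel_notify_inval_inode()). */
static int
my_xlator_invalidate (xlator_t *this, inode_t *inode)
{
        /* drop whatever this translator caches for 'inode' */
        return 0;
}

struct xlator_cbks cbks = {
        .forget     = NULL,
        .invalidate = my_xlator_invalidate,
};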

(In reply to comment #1)
> I can confirm that we see this additional memory usage with 2.0.3. The memory
> usage seems stable, however.

Comment 3 Amar Tumballi 2009-11-26 03:55:21 UTC
Closing this ticket, as several bugs related to memory leaks in GlusterFS have been fixed since then.

Please reopen if the issue is seen again.

