| Summary: | High memory usage of glusterfs - around 700 MB for a run of around 24hrs | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Basavanagowda Kanur <gowda> |
| Component: | core | Assignee: | Raghavendra G <raghavendra> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | pre-2.0 | CC: | amarts, gluster-bugs, gowda, jonathansteffan, rabhat |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | RTNR | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Basavanagowda Kanur
2009-06-25 13:47:27 UTC
Comment 1:

I will confirm that we see this additional memory usage with 2.0.3. The memory usage seems stable, however.

Comment 2 (internal discussion related to fuse):

> > 2. add an api to use the latest cache invalidation features (or did I miss it?)
>
> It's easy to export the invalidation interface as such from fuse-bridge (and add an interface to set cache timeouts; I guess a per-session basis would be fine). The question is how to make use of it. Should we have a dedicated translator? Or which component would invoke the invalidation calls and keep track of what is cached and what is outdated? The client xlator? What mechanism would be used for being notified of changes? inotify here too? Implementing cache invalidation requires more thought/work on glusterfs in general than on fuse-bridge...

Correct. We will be adding a new entry to xlator->cbks{} to notify xlators of cache invalidation. Each xlator will do its own job of cleaning up whatever is necessary for the invalidated entry, and the mount/fuse xlator's callback will in turn issue the invalidation call to the kernel. With this feature we should be able to control the size of the kernel cache, and hence the memory usage of glusterfs.

Comment 3:

(In reply to comment #1)
> I will confirm that we see this additional memory usage with 2.0.3. The memory usage seems stable, however.

Closing this ticket, as several memory-leak bugs in GlusterFS have been fixed since then. Please reopen if the issue is seen again.
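To make the invalidation proposal in comment 2 easier to picture, here is a minimal, purely illustrative C sketch. It is not the actual GlusterFS code or API: the `invalidate` entry, the stub callback table, and both callback functions are hypothetical stand-ins. The fuse-bridge callback only logs what the real implementation would do (send an inode-invalidation notification, e.g. a FUSE_NOTIFY_INVAL_INODE message, to the kernel via /dev/fuse).

```c
/*
 * Illustrative sketch only -- NOT the actual GlusterFS API.
 * It assumes a hypothetical "invalidate" entry in a stand-in xlator
 * callback table; all names and types here are invented for the example.
 */
#include <stdint.h>
#include <stdio.h>

/* Stand-in for an inode handle; real xlator code passes inode_t *. */
typedef struct {
        uint64_t ino;
} inode_stub_t;

/* Hypothetical per-xlator callback table with a new invalidate hook. */
typedef struct xlator_cbks_stub {
        int (*forget)(inode_stub_t *inode);
        int (*invalidate)(inode_stub_t *inode);   /* proposed new entry */
} xlator_cbks_stub_t;

/* A generic xlator would drop whatever private cache it holds for the inode. */
static int
sample_invalidate(inode_stub_t *inode)
{
        printf("xlator: dropping cached state for inode %llu\n",
               (unsigned long long)inode->ino);
        return 0;
}

/*
 * The mount/fuse xlator's callback would additionally ask the kernel to
 * invalidate its cache for the inode; here we only log that intent.
 */
static int
fuse_bridge_invalidate(inode_stub_t *inode)
{
        printf("fuse-bridge: sending inode invalidation for %llu to kernel\n",
               (unsigned long long)inode->ino);
        return 0;
}

int
main(void)
{
        xlator_cbks_stub_t generic = { .forget = NULL,
                                       .invalidate = sample_invalidate };
        xlator_cbks_stub_t fuse    = { .forget = NULL,
                                       .invalidate = fuse_bridge_invalidate };
        inode_stub_t stale = { .ino = 42 };

        /* Whoever detects staleness notifies each xlator in the graph. */
        generic.invalidate(&stale);
        fuse.invalidate(&stale);
        return 0;
}
```

In the design discussed in the comment, the invalidation would be delivered as a callback through the translator graph and only the fuse-bridge end would talk to the kernel; everything else above is scaffolding so the sketch compiles on its own.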