Bug 1319045
| Summary: | memory increase of glusterfsd | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | evangelos <vpolakis> |
| Component: | core | Assignee: | Sanju <srakonde> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | mainline | CC: | bugs, hgowtham, moagrawa, ndevos, olympia.kremmyda, ryan, t.bueter, vpolakis |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-02-24 04:31:10 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description (evangelos, 2016-03-18 14:26:15 UTC)
Created attachment 1137797 [details]
statedump_1_
Created attachment 1137798 [details]
statedump_2_
Created attachment 1137800 [details]
statedump_3_
Hi! Is there any update on this issue? Thank you.

Created attachment 1162032 [details]
Client graph: RSS vs #ofDirTrees
Created attachment 1162033 [details]
Server-0: RSS vs #DirTrees
Created attachment 1162034 [details]
Server-1: RSS vs #DirTrees
Created attachment 1162035 [details]
Statedumps for nested directories tests
Hi,

We are still running some tests on one replicated volume (named “log”) with two bricks. Our tests consist of nested directory creation operations (from 1000 up to 250000 directory trees) with a depth of 396, and no deletion is performed. We have observed the memory usage statistics shown in the attached images (statedumps are also attached) and would like your opinion on whether this memory usage is normal for GlusterFS. Also, after our tests we deleted these directories and the memory was not released. Can you describe the expected memory behavior in these cases?

Thank you, Olia

Created attachment 1162159 [details]
statedumps after directory tree deletion
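For reference, a minimal sketch of the kind of workload described above (deeply nested directory trees created on the mounted volume, then deleted, with brick RSS sampled along the way); the mount point, tree count, and the way RSS is sampled are assumptions for illustration, not the exact test harness used:

```sh
#!/bin/sh
# Rough reproduction sketch: build nested directory trees on the mounted
# "log" volume, then delete them, sampling glusterfsd RSS before and after.
MOUNT=/mnt/log   # assumed mount point of the replicated "log" volume
TREES=1000       # number of directory trees (the tests above went up to 250000)
DEPTH=396        # nesting depth used in the tests

ps -o pid=,rss=,comm= -C glusterfsd    # RSS (KiB) of the brick processes before the run

i=1
while [ "$i" -le "$TREES" ]; do
    # Build one deeply nested path: tree-$i/d1/d2/.../d$DEPTH
    path="$MOUNT/tree-$i"
    d=1
    while [ "$d" -le "$DEPTH" ]; do
        path="$path/d$d"
        d=$((d + 1))
    done
    mkdir -p "$path"
    i=$((i + 1))
done

ps -o pid=,rss=,comm= -C glusterfsd    # RSS after creation

rm -rf "$MOUNT"/tree-*                 # delete the trees ...
ps -o pid=,rss=,comm= -C glusterfsd    # ... and check whether RSS comes back down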
Evangelos, Olia,

I went through the statedumps you provided briefly. When we create all the directory hierarchies, the inode table is populated with all the inodes/dentries that are created afresh (there is an lru-limit of 16384, so as and when these inodes are forgotten we keep reclaiming the memory beyond that limit):

pool-name=log-server:inode_t hot-count=16383 cold-count=1 padded_sizeof=156 alloc-count=170764724 max-alloc=16384 pool-misses=36330120 cur-stdalloc=68509 max-stdalloc=68793

Once the directory hierarchy is deleted, the number of inodes comes down:

pool-name=log-server:inode_t hot-count=6 cold-count=16378 padded_sizeof=156 alloc-count=179709943 max-alloc=16384 pool-misses=39154019 cur-stdalloc=1 max-stdalloc=68793

As per the statedump, the memory was released :-/. I wonder why the reduction is not reflected in RSS. The link below gives details on how to interpret a statedump file, in particular the memory pools relevant here:

https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md#mempools

Thank you, we had the same understanding of the total memory as calculated from the statedumps (i.e. the size values in the pools). It is interesting that in various tests (directories, files, etc.), when the filesystem was cleaned up, the total size (from the statedump) decreased but RSS did not.

Could this be related to libc or kernel cache pressure? I will try to check.
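As an aside, a hedged sketch of how statedumps like the ones above can be generated and the inode_t mempool counters pulled out; the volume name comes from the comments, the default dump directory (/var/run/gluster) and the SIGUSR1 trigger are described in the linked statedump documentation, and the filename glob is an assumption:

```sh
# Ask the brick processes (glusterfsd) of the "log" volume for a statedump.
gluster volume statedump log

# For a client mount (the glusterfs process), SIGUSR1 triggers the same dump:
#   kill -USR1 <pid-of-glusterfs>

# Dumps are written under /var/run/gluster by default; pick the newest file
# and show the inode_t mempool counters discussed above.
dump=$(ls -t /var/run/gluster/*dump* | head -n 1)
grep -A 8 'pool-name=.*inode_t' "$dump"
```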
Hi,

Raghavendra G found that 3-4 inodes were leaking when we do a cp -r /etc onto the gluster mount and then an rm -rf of the directory hierarchy on the mount. Since he is working on it, I am re-assigning this bug to him.
Pranith
GlusterFS-3.6 is nearing its End-Of-Life; only important security bugs still have a chance of getting fixed. Moving this to the mainline 'version'. If this needs to get fixed in 3.7 or 3.8, this bug should get cloned.

It has been a while. Can we try the tests with the latest GlusterFS releases? We have made some critical enhancements around memory-related issues. We would like to hear how glusterfs-6.x or upstream/master works for your use case.

Hi Amar, we're seeing issues with glusterfsd memory consumption too. I'll try to test this issue against 6.1 within the next week. Best, Ryan

Currently unable to test due to bug 1728183.

Hi Ryan, can you try to reproduce it on the latest version? I cannot reproduce it for distribute and replicate volumes on the latest master. If you are able to reproduce it on the latest version, can you specify the steps as well?

@Ryan Can we get rolling on this issue? Otherwise I will have to close it, since I am not able to reproduce it and there has been a lack of activity on this.

Hi Vishal, sorry for the slow reply. I'm currently unable to test this as a result of bug #1728183. If you're able to assist with that bug, I'd be more than happy to test once I'm able to. Best, Ryan

There have been no updates on this bug regarding the leak for a long time, and I believe most of the leaks are fixed in the latest releases, so I am closing the bug. Please reopen it if you face any leak issue in the latest release.
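If the tests are re-run on glusterfs-6.x or a newer release as suggested above, one simple way to watch for the growth described in this bug is to sample the brick processes' RSS over time; a minimal sketch (the sampling interval and log file name are arbitrary choices):

```sh
# Append a timestamped RSS sample for every glusterfsd (brick) process once a
# minute; compare samples taken before and after creating and deleting the trees.
while true; do
    printf '%s ' "$(date -u +%FT%TZ)"
    ps -o pid=,rss=,comm= -C glusterfsd | tr '\n' ' '
    echo
    sleep 60
done >> glusterfsd-rss.log
```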