Bug 1540403
Summary: | High memory usage on gluster volume / bricks. | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Ben Turner <bturner>
Component: | fuse | Assignee: | Csaba Henk <csaba>
Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | Rahul Hinduja <rhinduja>
Severity: | high | Docs Contact: |
Priority: | high | |
Version: | rhgs-3.3 | CC: | amukherj, bturner, nbalacha, olim, rgowdapp, rhs-bugs, storage-qa-internal
Target Milestone: | --- | Keywords: | ZStream
Target Release: | --- | |
Hardware: | All | |
OS: | All | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2019-09-26 03:49:26 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1647277 | |
Description
Ben Turner
2018-01-31 00:53:47 UTC
Hello, I noticed that some of the cur-stdalloc values are quite high:

    -------------------
    pool-name=glusterfs:data_t
    hot-count=16384          # number of mempool elements that are in active use, i.e. for this pool the number of 'data_t's in active use.
    cold-count=0             # number of mempool elements that are not in use. If a new allocation is required it is served from here until all the elements in the pool are in use, i.e. cold-count becomes 0.
    padded_sizeof=92         # each mempool element is padded with a doubly-linked list + a pointer to the mempool + is-in-use info to operate the pool of elements; this is the element size after padding.
    alloc-count=5037173938   # number of times this type of data is allocated throughout the life of this process. This may include pool-misses as well.
    max-alloc=16384          # maximum number of elements from the pool in active use at any point in the life of the process. This does *not* include pool-misses.
    pool-misses=5019282721   # number of times the element had to be allocated from heap because all elements from the pool were in active use.
    cur-stdalloc=6311017     # number of allocations made from heap once cold-count reaches 0 that are yet to be released via mem_put().
    max-stdalloc=6396965     # maximum number of allocations from heap that are in active use at any point in the life of the process.
    -----=-----
    pool-name=glusterfs:dict_t
    hot-count=4096
    cold-count=0
    padded_sizeof=172
    alloc-count=1534198868
    max-alloc=4096
    pool-misses=1528426715
    cur-stdalloc=6347813
    max-stdalloc=6433931
    -----=-----
    pool-name=glusterfs:data_pair_t
    hot-count=198
    cold-count=16186
    padded_sizeof=68
    alloc-count=5950695670
    max-alloc=525
    pool-misses=0
    cur-stdalloc=0
    max-stdalloc=0
    -----=-----
    pool-name=glusterfs:call_frame_t
    hot-count=1
    cold-count=4095
    padded_sizeof=212
    alloc-count=2551228022
    max-alloc=140
    pool-misses=0
    cur-stdalloc=0
    max-stdalloc=0
    -----=-----
    pool-name=fuse:dentry_t
    hot-count=32768
    cold-count=0
    padded_sizeof=84
    alloc-count=15526082
    max-alloc=32768
    pool-misses=15427794
    cur-stdalloc=7151520
    max-stdalloc=7238193
    -----=-----
    pool-name=fuse:inode_t
    hot-count=32768
    cold-count=0
    padded_sizeof=188
    alloc-count=59799663
    max-alloc=32768
    pool-misses=59417032
    cur-stdalloc=7151521
    max-stdalloc=7238195

This bug indeed looks to be a problem of high memory allocated to inode-ctxs (as already pointed out in other analysis).
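
To make the counters above easier to reason about, here is a minimal sketch of how a GlusterFS-style mempool falls back to the heap once its preallocated slots are exhausted. This is an illustration only, not the actual libglusterfs mem-pool code; the names `toy_pool`, `toy_get`, and `toy_put` are hypothetical, while the counter names mirror the statedump fields above.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical, simplified mempool used only to illustrate what the
 * statedump counters count. Not the real libglusterfs mem-pool code. */
struct toy_pool {
    size_t   padded_sizeof;   /* element size after padding             */
    uint64_t hot_count;       /* pool elements in active use            */
    uint64_t cold_count;      /* preallocated pool elements still free  */
    uint64_t alloc_count;     /* every successful allocation, ever      */
    uint64_t max_alloc;       /* peak hot_count (pool slots only)       */
    uint64_t pool_misses;     /* allocations that bypassed the pool     */
    uint64_t cur_stdalloc;    /* heap allocations not yet released      */
    uint64_t max_stdalloc;    /* peak cur_stdalloc                      */
};

void *toy_get(struct toy_pool *p)
{
    p->alloc_count++;
    if (p->cold_count > 0) {
        /* Serve from the pool: one free slot becomes active. */
        p->cold_count--;
        p->hot_count++;
        if (p->hot_count > p->max_alloc)
            p->max_alloc = p->hot_count;
        return malloc(p->padded_sizeof);   /* stands in for a pool slot */
    }
    /* Pool exhausted (cold_count == 0): fall back to the heap and
     * account for it as a pool-miss / standard allocation. */
    p->pool_misses++;
    p->cur_stdalloc++;
    if (p->cur_stdalloc > p->max_stdalloc)
        p->max_stdalloc = p->cur_stdalloc;
    return malloc(p->padded_sizeof);
}

void toy_put(struct toy_pool *p, void *elem, int came_from_heap)
{
    free(elem);
    if (came_from_heap)
        p->cur_stdalloc--;                 /* heap allocation released  */
    else {
        p->hot_count--;                    /* slot returned to the pool */
        p->cold_count++;
    }
}
```

Read through this lens, the fuse:inode_t pool alone has cur-stdalloc=7151521 heap-resident elements of padded_sizeof=188 bytes, which is roughly 7151521 x 188 bytes, or about 1.3 GB, as a back-of-the-envelope estimate, before counting whatever the inode contexts attached to those inodes consume. That is consistent with the conclusion above that the high memory usage is dominated by inodes and their contexts.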