Bug 809063 - glusterfs process is taking some 70% mem usage after some stress testing.
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: fuse
Version: mainline
Hardware: x86_64 Linux
Priority: medium   Severity: high
Assigned To: Raghavendra Bhat
Keywords: Triaged
Depends On:
Blocks: 848341
Reported: 2012-04-02 07:28 EDT by Vijaykumar Koppad
Modified: 2014-08-24 20:49 EDT
CC List: 2 users

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Cloned to: 848341
Last Closed: 2013-07-24 13:26:20 EDT


Attachments
glusterfs process statedump. (810.91 KB, application/octet-stream)
2012-04-02 07:28 EDT, Vijaykumar Koppad

Description Vijaykumar Koppad 2012-04-02 07:28:56 EDT
Created attachment 574469 [details]
glusterfs process statedump.

Description of problem:
statedump output - 

[mallinfo]
mallinfo_arena=2080006144
mallinfo_ordblks=98988
mallinfo_smblks=4
mallinfo_hblks=13
mallinfo_hblkhd=18313216
mallinfo_usmblks=0
mallinfo_fsmblks=400
mallinfo_uordblks=2076200960
mallinfo_fordblks=3805184
mallinfo_keepcost=29040


pool-name=fuse:fd_t
hot-count=0
cold-count=1024
padded_sizeof=100
alloc-count=6911
max-alloc=1
pool-misses=0
max-stdalloc=0
-----=-----
pool-name=fuse:dentry_t
hot-count=1672
cold-count=31096
padded_sizeof=84
alloc-count=120694
max-alloc=32768
pool-misses=2072
max-stdalloc=2072
-----=-----
pool-name=fuse:inode_t
hot-count=1674
cold-count=31094
padded_sizeof=148
alloc-count=249800
max-alloc=32768
pool-misses=4273
max-stdalloc=2074
-----=-----
pool-name=master-client-0:struct saved_frame
hot-count=1
cold-count=511
padded_sizeof=124
alloc-count=186227
max-alloc=4
pool-misses=0
max-stdalloc=0
-----=-----
pool-name=master-client-0:struct rpc_req
hot-count=1
cold-count=511
padded_sizeof=2236
alloc-count=186227
max-alloc=4
pool-misses=0
max-stdalloc=0
-----=-----
pool-name=master-client-0:clnt_local_t
hot-count=1
cold-count=63
padded_sizeof=1284
alloc-count=179163
max-alloc=3
pool-misses=0
max-stdalloc=0
-----=-----
pool-name=master-client-1:struct saved_frame
hot-count=0
cold-count=512
padded_sizeof=124
alloc-count=187084
max-alloc=4
pool-misses=0
max-stdalloc=0
-----=-----
pool-name=master-client-1:struct rpc_req
hot-count=0
cold-count=512
padded_sizeof=2236
alloc-count=187084
max-alloc=4
pool-misses=0
max-stdalloc=0
-----=-----
pool-name=master-client-1:clnt_local_t
hot-count=0
cold-count=64
padded_sizeof=1284
alloc-count=180021
max-alloc=3
pool-misses=0
max-stdalloc=0
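
For scale: multiplying each pool's padded_sizeof by its hot-count + cold-count puts the mem-pools themselves at only about 10 MB, so the ~2 GB visible under [mallinfo] has to come mostly from heap allocations made outside these pools. A minimal sketch of that arithmetic, using only the figures from this statedump (illustrative only, not GlusterFS code):

#include <stdio.h>

/* Rough footprint of the mem-pools listed in the statedump:
 * padded_sizeof * (hot-count + cold-count) per pool. */
struct pool { const char *name; long padded_sizeof; long hot; long cold; };

int main(void)
{
    const struct pool pools[] = {
        { "fuse:fd_t",                            100,    0,  1024 },
        { "fuse:dentry_t",                         84, 1672, 31096 },
        { "fuse:inode_t",                         148, 1674, 31094 },
        { "master-client-0:struct saved_frame",   124,    1,   511 },
        { "master-client-0:struct rpc_req",      2236,    1,   511 },
        { "master-client-0:clnt_local_t",        1284,    1,    63 },
        { "master-client-1:struct saved_frame",   124,    0,   512 },
        { "master-client-1:struct rpc_req",      2236,    0,   512 },
        { "master-client-1:clnt_local_t",        1284,    0,    64 },
    };
    long total = 0;

    for (size_t i = 0; i < sizeof(pools) / sizeof(pools[0]); i++) {
        long bytes = pools[i].padded_sizeof * (pools[i].hot + pools[i].cold);
        printf("%-38s %10ld bytes\n", pools[i].name, bytes);
        total += bytes;
    }
    /* Comes to roughly 10 MB in total -- a tiny fraction of the ~1.9 GiB
     * that mallinfo_uordblks reports as allocated and in use. */
    printf("total pool footprint: %ld bytes (~%.1f MiB)\n",
           total, total / (1024.0 * 1024.0));
    return 0;
}
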
Comment 1 Amar Tumballi 2012-04-11 07:20:19 EDT
Mostly this looks like heap fragmentation, going by the mallinfo_uordblks and mallinfo_ordblks values above.
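
For anyone following this reasoning: the [mallinfo] section of the statedump carries glibc's mallinfo() counters, where uordblks is allocated bytes, fordblks is free bytes and ordblks is the number of free chunks. In the attached dump, uordblks is about 1.93 GiB against roughly 3.6 MiB of free space split across 98,988 free chunks. A minimal sketch of the underlying call (plain glibc, not GlusterFS code):

#include <malloc.h>
#include <stdio.h>

int main(void)
{
    /* Legacy glibc interface; the mallinfo_* lines in the statedump use
     * the same field names.  The fields are plain int, so they can wrap
     * once a counter crosses ~2 GiB (mallinfo2() fixes that on newer
     * glibc). */
    struct mallinfo mi = mallinfo();

    printf("arena    (heap via brk, bytes)        = %d\n", mi.arena);
    printf("hblkhd   (mmap'd regions, bytes)      = %d\n", mi.hblkhd);
    printf("uordblks (allocated / in use, bytes)  = %d\n", mi.uordblks);
    printf("fordblks (free, bytes)                = %d\n", mi.fordblks);
    printf("ordblks  (number of free chunks)      = %d\n", mi.ordblks);
    return 0;
}
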
Comment 2 Amar Tumballi 2012-04-19 03:39:39 EDT
Next time you run this set of tests, please run it under Valgrind so we can capture the leaks properly.

One possibility is quick-read's dictionary getting cached in md-cache, which can lead to a huge leak (Thanks to Brian Foster/Avati on the md-cache/quick-read causing memory consumption)
Comment 3 Amar Tumballi 2012-04-19 03:42:24 EDT
> One possibility is quick-read's dictionary getting cached in md-cache,
> which can lead to a huge leak (Thanks to Brian Foster/Avati on the
> md-cache/quick-read causing memory consumption)

Correction:

Thanks to Brian Foster and Avati for *finding* the memory consumption issue when md-cache and quick-read are used together.
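
To make the failure mode described in comments 2 and 3 a bit more concrete: if md-cache keeps a reference on the entire dict that quick-read used to carry a file's content, every cached inode pins that content for the lifetime of its cache entry, and memory use grows with the working set even though nothing is leaked in the strict malloc sense. Below is a minimal, self-contained sketch of that reference-counting pattern; xdict_t, md_cache_store and the sizes are hypothetical stand-ins for illustration, not the actual GlusterFS dict or md-cache interfaces.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a refcounted dictionary carrying a large
 * payload, the way quick-read attaches file content to lookup replies.
 * Illustrative names only, not the GlusterFS dict API. */
typedef struct {
    int    refcount;
    char  *payload;
    size_t payload_size;
} xdict_t;

static xdict_t *xdict_new(size_t payload_size)
{
    xdict_t *d = calloc(1, sizeof(*d));
    d->refcount = 1;
    d->payload = malloc(payload_size);
    d->payload_size = payload_size;
    memset(d->payload, 'x', payload_size);   /* touch it so it counts as RSS */
    return d;
}

static void xdict_ref(xdict_t *d) { d->refcount++; }

static void xdict_unref(xdict_t *d)
{
    if (--d->refcount == 0) {
        free(d->payload);
        free(d);
    }
}

/* A per-inode metadata cache that pins the whole dict: as long as the
 * cache entry exists, the payload cannot be freed. */
#define CACHED_INODES 1000
static xdict_t *md_cache[CACHED_INODES];

static void md_cache_store(int inode, xdict_t *d)
{
    if (md_cache[inode])
        xdict_unref(md_cache[inode]);
    xdict_ref(d);
    md_cache[inode] = d;
}

int main(void)
{
    /* Simulate lookups on 1000 files, each reply carrying 1 MiB of
     * content in its dict. */
    for (int i = 0; i < CACHED_INODES; i++) {
        xdict_t *d = xdict_new(1 << 20);
        md_cache_store(i, d);      /* cache keeps a ref on the whole dict */
        xdict_unref(d);            /* caller is done with it */
    }
    /* Nothing is leaked in the malloc sense, yet ~1 GiB of content is
     * still pinned by the cache and shows up as in-use heap. */
    printf("still pinned: ~%d MiB of cached content\n", CACHED_INODES);
    return 0;
}

The patch referenced in comment 5 addresses exactly this quick-read/md-cache interaction.
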
Comment 4 Amar Tumballi 2012-05-04 03:01:42 EDT
Taking this out of the Beta Blocker list, considering the multiple patches that have gone in to fix the obvious memory leaks. The only serious pending task is to handle the md-cache/quick-read memory consumption behavior, for which Brian Foster has already sent a patch.
Comment 5 Raghavendra Bhat 2012-12-04 05:20:39 EST
The patch to handle the quick-read/md-cache dict memory consumption has gone in: http://review.gluster.com/3268
