Bug 1319045 - memory increase of glusterfsd
Summary: memory increase of glusterfsd
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Sanju
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-03-18 14:26 UTC by evangelos
Modified: 2020-02-24 04:31 UTC
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-02-24 04:31:10 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
statedump_1_ (26.91 KB, text/plain) - 2016-03-18 14:26 UTC, evangelos
statedump_2_ (28.64 KB, text/plain) - 2016-03-18 14:27 UTC, evangelos
statedump_3_ (28.44 KB, text/plain) - 2016-03-18 14:27 UTC, evangelos
Client graph: RSS vs #ofDirTrees (41.07 KB, image/jpeg) - 2016-05-26 14:49 UTC, Olia Kremmyda
Server-0: RSS vs #DirTrees (40.31 KB, image/jpeg) - 2016-05-26 14:49 UTC, Olia Kremmyda
Server-1: RSS vs #DirTrees (41.19 KB, image/jpeg) - 2016-05-26 14:50 UTC, Olia Kremmyda
Statedumps for nested directories tests (8.48 MB, application/zip) - 2016-05-26 14:54 UTC, Olia Kremmyda
statedumps after directory tree deletion (25.71 KB, application/zip) - 2016-05-26 15:41 UTC, evangelos

Description evangelos 2016-03-18 14:26:15 UTC
Description of problem:
memory increase of glusterfsd

Version-Release number of selected component (if applicable):
glusterfs 3.6.9 built on Mar 15 2016 14:28:33

How reproducible:
create file via 
# dd if=/dev/zero of=/mnt/export/1gfile bs=1M count=1024

Steps to Reproduce:
Initial RSS is 22092 KB:
# pidstat -urd -h -p 337 2 | grep glusterfsd
 1458306821     0       337    0.50    0.00    0.00    0.50     1      0.00      0.00  608012  22092   0.55      0.00      0.00      0.00       0  glusterfsd

statedump collected (attached as statedump_1_.txt)
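
(For reference, GlusterFS statedumps are usually generated either via the CLI or by signalling the brick process; <volname> below is a placeholder, and by default the dump files are written under /var/run/gluster/:)
# gluster volume statedump <volname>
# kill -USR1 337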

execute 
# dd if=/dev/zero of=/mnt/export/1gfile bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 4.55291 s, 236 MB/s

RSS usage increases to 23400 KB:
# pidstat -urd -h -p 337 2 | grep glusterfsd
 1458307852     0       337    0.00    0.00    0.00    0.00     0      0.00      0.00  673548  22092   0.55      0.00      0.00      0.00       0  glusterfsd
 1458307854     0       337    0.00    0.00    0.00    0.00     0      0.00      0.00  673548  22092   0.55      0.00      2.00      0.00       0  glusterfsd
 1458307856     0       337    0.50    0.00    0.00    0.50     0      0.00      0.00  673548  22092   0.55      0.00      0.00      0.00       0  glusterfsd
 1458307858     0       337    0.00    0.50    0.00    0.50     1      0.00      0.00  673548  22092   0.55      0.00      0.00      0.00       0  glusterfsd
 1458307860     0       337    0.00    0.00    0.00    0.00     1      0.00      0.00  673548  22092   0.55      0.00      0.00      0.00       0  glusterfsd
 1458307862     0       337    0.00    0.00    0.00    0.00     0      0.00      0.00  673548  22092   0.55      0.00      0.00      0.00       0  glusterfsd
 1458307864     0       337   38.50   28.50    0.00   67.00     1    127.50      0.00  873240  23140   0.57      0.00 203328.00      0.00       0  glusterfsd
 1458307866     0       337   74.00   31.00    0.00  105.00     0     42.00      0.00 1072932  23400   0.58      0.00 221440.00      0.00       0  glusterfsd
 1458307868     0       337   14.50   15.00    0.00   29.50     0      1.50      0.00 1072932  23400   0.58      0.00  99520.00      0.00       0  glusterfsd
 1458307870     0       337    0.00    0.00    0.00    0.00     0      0.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd
 1458307872     0       337    0.50    0.00    0.00    0.50     0      0.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd
 1458307874     0       337    0.00    0.50    0.00    0.50     1      1.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd
 1458307876     0       337    0.00    0.00    0.00    0.00     1      0.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd
 1458307878     0       337    0.50    0.00    0.00    0.50     1      0.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd
 1458307880     0       337    0.00    0.00    0.00    0.00     1      0.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd
 1458307882     0       337    0.00    0.00    0.00    0.00     1      0.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd
 1458307884     0       337    0.50    0.50    0.00    1.00     1      0.00      0.00 1072932  23400   0.58      0.00      2.00      0.00       0  glusterfsd
 1458307886     0       337    0.00    0.00    0.00    0.00     1      0.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd
 1458307888     0       337    0.00    0.00    0.00    0.00     1      0.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd
 1458307890     0       337    0.50    0.00    0.00    0.50     1      0.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd
 1458307892     0       337    0.00    0.00    0.00    0.00     1      0.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd
 1458307894     0       337    0.00    0.50    0.00    0.50     1      0.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd

statedump collected (attached as statedump_2_.txt)

Then the file is deleted, but RSS stays the same:
1458308180     0       337    0.50    0.50    0.00    1.00     1      0.00      0.00 1072932  23400   0.58      0.00      0.00      0.00       0  glusterfsd

statedump_3_.txt, taken after the deletion of the file, is also attached.

Questions:
Is this normal? That is, is glusterfsd memory usage expected to rise as the file system grows?
Should the memory be freed immediately after deletion, or after some time?
Does this depend on any configuration of the volumes?

I also tried
# echo 3 > /proc/sys/vm/drop_caches
after the file deletion, but it had no effect on RSS usage.

By the way, I tried to re-create the file and memory usage stayed at 23400, but after I created another 1G file RSS increased to 23652.
My query is mostly about the expected behavior (by gluster design) of glusterfsd memory usage while files are created and deleted.

Comment 1 evangelos 2016-03-18 14:26:57 UTC
Created attachment 1137797 [details]
statedump_1_

Comment 2 evangelos 2016-03-18 14:27:20 UTC
Created attachment 1137798 [details]
statedump_2_

Comment 3 evangelos 2016-03-18 14:27:36 UTC
Created attachment 1137800 [details]
statedump_3_

Comment 4 evangelos 2016-03-23 21:53:15 UTC
Hi!

Is there any update on this issue?

Thank you

Comment 5 Olia Kremmyda 2016-05-26 14:49:20 UTC
Created attachment 1162032 [details]
Client graph: RSS vs #ofDirTrees

Comment 6 Olia Kremmyda 2016-05-26 14:49:55 UTC
Created attachment 1162033 [details]
Server-0: RSS vs #DirTrees

Comment 7 Olia Kremmyda 2016-05-26 14:50:32 UTC
Created attachment 1162034 [details]
Server-1: RSS vs #DirTrees

Comment 8 Olia Kremmyda 2016-05-26 14:54:55 UTC
Created attachment 1162035 [details]
Statedumps for nested directories tests

Comment 9 Olia Kremmyda 2016-05-26 14:55:54 UTC
Hi,

We are still running some tests on one replicated volume (named "log") with two bricks.
Our tests consist of nested directory creation operations (from 1000 up to 250000 directory trees) with a depth of 396, and no deletion is performed.

We have observed the memory usage statistics shown in the attached images (statedumps are also attached), and we would like your opinion on whether this memory usage is normal for glusterfs.
Also, after our tests we deleted these directories and the memory was not released.
Can you describe the expected memory behavior in these cases?

Thank you,
Olia
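
(A minimal sketch of the kind of test described above, written as a bash script; the mount point, tree count, and naming scheme are placeholders, not the exact test that was run:)

#!/bin/bash
# Sketch only: create N directory trees, each 396 levels deep, on the gluster mount.
# MOUNT, N and the tree<i>/d<i> names are assumed placeholders.
MOUNT=/mnt/log
N=1000
for t in $(seq 1 "$N"); do
    path="$MOUNT/tree$t"
    for d in $(seq 1 396); do
        path="$path/d$d"
    done
    mkdir -p "$path"   # creates the whole 396-level chain in one call
done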

Comment 10 evangelos 2016-05-26 15:41:35 UTC
Created attachment 1162159 [details]
statedumps after directory tree deletion

statedumps after directory tree deletion

Comment 11 Pranith Kumar K 2016-05-27 04:34:50 UTC
Evangelos, Olia,
  I briefly went through the statedumps you provided. When we create all the directory hierarchies, the inode table is populated with all the inodes/dentries that are created afresh (there is an lru-limit of 16384, so as and when these inodes are forgotten, memory beyond that limit keeps getting reclaimed):

pool-name=log-server:inode_t
hot-count=16383
cold-count=1
padded_sizeof=156
alloc-count=170764724
max-alloc=16384
pool-misses=36330120
cur-stdalloc=68509
max-stdalloc=68793

Once the directory hierarchy is deleted, the number of inodes comes down:
pool-name=log-server:inode_t
hot-count=6
cold-count=16378
padded_sizeof=156
alloc-count=179709943
max-alloc=16384
pool-misses=39154019
cur-stdalloc=1
max-stdalloc=68793

As per the statedump, the memory was released :-/. I wonder why the reduction is not showing up in RSS.

The link below gives details about how to interpret a statedump file, in particular the memory pools relevant here:
https://github.com/gluster/glusterfs/blob/master/doc/debugging/statedump.md#mempools
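
(Back-of-the-envelope reading of the two pool sections above, assuming the inode_t pool's resident memory is roughly padded_sizeof times the in-use objects, i.e. hot-count plus cur-stdalloc: before the deletion that is 16383 + 68509 = 84892 inodes, about 84892 * 156 bytes, roughly 13.2 MB; after the deletion it is 6 + 1 = 7 inodes, about 1 KB, with the pre-allocated pool of 16384 * 156 bytes, roughly 2.6 MB, still retained. This estimates only the inode_t pool itself, not total RSS.)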

Comment 12 evangelos 2016-05-27 18:34:30 UTC
Thank you, we had the same understanding of the total memory as calculated from the statedumps (i.e., the size values in the pools). It is interesting that in various tests (directories, files, etc.), when the filesystem was cleaned up, the total size (from the statedump) decreased but RSS did not.
Could this be related to libc or kernel cache pressure?
I will try to check.
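
(One way to check the libc theory, as a suggestion rather than something tried in this report, is to ask glibc to return freed arena memory to the kernel and watch whether RSS drops, e.g. by attaching gdb and calling malloc_trim:)
# gdb -p 337 -batch -ex 'call (int) malloc_trim(0)'
malloc_trim() returns 1 if some memory was released; if RSS drops noticeably afterwards, the memory was being held by the allocator rather than leaked by glusterfsd.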

Comment 13 Pranith Kumar K 2016-06-15 07:15:49 UTC
Hi,
       Raghavendra G found that 3-4 inodes were leaking when we do cp -r /etc onto a gluster mount and then rm -rf of the directory hierarchy on the mount. Since he is working on it, I am re-assigning this bug to him.

Pranith

Comment 14 Niels de Vos 2016-08-23 12:37:22 UTC
GlusterFS-3.6 is nearing its End-Of-Life; only important security bugs still have a chance of getting fixed. Moving this to the mainline 'version'. If this needs to get fixed in 3.7 or 3.8, this bug should get cloned.

Comment 15 Amar Tumballi 2019-06-18 10:26:03 UTC
It has been a while. Can you try the tests with the latest glusterfs releases? We have made some critical enhancements around memory-related issues. We would like to hear how glusterfs-6.x or upstream/master works for your use case.

Comment 16 ryan 2019-06-19 16:06:41 UTC
Hi Amar,

We're seeing issues with Glusterfsd memory consumption too.
I'll try and test this issue against 6.1 within the next week.

Best,
Ryan

Comment 17 ryan 2019-07-31 12:11:26 UTC
Currently unable to test due to bug 1728183

Comment 18 Vishal Pandey 2019-10-10 08:06:52 UTC
Hi Ryan, can you try to reproduce it on the latest version? I cannot reproduce it for distribute and replicate volumes on the latest master. If you are able to reproduce it on the latest version, can you specify the steps as well?

Comment 19 Vishal Pandey 2019-10-15 11:38:42 UTC
@Ryan Can we get rolling on this issue? Otherwise I will have to close it, since I am not able to reproduce it and there has been no activity on it.

Comment 20 ryan 2019-10-17 12:34:05 UTC
Hi Vishal,

Sorry for the slow reply.
I'm currently unable to test this because of bug 1728183.
If you're able to assist with that bug, I'd be more than happy to test once I'm able to.

Best,
Ryan

Comment 21 Mohit Agrawal 2020-02-24 04:31:10 UTC
There have been no updates on this bug regarding the leak for a long time, and I believe most of the leaks are fixed in the latest releases, so I am closing the bug.

Please reopen it if you face any leak issue in the latest release.

