Bug 1657202 - Possible memory leak in 5.1 brick process
Summary: Possible memory leak in 5.1 brick process
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 5
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-07 12:59 UTC by Rob de Wit
Modified: 2019-06-17 11:39 UTC
CC: 3 users

Fixed In Version: glusterfs-6.x, glusterfs-5.5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-17 11:28:56 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments (Terms of Use)
statedumps (17.79 KB, application/x-gzip) - 2018-12-07 12:59 UTC, Rob de Wit

Description Rob de Wit 2018-12-07 12:59:53 UTC
Created attachment 1512497 [details]
statedumps

Description of problem: the glusterfs process keeps on growing


Version-Release number of selected component (if applicable): 5.1


How reproducible: always


Steps to Reproduce:
1. mount the gluster volume
2. use the volume
3. wait for the process to grow
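The mount step can be sketched as a command line that mirrors the options visible in the running process below; SERVER and jf-vol0 are the placeholders/names from the original report:

```shell
# Hypothetical reproduction of the reported mount, using the same options
# as the glusterfs command line shown under "Actual results".
mount -t glusterfs \
  -o use-readdirp=off,attribute-timeout=600,entry-timeout=600,negative-timeout=600,noatime \
  SERVER:/jf-vol0 /mnt/jf-vol0
```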

Actual results:
the glusterfs process grows to tens of gigabytes:

root     24837 27.5 35.2 24051028 23167000 ?   Ssl  Nov29 3133:46 /usr/sbin/glusterfs --use-readdirp=off --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 --fuse-mountopts=noatime --process-name fuse --volfile-server=SERVER --volfile-id=jf-vol0 --fuse-mountopts=noatime /mnt/jf-vol0


Expected results:
glusterfs uses reasonable amounts of memory.


Additional info:

The volume contains a large number (some millions) of small files. Some of those are python code, hence the negative-timeout mount option (python tries to open a lot of non-existent files, effectively killing the volume performance).

Attached are four statedumps. I've added redacted versions in which unchanged or fluctuating values are left out. If I compare them with vimdiff, it looks like some of the values only keep on growing.
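For reference, a sketch of one way such statedumps can be generated and compared (paths and PID are from this report; the dump directory may vary by distribution):

```shell
# Server side: ask gluster to write statedumps for the volume's brick processes.
gluster volume statedump jf-vol0

# Client side: sending SIGUSR1 to the fuse client makes it dump its state,
# by default into /var/run/gluster. 24837 is the PID from the ps output above.
kill -USR1 24837

# Compare successive dumps to spot counters that only ever grow.
vimdiff /var/run/gluster/*.dump.*
```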

Comment 1 Rob de Wit 2018-12-18 22:52:03 UTC
Might be related to https://bugzilla.redhat.com/show_bug.cgi?id=1623107 and https://bugzilla.redhat.com/show_bug.cgi?id=1659676
Are any of these fixed in 5.2?

Comment 2 Amar Tumballi 2019-06-17 11:28:56 UTC
robdewit, apologies for the delay in getting back on this. Yes, there were some serious memory leaks which got fixed in the glusterfs-5.5 and glusterfs-6.1 timeframes.

We recommend that you upgrade to and test the newer version to get the fixes.

Comment 3 Rob de Wit 2019-06-17 11:39:35 UTC
Hi Amar,

We've been running the 6.1 release for some time now and memory consumption is back at the previous level: close to 1 GB, but not more than that.

Thanks!

