Bug 1657202

Summary: Possible memory leak in 5.1 brick process
Product: [Community] GlusterFS
Component: core
Version: 5
Hardware: x86_64
OS: Linux
Reporter: Rob de Wit <rob.dewit>
Assignee: bugs <bugs>
CC: atumball, bugs, pasik
Severity: urgent
Priority: high
Fixed In Version: glusterfs-6.x, glusterfs-5.5
Last Closed: 2019-06-17 11:28:56 UTC
Type: Bug
Attachments: statedumps

Description Rob de Wit 2018-12-07 12:59:53 UTC
Created attachment 1512497 [details]

Description of problem: the glusterfs process keeps growing

Version-Release number of selected component (if applicable): 5.1

How reproducible: always

Steps to Reproduce:
1. mount gluster volume
2. use the volume
3. wait for process to grow

Actual results:
The glusterfs process grows to tens of gigabytes:

root     24837 27.5 35.2 24051028 23167000 ?   Ssl  Nov29 3133:46 /usr/sbin/glusterfs --use-readdirp=off --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 --fuse-mountopts=noatime --process-name fuse --volfile-server=SERVER --volfile-id=jf-vol0 --fuse-mountopts=noatime /mnt/jf-vol0
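The growth shown in the ps output above can be logged over time with a minimal sketch like the following (the pgrep pattern, sampling interval, and log path are my assumptions, not part of the report):

```shell
# Sketch: sample the fuse client's resident set size once a minute so
# the growth can be graphed later. The pgrep pattern, interval and log
# path are assumptions.
PID=$(pgrep -f 'glusterfs.*jf-vol0' | head -n 1)
while sleep 60; do
    # /proc/<pid>/status contains a line like "VmRSS:  23167000 kB"
    printf '%s %s\n' "$(date '+%F %T')" \
        "$(awk '/^VmRSS:/ {print $2, $3}' "/proc/$PID/status")"
done >> /var/log/glusterfs-rss.log
```

A steadily rising VmRSS column in the log, with no plateau, is what distinguishes a leak from normal cache growth.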

Expected results:
glusterfs uses reasonable amounts of memory.

Additional info:

The volume contains a large number (some millions) of small files. Some of those are python code, hence the negative-timeout mount option (python tries to open a lot of non-existent files, effectively killing the volume performance).

Attached are four statedumps. I've added redacted versions in which unchanged or fluctuating values are left out. Comparing them with vimdiff, it looks like some of the values only keep on growing.
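One way to compare such dumps mechanically is to total the num_allocs counters in each one; a steadily rising total points at the leaking allocator. This is a sketch: the helper name is mine, and it assumes the statedumps were produced the usual way (sending SIGUSR1 to the glusterfs process writes a dump under /var/run/gluster).

```shell
# Sketch: sum the num_allocs counters in a glusterfs statedump so two
# dumps taken some time apart can be compared. Helper name is an
# assumption; statedump files appear under /var/run/gluster after
# `kill -USR1 <pid>`.
sum_allocs() {
    grep '^num_allocs=' "$1" | cut -d= -f2 | awk '{s += $1} END {print s + 0}'
}

# e.g. sum_allocs /var/run/gluster/glusterdump.<pid>.dump.<timestamp>
```

Running this on two dumps taken an hour apart, and again on per-section slices of the dumps, narrows the leak down to a specific translator's memory pool.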

Comment 1 Rob de Wit 2018-12-18 22:52:03 UTC
Might be related to https://bugzilla.redhat.com/show_bug.cgi?id=1623107 and https://bugzilla.redhat.com/show_bug.cgi?id=1659676
Are any of these fixed in 5.2?

Comment 2 Amar Tumballi 2019-06-17 11:28:56 UTC
robdewit, apologies for the delay in getting back on this. Yes, there were some serious memory leaks, which were fixed in the glusterfs-5.5 and glusterfs-6.1 timeframes.

We recommend upgrading to and testing a newer version to get the fixes.

Comment 3 Rob de Wit 2019-06-17 11:39:35 UTC
Hi Amar,

We've been running the 6.1 release for some time now and memory consumption is back at the previous level: close to 1 GB, but not more than that.