Bug 1657202
| Field | Value |
| --- | --- |
| Summary | Possible memory leak in 5.1 brick process |
| Product | [Community] GlusterFS |
| Component | core |
| Version | 5 |
| Hardware | x86_64 |
| OS | Linux |
| Status | CLOSED CURRENTRELEASE |
| Severity | urgent |
| Priority | high |
| Reporter | Rob de Wit <rob.dewit> |
| Assignee | bugs <bugs> |
| CC | atumball, bugs, pasik |
| Fixed In Version | glusterfs-6.x, glusterfs-5.5 |
| Type | Bug |
| Last Closed | 2019-06-17 11:28:56 UTC |
| Attachments | statedumps (attachment 1512497) |
Might be related to https://bugzilla.redhat.com/show_bug.cgi?id=1623107 and https://bugzilla.redhat.com/show_bug.cgi?id=1659676

Are any of these fixed in 5.2?

robdewit, apologies for the delay in getting back on this. Yes, there were some serious memory leaks that were fixed in the glusterfs-5.5 and glusterfs-6.1 timeframes. We recommend upgrading to a newer version and testing it to get the fixes.

Hi Amar, we've been running the 6.1 release for some time now and the memory consumption is back at its previous level: close to 1 GB, but not more than that. Thanks!
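For verifying the fix after an upgrade, a minimal sketch of tracking the fuse client's resident set size over time (the mount point is taken from the report below; the pgrep pattern and sampling interval are assumptions):

```sh
# Log the glusterfs fuse client's RSS every 10 minutes; a leak shows up as
# monotonic growth, while a healthy client should plateau (here ~1 GB).
PID=$(pgrep -o -f '/usr/sbin/glusterfs .* /mnt/jf-vol0')
while kill -0 "$PID" 2>/dev/null; do
  printf '%s %s kB\n' "$(date -Is)" "$(ps -o rss= -p "$PID")"
  sleep 600
done
```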
Created attachment 1512497 [details]: statedumps

Description of problem:
The glusterfs process keeps on growing.

Version-Release number of selected component (if applicable): 5.1

How reproducible: always

Steps to Reproduce:
1. Mount a gluster volume.
2. Use it.
3. Wait for the process to grow.

Actual results:
The glusterfs process grows to tens of gigabytes:

    root 24837 27.5 35.2 24051028 23167000 ? Ssl Nov29 3133:46 /usr/sbin/glusterfs --use-readdirp=off --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 --fuse-mountopts=noatime --process-name fuse --volfile-server=SERVER --volfile-id=jf-vol0 --fuse-mountopts=noatime /mnt/jf-vol0

Expected results:
glusterfs uses a reasonable amount of memory.

Additional info:
The volume contains a large number (some millions) of small files. Some of those are Python code, hence the negative-timeout mount option (Python tries to open a lot of non-existent files, effectively killing the volume's performance).

Attached are four statedumps. I've added redacted versions where unchanged or fluctuating values are left out. If I check them with vimdiff, it looks like some of the values only keep on growing.
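The mount options in the ps output above map onto plain mount(8) options, and gluster clients write a statedump on SIGUSR1. A minimal sketch of reproducing the setup and collecting comparable dumps, assuming SERVER and jf-vol0 as in the report, the default statedump directory (which may differ by distribution), and hypothetical dump filenames:

```sh
# Mount the volume with the timeouts from the report (values taken from the
# ps output above; SERVER is a placeholder for the actual volfile server).
mount -t glusterfs \
  -o noatime,use-readdirp=off,attribute-timeout=600,entry-timeout=600,negative-timeout=600 \
  SERVER:/jf-vol0 /mnt/jf-vol0

# Trigger a statedump of the fuse client: glusterfs writes one on SIGUSR1,
# by default under /var/run/gluster as glusterdump.<pid>.dump.<timestamp>.
kill -USR1 "$(pgrep -o -f '/usr/sbin/glusterfs .* /mnt/jf-vol0')"

# Rough comparison of two dumps (hypothetical filenames): statedumps record
# per-type memory accounting as "size=<bytes>" lines, so summing them gives
# a crude total that should not grow without bound between dumps.
for f in glusterdump.24837.dump.1 glusterdump.24837.dump.2; do
  printf '%s: %s bytes\n' "$f" "$(grep '^size=' "$f" | cut -d= -f2 | paste -sd+ - | bc)"
done
```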