Bug 1127140 - memory leak
Summary: memory leak
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: 3.7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kaleb KEITHLEY
QA Contact:
URL:
Whiteboard:
Duplicates: 1023191 1165429
Depends On:
Blocks: glusterfs-3.4.7 1199303
 
Reported: 2014-08-06 08:56 UTC by Joe Julian
Modified: 2017-03-08 10:58 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-08 10:58:26 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
compute021 dump file (234.87 KB, application/octet-stream)
2014-08-06 10:05 UTC, Joe Julian
glusterdump of my client (213.14 KB, text/plain)
2014-08-06 18:55 UTC, Jochen Lillich


Links
System           ID      Priority    Status Summary                                              Last Updated
Red Hat Bugzilla 1023191 unspecified CLOSED glusterfs consuming a large amount of system memory  2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1126831 high        CLOSED Memory leak in GlusterFs client                      2023-09-14 02:45:09 UTC
Red Hat Bugzilla 1133073 unspecified CLOSED High memory usage by glusterfs processes             2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1165429 unspecified CLOSED Gluster Fuse high memory consumption                 2021-02-22 00:41:40 UTC

Internal Links: 1023191 1126831 1133073 1165429

Description Joe Julian 2014-08-06 08:56:57 UTC
I've got a memory leak that I'm not entirely sure how to reproduce.

This is v3.4.4 with http://review.gluster.org/8029

Perhaps a duplicate of bz 1112844?

Comment 1 Joe Julian 2014-08-06 10:05:25 UTC
Created attachment 924419 [details]
compute021 dump file

Comment 2 Jochen Lillich 2014-08-06 18:55:32 UTC
Created attachment 924569 [details]
glusterdump of my client

This 3.4.4 client had consumed ~20 GB of RAM when I triggered the dump.
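
For reference, a state dump like this can usually be triggered by sending SIGUSR1 to the glusterfs client process; a minimal sketch, assuming a single FUSE client on the host (the dump file lands in /var/run/gluster or /tmp, depending on the version):

$ kill -USR1 "$(pidof glusterfs)"                      # ask the client to write a statedump
$ ls /var/run/gluster /tmp/glusterdump.* 2>/dev/null   # locate the resulting dump file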

Comment 3 Kaleb KEITHLEY 2014-08-29 14:14:31 UTC
Re: Perhaps a duplicate of bz 1112844?

1112844 was fixed in 3.4.5. Are you still experiencing the leak with 3.4.5?

Comment 4 Joe Julian 2014-09-03 13:33:06 UTC
Yes, still leaking with 3.4.5.

Comment 5 Jochen Lillich 2014-09-13 17:06:36 UTC
3.4.5 actually solved our memory leak problem.

Thanks, Jochen

Comment 6 Kaleb KEITHLEY 2014-10-16 15:47:01 UTC
*** Bug 1023191 has been marked as a duplicate of this bug. ***

Comment 7 Kaleb KEITHLEY 2014-11-25 12:22:23 UTC
*** Bug 1165429 has been marked as a duplicate of this bug. ***

Comment 8 Niels de Vos 2015-05-17 21:59:03 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains supported releases back to N-2. The two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release and will not be fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If so, update the bug with a note, and update the version if you can. If updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 9 Marcin 2015-08-10 14:48:01 UTC
For the past few months I have been running version 3.5.2 and the issue existed there. Note how much memory was committed after glusterd was left running for a few months: almost 350 GB!

http://i.imgur.com/K9eRg4w.png

Last week we upgraded to version 3.7.3, and so far it looks like nothing has changed; the glusterd process keeps committing memory.

http://i.imgur.com/zLAcfDS.png
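
To quantify the growth over time, a minimal sketch that logs the resident memory of the process once a minute, assuming a Linux host and a single glusterd process (substitute glusterfs to watch a client instead):

$ while true; do echo "$(date -u +%FT%TZ) $(ps -o rss= -p "$(pidof glusterd)")"; sleep 60; done >> /tmp/glusterd-rss.log   # RSS in KB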

Comment 10 Kaleb KEITHLEY 2015-10-07 14:00:17 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen it and change the version, or open a new bug.

Comment 11 Daniel 2016-01-12 11:20:28 UTC
Bug still exists:

##
glusterfs --version
glusterfs 3.7.6 built on Nov  9 2015 15:19:41
##

 VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
17.7g  13g 1136 S  4.2 88.6  6301:05 glusterfs
##

Currently seen on 1 of 4 nodes. The other nodes are running fine, but once this node starts swapping, performance drops.

Best regards.

Comment 12 Neil Caldwell 2016-02-16 23:03:42 UTC
Bug still exists for us too:

$ glusterfs --version
glusterfs 3.6.3 built on Apr 23 2015 16:12:23
...

server:/glusvol on /mnt/glusterfs type fuse.glusterfs (rw,allow_other,max_read=131072)

This mount, backed by a single glusterfs process, will use up the full 64 GB of RAM over roughly a week. Then swapping starts, and users report slow performance and pausing. We have to log all users off the file share and unmount and remount the volume (sketched below), and then the memory comes back.
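
The remount workaround as a minimal sketch, using the mount shown in this comment (assuming nothing still holds files open on the mount point):

$ umount /mnt/glusterfs                                # stops the leaking client process
$ mount -t glusterfs server:/glusvol /mnt/glusterfs    # fresh client process, memory reclaimed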

Are there any files I can send to help you guys analyze the issue?

Warm Regards, Neil

Comment 13 Hans Henrik Happe 2016-03-22 17:10:42 UTC
We are at 3.5.7 and see the same problem. It seems to deallocate some memory when utilization is high, but we have seen that it is not fast enough to avoid the OOM killer, which goes ahead and kills glusterfs.
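
A quick way to confirm the OOM kills after the fact, assuming standard kernel logging on the client:

$ dmesg | grep -iE 'out of memory|killed process'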

We will start some 3.7 testing soon.

Cheers, Hans Henrik

Comment 14 Joe Julian 2016-03-22 17:14:42 UTC
3.6.9 is the latest 3.6 release. You should start by testing from there.
The same goes for the 3.5 report: please test against 3.5.9.

Comment 15 Hans Henrik Happe 2016-03-22 17:33:04 UTC
Well, the changes between 3.5.7 and 3.5.9 do not indicate a fix. Also, the community releases still have 3.5.7 marked as LATEST.

Why shouldn't we test 3.7 instead of 3.6? It will be a separate test system.

Comment 16 Kaushal 2017-03-08 10:58:26 UTC
This bug is being closed because GlusterFS-3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

