Bug 1133073 - High memory usage by glusterfs processes
Summary: High memory usage by glusterfs processes
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: 3.4.4
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-08-22 15:43 UTC by Igor Biryulin
Modified: 2015-10-07 14:00 UTC (History)
4 users (show)

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2015-10-07 14:00:17 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
Red Hat Bugzilla 1127140 (CLOSED): memory leak (last updated 2021-02-22 00:41:40 UTC)

Internal Links: 1127140

Description Igor Biryulin 2014-08-22 15:43:42 UTC
Description of problem:
I have a cluster of 2 replica nodes.
The cluster has 1 brick/volume: 44 TB capacity, 27 TB used.
If I run a recursive listing of files (ls -laR) on the volume a few times, the glusterfs processes allocate memory and never release it. After several recursive-listing cycles, all memory is consumed by glusterfs; it can exceed 50 GB on a server with 64 GB of RAM, at which point the OOM killer is triggered.

I tried disabling the options:
performance.quick-read
performance.io-cache
but that did not help.
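For reference, the two options above would be toggled like this (volume name repofiles taken from the gluster volume info output in this report; run on any server node):

```shell
# Disable the client-side caches most often suspected of holding memory.
gluster volume set repofiles performance.quick-read off
gluster volume set repofiles performance.io-cache off

# Verify the change took effect.
gluster volume info repofiles | grep -E 'quick-read|io-cache'
```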

echo 2 > /proc/sys/vm/drop_caches
did not help in this situation either.

As a temporary workaround, I unmount and remount the volume whenever memory is exhausted.
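The remount workaround can be scripted; the mountpoint below is a placeholder (the actual client mountpoint is not given in this report), and the server/volume names are taken from the volume info shown later:

```shell
#!/bin/sh
# Remount the glusterfs FUSE client to release the leaked memory.
# MOUNTPOINT is hypothetical; substitute the real client mountpoint.
MOUNTPOINT=/mnt/repofiles
SERVER=xxx1
VOLUME=repofiles

# -l (lazy) detaches the mount even while files are still open.
umount -l "$MOUNTPOINT"
mount -t glusterfs "$SERVER:/$VOLUME" "$MOUNTPOINT"
```

Lazy unmount avoids "target is busy" errors, but processes holding open files on the old mount will see I/O errors until they reopen them.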


Version-Release number of selected component (if applicable):
OS: Ubuntu 12.04.4 LTS
Kernel: 3.10.28-6
Packages:
ii  glusterfs-client                     3.4.4-5                           clustered file-system (client package)
ii  glusterfs-common                     3.4.4-5                           GlusterFS common libraries and translator modules
ii  glusterfs-server                     3.4.4-5                           clustered file-system (server package)


How reproducible:
Run a recursive listing of the files on the volume several times.

Steps to Reproduce:
1. Create a replica set from 2 nodes with 1 brick each.
2. Add files until the volume is heavily used (ours has 44 TB total capacity, 27 TB used).
3. Run a recursive listing over the whole mounted volume several times.
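The steps above can be sketched as a loop that also samples the FUSE client's resident set size between passes; the mountpoint is a placeholder, and the RSS helper works for any PID on Linux:

```shell
#!/bin/sh
# Repro sketch: run repeated recursive listings and watch the glusterfs
# client's RSS grow. MOUNTPOINT is hypothetical; substitute the real path.
MOUNTPOINT=${1:-/mnt/repofiles}

# Helper: print the resident set size (KiB) of a PID, from /proc.
rss_kib() {
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

pid=$(pgrep -f "glusterfs.*$MOUNTPOINT" | head -n1)
for i in 1 2 3 4 5; do
    ls -laR "$MOUNTPOINT" > /dev/null 2>&1
    [ -n "$pid" ] && echo "pass $i: glusterfs RSS $(rss_kib "$pid") KiB"
done
```

On an affected client, the reported RSS climbs with each pass and never drops back after the listings finish.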

Actual results:
Memory (RAM) exhaustion; eventually the OOM killer is triggered.

Expected results:
Memory used by the glusterfs processes is released after the listing completes.

Additional info:
# gluster volume info
 
Volume Name: repofiles
Type: Replicate
Volume ID: 34f85192-2b9a-4d36-8468-3576d0cc922a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: xxx1:/mnt/gluster_source/brick
Brick2: xxx2:/mnt/gluster_source/brick
Options Reconfigured:
performance.read-ahead: on
performance.write-behind-window-size: 4MB
nfs.disable: on
performance.nfs.stat-prefetch: off
performance.nfs.io-threads: off
performance.nfs.read-ahead: on
performance.nfs.io-cache: on
performance.stat-prefetch: on
performance.client-io-threads: off
performance.io-thread-count: 32
performance.quick-read: off
performance.io-cache: off
performance.cache-size: 6442450944
performance.nfs.quick-read: on

Comment 1 meher.gara@gmail.com 2014-10-06 20:16:12 UTC
I have the exact same issue on a client host with this package lineup:

glusterfs-libs-3.4.2-1.el6.x86_64
glusterfs-server-3.4.2-1.el6.x86_64
glusterfs-3.4.2-1.el6.x86_64
glusterfs-cli-3.4.2-1.el6.x86_64
glusterfs-fuse-3.4.2-1.el6.x86_64
glusterfs-api-3.4.2-1.el6.x86_64


Memory usage is 24 GB so far for this client process:
/usr/sbin/glusterfs --read-only --volfile-id=/gv0 --volfile-server=192.168.131.153 /mnt/gv0

I used the same workaround (umount/mount) to "fix" the issue.

Any help will be appreciated.

Comment 2 Niels de Vos 2015-05-17 22:00:05 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained, at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. If updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 3 Kaleb KEITHLEY 2015-10-07 14:00:17 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen it and change the version, or open a new bug.

