Bug 1261234 - Possible memory leak during rebalance with large quantity of files
Summary: Possible memory leak during rebalance with large quantity of files
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: distribute
Version: 3.7.3
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: ---
Assignee: Susant Kumar Palai
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1266877 1272933
 
Reported: 2015-09-09 01:22 UTC by Max Gashkov
Modified: 2016-04-28 22:48 UTC
CC List: 7 users

Fixed In Version: glusterfs-3.7.5
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1266877
Environment:
Last Closed: 2015-10-14 10:27:30 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
statedump of rebalance process (33.45 KB, text/plain)
2015-09-09 01:22 UTC, Max Gashkov
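
For context, a statedump like the attached one can usually be captured by sending SIGUSR1 to the glusterfs rebalance process; the dump is written under /var/run/gluster by default. The process-matching pattern below is an assumption about how the rebalance process appears in the process list:

# Ask the rebalance process to write a statedump (it lands under
# /var/run/gluster by default); the pattern is an assumption, so verify
# it against your own process list first.
pkill -USR1 -f 'glusterfs.*rebalance'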

Description Max Gashkov 2015-09-09 01:22:56 UTC
Created attachment 1071537
statedump of rebalance process

Description of problem:
A Gluster distributed volume with 4 bricks fails to rebalance due to memory exhaustion.

I have a Gluster distributed volume with 4 bricks on a single physical server (unusual, but there are reasons for this setup). The bricks are formatted with ext4. The volume spans 57T of storage space and currently holds ~2.5T in 30M files, mostly located on brick 1. Rebalance fix-layout completed successfully, but the main rebalance fails to complete because the server runs out of memory.

I've tried running
echo 2 > /proc/sys/vm/drop_caches
but after approximately 24 hours the server starts thrashing regardless.
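
One way to track the growth while waiting is to sample the resident set size of the rebalance process periodically; a minimal sketch, assuming the rebalance process command line contains "rebalance":

# Print a timestamped RSS sample (in KB) for the rebalance process once
# a minute; the grep pattern is an assumption about the command line.
while sleep 60; do
    date
    ps -o pid=,rss=,args= -C glusterfs | grep rebalance
done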

Version-Release number of selected component (if applicable):
glusterfs 3.7.3 built on Jul 28 2015 14:28:57

How reproducible:
Always

Steps to Reproduce:
1. Start rebalance (see the CLI sketch below)
2. Wait ~24hrs
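
A minimal sketch of the CLI involved, with VOLNAME as a placeholder for the volume name:

# Start the data rebalance, then poll its progress.
# (Fix-layout alone can be run with: gluster volume rebalance VOLNAME fix-layout start)
gluster volume rebalance VOLNAME start
gluster volume rebalance VOLNAME status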

Actual results:
Server starts thrashing due to memory exhaustion.

Expected results:
Memory occupied by gluster remains relatively constant.

Comment 1 Susant Kumar Palai 2015-09-16 09:26:12 UTC
Hi Max,
Can you share the rebalance logs? What was the memory usage of the rebalance process when it was OOM-killed?

Comment 2 Max Gashkov 2015-09-16 09:31:52 UTC
Hi,

The rebalance log is rather large (about 600M); I can grep for specific strings if needed, or share the whole file privately (please indicate a method for contacting you directly).

OOM didn't kill the process; I did. It was around 2G RES at the time, and together with the other glusterfsd processes it started swapping to the point where the system became unstable.

Comment 3 Susant Kumar Palai 2015-09-16 09:35:15 UTC
(In reply to Max Gashkov from comment #2)
> Hi,
> 
> The rebalance log is rather large (about 600M); I can grep for specific
> strings if needed, or share the whole file privately (please indicate a
> method for contacting you directly).
Can you grep for error messages in the rebalance log and update the bug?
For contact: on IRC [#gluster], nick: [spalai]
> 
> OOM didn't kill the process; I did. It was around 2G RES at the time, and
> together with the other glusterfsd processes it started swapping to the
> point where the system became unstable.
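
A minimal sketch of the grep requested above, assuming the default log location and the standard gluster log format (error-level messages carry an "E" severity tag):

# Extract error-level lines from the rebalance log; the path assumes the
# default log directory and a volume-derived file name (VOLNAME is a
# placeholder).
grep -F '] E [' /var/log/glusterfs/VOLNAME-rebalance.log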

Comment 4 Pranith Kumar K 2015-10-14 10:27:30 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

Comment 5 Pranith Kumar K 2015-10-14 10:37:24 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

