Bug 1261234 - Possible memory leak during rebalance with large quantity of files
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: distribute
Version: 3.7.3
Hardware: x86_64 Linux
Priority: urgent
Severity: high
Assigned To: Susant Kumar Palai
Keywords: Triaged
Depends On:
Blocks: 1266877 1272933
 
Reported: 2015-09-08 21:22 EDT by Max Gashkov
Modified: 2016-04-28 18:48 EDT (History)
7 users

See Also:
Fixed In Version: glusterfs-3.7.5
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
: 1266877
Environment:
Last Closed: 2015-10-14 06:27:30 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
statedump of rebalance process (33.45 KB, text/plain)
2015-09-08 21:22 EDT, Max Gashkov

Description Max Gashkov 2015-09-08 21:22:56 EDT
Created attachment 1071537
statedump of rebalance process

Description of problem:
Gluster distributed volume with 4 bricks fails to rebalance due to memory exhaustion.

I have a Gluster distributed volume with 4 bricks on one physical server (this seems strange, but there are reasons for it). The bricks are formatted with ext4. The volume spans 57T of storage space and currently holds ~2.5T in 30M files, mostly located on brick 1. A rebalance fix-layout completed successfully, but the main rebalance fails to complete because the server runs out of memory.

I've tried running the following to free reclaimable dentry and inode caches:
echo 2 > /proc/sys/vm/drop_caches

After approximately 24 hours the server starts thrashing.
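A minimal sketch for tracking the memory decline described above. The polling interval and the repeated drop_caches step are assumptions, so the loop itself is left commented out; only the helper runs as written:

```shell
# Read MemAvailable (in KB) from /proc/meminfo to watch for the
# slide toward swapping described above.
mem_avail_kb() {
    awk '/^MemAvailable:/ {print $2}' /proc/meminfo
}

# Hypothetical monitoring loop: log available memory every 5 minutes
# and optionally repeat the cache drop tried above (needs root).
# while sleep 300; do
#     echo "$(date +%s) $(mem_avail_kb)"
#     echo 2 > /proc/sys/vm/drop_caches
# done

mem_avail_kb
```

If available memory keeps shrinking even after cache drops, the pressure is coming from process heap growth rather than the page/slab caches.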

Version-Release number of selected component (if applicable):
glusterfs 3.7.3 built on Jul 28 2015 14:28:57

How reproducible:
Always

Steps to Reproduce:
1. Start rebalance
2. Wait ~24 hours

Actual results:
Server starts thrashing due to memory exhaustion.

Expected results:
Memory occupied by Gluster remains relatively constant.
Comment 1 Susant Kumar Palai 2015-09-16 05:26:12 EDT
Hi Max,
Can you share the rebalance logs? What was the memory usage of the rebalance process when it was OOM-killed?
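One way to capture the requested memory figure is to sample the resident set size of the rebalance process over time. A sketch, where the process-name pattern is an assumption (check `ps aux | grep rebalance` for the actual command line first):

```shell
# RSS in KB for a given PID, as reported by ps.
sample_rss_kb() {
    ps -o rss= -p "$1" | tr -d ' '
}

# Hypothetical usage against the rebalance process: log a timestamped
# RSS reading once a minute so growth can be attached to this bug.
# PID=$(pgrep -f 'glusterfs.*rebalance' | head -n 1)
# while sleep 60; do echo "$(date +%s) $(sample_rss_kb "$PID")"; done

# Demonstrated here on the current shell's own PID:
sample_rss_kb $$
```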
Comment 2 Max Gashkov 2015-09-16 05:31:52 EDT
Hi,

The rebalance log is rather large (about 600M); I can grep for specific strings if needed, or share the whole file privately (please indicate a method for contacting you directly).

OOM didn't kill the process, I did. It was around 2G RES at the time, and together with the other glusterfsd processes it started swapping to the point where the system became unstable.
Comment 3 Susant Kumar Palai 2015-09-16 05:35:15 EDT
(In reply to Max Gashkov from comment #2)
> Hi,
> 
> Rebalance log is rather large (about 600M), I can grep for specific strings
> if needed or share whole file privately (please indicate method for
> contacting you directly).
Can you grep for error messages in the rebalance log and post an update?
For contact: IRC channel #gluster, nick: spalai
> 
> OOM didn't kill the process, I did. It was around 2G RES at the time and
> with the other glusterfsd processes it started swapping to the point when
> system became unstable.
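A sketch of the requested grep. Gluster log lines carry a severity letter after the timestamp (I = info, W = warning, E = error); the two sample lines below are invented for illustration, since the real rebalance log lives under /var/log/glusterfs/ and is named after the volume:

```shell
# Fabricated two-line sample standing in for the real rebalance log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[2015-09-08 21:22:56.000000] I [dht-rebalance.c:100:fn] 0-vol-dht: migrating data
[2015-09-08 21:22:57.000000] E [dht-rebalance.c:200:fn] 0-vol-dht: migrate failed
EOF

# Error-level lines can be pulled out by the severity letter:
grep ' E ' "$LOG"
```

Against the real 600M log, piping through `tail -n 50` (or `sort | uniq -c` on the message IDs) keeps the output small enough to paste into the bug.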
Comment 4 Pranith Kumar K 2015-10-14 06:27:30 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
Comment 5 Pranith Kumar K 2015-10-14 06:37:24 EDT
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.5, please open a new bug report.

glusterfs-3.7.5 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-October/023968.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
