Bug 1110777 - glusterfsd OOM - using all memory when quota is enabled
Summary: glusterfsd OOM - using all memory when quota is enabled
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: vpshastry
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1108324 1111468 1111523
 
Reported: 2014-06-18 12:34 UTC by vpshastry
Modified: 2014-06-24 11:06 UTC (History)
4 users

Fixed In Version: glusterfs-3.5.1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-06-24 11:06:53 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description vpshastry 2014-06-18 12:34:48 UTC
Description of problem:

Output from this morning
========================
top - 11:03:51 up  7:28,  1 user,  load average: 0.47, 0.97, 1.02
Tasks: 234 total,   2 running, 230 sleeping,   0 stopped,   2 zombie
Cpu(s): 17.0%us, 10.4%sy,  0.0%ni, 69.4%id,  0.9%wa,  0.4%hi,  1.9%si,  0.0%st
Mem:  16334404k total, 16145520k used,   188884k free,     2476k buffers
Swap:  2097144k total,  2097144k used,        0k free,    87048k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 4015 root      20   0 17.1g  14g 1992 S 95.7 93.7 335:55.27 glusterfsd
 4264 root      20   0  384m 108m 1868 S 11.9  0.7  14:38.52 glusterfs
16487 root      20   0  585m  42m 1516 S  7.6  0.3   1:39.71 glusterfs
15156 root      20   0  341m  72m 1896 R  7.3  0.5   1:44.81 glusterfs


This has been happening consistently for the last 4 days on different machines in the same trusted pool.


How reproducible:
Very

Actual results:

glusterfsd OOMs when quota is enabled.

Expected results:

glusterfsd does not OOM when quota is enabled.

Additional info:

After turning off quota, memory usage has been contained.

Topology for volume:

Distribute set
 │     
 ├──── Replica set 0
 │      │     
 │      ├──── Brick 0:
 │      │     
 │      └──── Brick 1:
 │     
 └──── Replica set 1
        │     
        ├──── Brick 0:
        │     
        └──── Brick 1:

[2014-05-06 19:54:14.171492]  : volume quota test-vol enable : SUCCESS

[2014-05-06 19:56:47.530188]  : volume quota test-vol limit-usage / 1.2TB : SUCCESS

[2014-05-06 20:02:30.535137]  : vol set test-vol features.quota-deem-statfs on : SUCCESS

[2014-05-09 19:53:58.264764]  : volume quota test-vol limit-usage / 2.5TB : SUCCESS

REMINDER: Currently quota is disabled, since we cannot have the system go down on all the nodes again.

Comment 1 Anand Avati 2014-06-18 12:42:01 UTC
REVIEW: http://review.gluster.org/8102 (features/quota: Fix dict leak) posted (#1) for review on master by Varun Shastry (vshastry)

Comment 2 Anand Avati 2014-06-18 16:52:11 UTC
COMMIT: http://review.gluster.org/8102 committed in master by Raghavendra G (rgowdapp) 
------
commit 3dccc3da7485059996ad490d4bf9ba23693110f7
Author: Varun Shastry <vshastry>
Date:   Wed Jun 18 17:55:54 2014 +0530

    features/quota: Fix dict leak
    
    Change-Id: I971a52163c0f1a887bbb8585cd69df2339af51cb
    BUG: 1110777
    Signed-off-by: Varun Shastry <vshastry>
    Reviewed-on: http://review.gluster.org/8102
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>

Comment 3 Anand Avati 2014-06-20 11:56:53 UTC
REVIEW: http://review.gluster.org/8132 (features/quota: Fix dict leak) posted (#1) for review on release-3.5 by Varun Shastry (vshastry)

Comment 4 Anand Avati 2014-06-23 09:54:04 UTC
COMMIT: http://review.gluster.org/8132 committed in release-3.5 by Niels de Vos (ndevos) 
------
commit e82b527a09019109a07ea3e4280a1e74d9802ae7
Author: Varun Shastry <vshastry>
Date:   Wed Jun 18 17:55:54 2014 +0530

    features/quota: Fix dict leak
    
    Change-Id: Id4542d1629175cce5fec5ab8f9a5899eec48e2eb
    BUG: 1110777
    Signed-off-by: Varun Shastry <vshastry>
    Reviewed-on: http://review.gluster.org/8132
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>

Comment 5 Niels de Vos 2014-06-24 11:06:53 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.1, please reopen this bug report.

glusterfs-3.5.1 has been announced on the Gluster Users mailing list [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-June/040723.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

