Bug 988182 - OOM observed for fuse client process (glusterfs) when one brick of a replica pair was offlined and heavy I/O was in progress from the client
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: mainline
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On: 981158
Blocks: 1112844
 
Reported: 2013-07-25 02:48 UTC by Ravishankar N
Modified: 2014-06-24 19:45 UTC
CC List: 5 users

Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Clone Of: 981158
Cloned To: 1112844
Environment:
Last Closed: 2014-04-17 11:44:11 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 Anand Avati 2013-07-25 02:56:16 UTC
REVIEW: http://review.gluster.org/5392 (fuse: fix memory leak in fuse_getxattr()) posted (#1) for review on master by Ravishankar N (ravishankar)

Comment 2 Anand Avati 2013-08-02 04:27:00 UTC
REVIEW: http://review.gluster.org/5393 (afr: check for non-zero call_count before doing a stack wind) posted (#2) for review on master by Ravishankar N (ravishankar)

Comment 3 Anand Avati 2013-08-03 09:45:35 UTC
REVIEW: http://review.gluster.org/5393 (afr: check for non-zero call_count before doing a stack wind) posted (#3) for review on master by Ravishankar N (ravishankar)

Comment 4 Anand Avati 2013-08-03 16:48:51 UTC
COMMIT: http://review.gluster.org/5392 committed in master by Anand Avati (avati) 
------
commit b777fc478d74b2582671fef7cb2c55206432c2bb
Author: Ravishankar N <ravishankar>
Date:   Wed Jul 24 18:44:42 2013 +0000

    fuse: fix memory leak in fuse_getxattr()
    
    The fuse_getxattr() function was not freeing fuse_state_t, resulting in a
    memory leak. Consequently, when continuous writes (e.g. dd run in a loop)
    were issued from a FUSE mount point, the OOM killer killed the client
    process (glusterfs).
    
    Change-Id: I6ded1a4c25d26ceab0cb3b89ac81066cb51343ec
    BUG: 988182
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/5392
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Anand Avati <avati>

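For illustration, here is a minimal, compilable sketch of the leak pattern the commit describes and its one-line fix. The fuse_state_t layout and the helpers (get_fuse_state, free_fuse_state, the xattr-key check) are simplified stand-ins, not the actual glusterfs FUSE-bridge API:

#include <errno.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical, simplified per-request state; the real fuse_state_t in
 * the glusterfs FUSE bridge carries far more context. */
typedef struct {
        char *name;   /* requested xattr key */
} fuse_state_t;

static fuse_state_t *
get_fuse_state (const char *name)
{
        fuse_state_t *state = calloc (1, sizeof (*state));
        if (state)
                state->name = strdup (name);
        return state;
}

static void
free_fuse_state (fuse_state_t *state)
{
        if (!state)
                return;
        free (state->name);
        free (state);
}

/* Error path of a getxattr handler. Before the fix, an early return
 * skipped free_fuse_state(), so every request taking that path leaked
 * one state object; under sustained I/O (dd in a loop) the glusterfs
 * process grew until the OOM killer took it down. */
static int
fuse_getxattr (const char *name)
{
        fuse_state_t *state = get_fuse_state (name);

        if (!state)
                return -ENOMEM;

        if (strncmp (name, "user.", 5) != 0) {
                free_fuse_state (state);   /* the fix: release the state here */
                return -EOPNOTSUPP;        /* previously returned without freeing */
        }

        /* Normal path: the real code winds the FOP and frees the state
         * in its callback; elided in this sketch. */
        free_fuse_state (state);
        return 0;
}

int
main (void)
{
        /* Both paths now release their per-request state. */
        fuse_getxattr ("security.selinux");
        fuse_getxattr ("user.comment");
        return 0;
}
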
Comment 5 Anand Avati 2013-08-07 10:35:50 UTC
COMMIT: http://review.gluster.org/5393 committed in master by Anand Avati (avati) 
------
commit 0f77e30c903e6f71f30dfd6165914a43998a164f
Author: Ravishankar N <ravishankar>
Date:   Wed Jul 24 19:11:49 2013 +0000

    afr: check for non-zero call_count before doing a stack wind
    
    When one of the bricks of a 1x2 replicate volume is down, writes to the
    volume cause a race between afr_flush_wrapper() and afr_flush_cbk(). The
    latter frees the call_frame's local variables in the unwind, while the
    former accesses them in the for loop and sends a stack wind a second
    time. This causes the FUSE mount process (glusterfs) to receive a
    SIGSEGV when the corresponding unwind is hit.
    
    This patch adds the call_count check that was removed when
    afr_flush_wrapper() was introduced in commit 29619b4e.
    
    Change-Id: I87d12ef39ea61cc4c8244c7f895b7492b90a7042
    BUG: 988182
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/5393
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Reviewed-by: Anand Avati <avati>

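For illustration, a minimal, compilable sketch of the race and the fix follows. afr_local_t and the synchronous stand-in for STACK_WIND are simplified assumptions (the real wind is asynchronous, which is what opens the race window); only the snapshot-and-break on call_count reflects the actual change:

#include <stdlib.h>

/* Hypothetical, simplified frame-local state for a replicated flush. */
typedef struct {
        int  call_count;   /* replies still expected */
        int  child_count;  /* bricks in the replica set */
        int *child_up;     /* which bricks are reachable */
} afr_local_t;

/* Reply from one brick. When the last expected reply arrives, the
 * unwind frees the frame-local state -- after this, 'local' is gone. */
static void
afr_flush_cbk (afr_local_t *local)
{
        if (--local->call_count == 0) {
                free (local->child_up);
                free (local);
        }
}

/* Wind the flush to every up child. Without the snapshot-and-break,
 * the loop would re-read local->child_count (and possibly wind again)
 * after the final callback had already freed 'local' -- the SIGSEGV
 * described above. The callback runs synchronously here to make the
 * would-be use-after-free deterministic. */
static void
afr_flush_wrapper (afr_local_t *local)
{
        int i, call_count;

        call_count = local->call_count;        /* snapshot before winding */
        for (i = 0; i < local->child_count; i++) {
                if (!local->child_up[i])
                        continue;
                afr_flush_cbk (local);         /* stands in for STACK_WIND */
                if (--call_count == 0)         /* the fix: stop after the */
                        break;                 /* last wind */
        }
}

int
main (void)
{
        /* 1x2 replica with one brick down: exactly one wind expected. */
        afr_local_t *local = calloc (1, sizeof (*local));

        local->child_count = 2;
        local->child_up    = calloc (2, sizeof (int));
        local->child_up[1] = 1;
        local->call_count  = 1;

        afr_flush_wrapper (local);   /* frees 'local' via the callback */
        return 0;
}
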
Comment 6 Niels de Vos 2014-04-17 11:44:11 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

