Bug 1420810 - Massive xlator_t leak in graph-switch code
Summary: Massive xlator_t leak in graph-switch code
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 3.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Jeff Darcy
QA Contact:
URL:
Whiteboard:
Depends On: 1420571
Blocks:
 
Reported: 2017-02-09 14:51 UTC by Jeff Darcy
Modified: 2017-03-06 17:45 UTC

Fixed In Version: glusterfs-3.10.0
Clone Of: 1420571
Environment:
Last Closed: 2017-03-06 17:45:52 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Jeff Darcy 2017-02-09 14:51:31 UTC
+++ This bug was initially created as a clone of Bug #1420571 +++

In high-scale tests (hundreds of volumes with four bricks each) it became apparent that a frightening percentage of our memory in a brick process was being consumed by xlator_t structures, in numbers exactly proportional to the *square* of how many volumes we had.  This happens because every volume creation would result in a FETCHSPEC request being sent and acted upon for every previously existing volume.  Since the teardown of the graph used to process each such request wasn't freeing the xlator_t, this would leak every xlator_t in every temporary graph on every iteration.

The fix, quite simply, is to take advantage of the painstaking work already done and *actually free* the structure at the right point.

--- Additional comment from Worker Ant on 2017-02-08 19:56:33 EST ---

REVIEW: https://review.gluster.org/16570 (libglusterfs: fix serious leak of xlator_t structures) posted (#1) for review on master by Jeff Darcy (jdarcy)

--- Additional comment from Worker Ant on 2017-02-09 08:49:13 EST ---

COMMIT: https://review.gluster.org/16570 committed in master by Shyamsundar Ranganathan (srangana) 
------
commit 2199c688b73dfe90868f9469f92e21b0e0795e57
Author: Jeff Darcy <jdarcy>
Date:   Wed Feb 8 19:45:46 2017 -0500

    libglusterfs: fix serious leak of xlator_t structures
    
    There's a lot of logic (and some long comments) around how to free
    these structures safely, but then we didn't do it.  Now we do.
    
    Change-Id: I9731ae75c60e99cc43d33d0813a86912db97fd96
    BUG: 1420571
    Signed-off-by: Jeff Darcy <jdarcy>
    Reviewed-on: https://review.gluster.org/16570
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Poornima G <pgurusid>
    Reviewed-by: Shyamsundar Ranganathan <srangana>

Comment 1 Worker Ant 2017-02-09 14:52:43 UTC
REVIEW: https://review.gluster.org/16583 (libglusterfs: fix serious leak of xlator_t structures) posted (#1) for review on release-3.10 by Jeff Darcy (jdarcy)

Comment 2 Worker Ant 2017-02-09 21:01:49 UTC
COMMIT: https://review.gluster.org/16583 committed in release-3.10 by Shyamsundar Ranganathan (srangana) 
------
commit 226d7c442509172b2209515841ef499ec12fc9f2
Author: Jeff Darcy <jdarcy>
Date:   Wed Feb 8 19:45:46 2017 -0500

    libglusterfs: fix serious leak of xlator_t structures
    
    There's a lot of logic (and some long comments) around how to free
    these structures safely, but then we didn't do it.  Now we do.
    
    Backport of:
    > Change-Id: I9731ae75c60e99cc43d33d0813a86912db97fd96
    > BUG: 1420571
    > Reviewed-on: https://review.gluster.org/16570
    
    Change-Id: I54415b614b277224196f5723bce5a4c5a404d881
    BUG: 1420810
    Signed-off-by: Jeff Darcy <jdarcy>
    Reviewed-on: https://review.gluster.org/16583
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana>

Comment 3 Shyamsundar 2017-03-06 17:45:52 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/
