Bug 1470170 - mem-pool: mem_pool_fini() doesn't release entire memory allocated
Summary: mem-pool: mem_pool_fini() doesn't release entire memory allocated
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Niels de Vos
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: 1425623 1471137 1473191
 
Reported: 2017-07-12 13:30 UTC by Soumya Koduri
Modified: 2017-09-05 17:36 UTC (History)
2 users

Fixed In Version: glusterfs-3.12.0
Clone Of:
Environment:
Last Closed: 2017-09-05 17:36:59 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1461543 0 unspecified CLOSED [Ganesha] : Ganesha occupies increased memory (RES size) even when all the files are deleted from the mount 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1477668 0 unspecified CLOSED Cleanup retired mem-pool allocations 2021-02-22 00:41:40 UTC

Internal Links: 1461543 1477668

Description Soumya Koduri 2017-07-12 13:30:46 UTC
Description of problem:

At the moment, all mem_pool_fini() does is exit and clean up the sweeper thread. That does not ensure that all of the allocated memory is released: some allocations may still be on the hot list, and for allocations on the cold list the sweeper thread may not have had a chance to run through them yet.
Hence we need to iterate through all of the per-thread mem-pools and clean them up as part of fini().
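
A hypothetical sketch of the cleanup being asked for here, walking every per-thread mem-pool and freeing both the hot and the cold objects. All structure and field names below are assumptions for illustration, not the actual GlusterFS code:

    #include <stdlib.h>

    /* stand-in for a pooled object on a hot/cold list */
    struct pool_obj {
            struct pool_obj *next;
    };

    /* stand-in for one per-thread pool */
    struct per_thread_pool {
            struct pool_obj *hot_list;   /* objects returned recently */
            struct pool_obj *cold_list;  /* objects waiting for the sweeper */
    };

    static void
    free_obj_list (struct pool_obj *obj)
    {
            while (obj) {
                    struct pool_obj *next = obj->next;
                    free (obj);
                    obj = next;
            }
    }

    /* stopping the sweeper thread alone is not enough: objects may still
     * sit on the hot or cold lists of every per-thread pool, so fini()
     * needs to walk the pools and release them explicitly */
    static void
    pools_fini_sketch (struct per_thread_pool *pools, size_t npools)
    {
            for (size_t i = 0; i < npools; i++) {
                    free_obj_list (pools[i].hot_list);
                    free_obj_list (pools[i].cold_list);
                    pools[i].hot_list = pools[i].cold_list = NULL;
            }
    }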

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Niels de Vos 2017-07-12 18:02:48 UTC
I'm looking into this with help from https://github.com/gluster/gluster-debug-tools/tree/master/gfapi-load-volfile

Comment 2 Worker Ant 2017-07-13 11:58:57 UTC
REVIEW: https://review.gluster.org/17768 (mem-pool: free objects from pools on mem_pools_fini()) posted (#1) for review on master by Niels de Vos (ndevos)

Comment 3 Worker Ant 2017-07-14 09:31:22 UTC
REVIEW: https://review.gluster.org/17768 (mem-pool: free objects from pools on mem_pools_fini()) posted (#2) for review on master by Niels de Vos (ndevos)

Comment 4 Worker Ant 2017-07-14 13:37:50 UTC
REVIEW: https://review.gluster.org/17768 (mem-pool: free objects from pools on mem_pools_fini()) posted (#3) for review on master by Niels de Vos (ndevos)

Comment 5 Worker Ant 2017-07-14 13:37:54 UTC
REVIEW: https://review.gluster.org/17778 (mem-pool: remove references to unused ctx->mempool_list) posted (#1) for review on master by Niels de Vos (ndevos)

Comment 6 Worker Ant 2017-07-14 18:57:24 UTC
REVIEW: https://review.gluster.org/17768 (mem-pool: free objects from pools on mem_pools_fini()) posted (#4) for review on master by Niels de Vos (ndevos)

Comment 7 Worker Ant 2017-07-14 18:57:28 UTC
REVIEW: https://review.gluster.org/17779 (mem-pool: initialize pthread_key_t pool_key in mem_pool_init_early()) posted (#1) for review on master by Niels de Vos (ndevos)

Comment 8 Worker Ant 2017-07-14 19:15:51 UTC
REVIEW: https://review.gluster.org/17779 (mem-pool: initialize pthread_key_t pool_key in mem_pool_init_early()) posted (#2) for review on master by Niels de Vos (ndevos)

Comment 9 Worker Ant 2017-07-14 19:15:55 UTC
REVIEW: https://review.gluster.org/17768 (mem-pool: free objects from pools on mem_pools_fini()) posted (#5) for review on master by Niels de Vos (ndevos)

Comment 10 Worker Ant 2017-07-14 19:18:04 UTC
COMMIT: https://review.gluster.org/17778 committed in master by Niels de Vos (ndevos) 
------
commit 6ed8518cfa352f53b84a511f3618b50ba4a42853
Author: Niels de Vos <ndevos>
Date:   Fri Jul 14 13:17:48 2017 +0200

    mem-pool: remove references to unused ctx->mempool_list
    
    Only the old (removed) implementation of mem-pools provided access to
    the ctx->mempool_list to facilitate state-dumps. The usage of the
    mempool_list has been disabled with "#if defined(OLD_MEM_POOLS)", but a
    few occurrences were missed.
    
    Change-Id: I912fb63830efc06247eb0c6551d198271ee57a86
    BUG: 1470170
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: https://review.gluster.org/17778
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jeff.us>
    CentOS-regression: Gluster Build System <jenkins.org>

Comment 11 Worker Ant 2017-07-15 08:33:18 UTC
REVIEW: https://review.gluster.org/17779 (mem-pool: initialize pthread_key_t pool_key in mem_pool_init_early()) posted (#3) for review on master by Niels de Vos (ndevos)

Comment 12 Worker Ant 2017-07-15 08:33:22 UTC
REVIEW: https://review.gluster.org/17768 (mem-pool: free objects from pools on mem_pools_fini()) posted (#6) for review on master by Niels de Vos (ndevos)

Comment 13 Worker Ant 2017-07-17 10:12:26 UTC
REVIEW: https://review.gluster.org/17779 (mem-pool: initialize pthread_key_t pool_key in mem_pool_init_early()) posted (#4) for review on master by Niels de Vos (ndevos)

Comment 14 Worker Ant 2017-07-17 10:12:30 UTC
REVIEW: https://review.gluster.org/17768 (mem-pool: free objects from pools on mem_pools_fini()) posted (#7) for review on master by Niels de Vos (ndevos)

Comment 15 Worker Ant 2017-07-18 12:31:34 UTC
REVIEW: https://review.gluster.org/17779 (mem-pool: initialize pthread_key_t pool_key in mem_pool_init_early()) posted (#5) for review on master by Niels de Vos (ndevos)

Comment 16 Worker Ant 2017-07-18 12:31:38 UTC
REVIEW: https://review.gluster.org/17768 (mem-pool: free objects from pools on mem_pools_fini()) posted (#8) for review on master by Niels de Vos (ndevos)

Comment 17 Worker Ant 2017-07-19 12:49:35 UTC
REVIEW: https://review.gluster.org/17779 (mem-pool: initialize pthread_key_t pool_key in mem_pool_init_early()) posted (#6) for review on master by Niels de Vos (ndevos)

Comment 18 Worker Ant 2017-07-19 20:18:28 UTC
COMMIT: https://review.gluster.org/17779 committed in master by Jeff Darcy (jeff.us) 
------
commit 1e8e6264033669332d4cfa117faf678d7631a7b1
Author: Niels de Vos <ndevos>
Date:   Fri Jul 14 18:35:10 2017 +0200

    mem-pool: initialize pthread_key_t pool_key in mem_pool_init_early()
    
    It is not possible to call pthread_key_delete for the pool_key that is
    initialized in the constructor for the memory pools. This makes it
    difficult to do a full cleanup of all the resources in mem_pools_fini().
    For this, the initialization of pool_key should be moved to
    mem_pool_init().
    
    However, the glusterfsd binary has a rather complex initialization
    procedure. The memory pools need to get initialized partially to get
    mem_get() functionality working. But, the pool_sweeper thread can get
    killed in case it is started before glusterfsd daemonizes.
    
    In order to solve this, mem_pools_init() is split into two pieces:
    1. mem_pools_init_early() for initializing the basic structures
    2. mem_pools_init_late() to start the pool_sweeper thread
    
    With the split of mem_pools_init(), and placing the pthread_key_create()
    in mem_pools_init_early(), it is now possible to correctly cleanup the
    pool_key with pthread_key_delete() in mem_pools_fini().
    
    It seems that there was no memory pool initialization in the CLI. This
    has been added as well now. Without it, the CLI will not be able to call
    mem_get() successfully which results in a hang of the process.
    
    Change-Id: I1de0153dfe600fd79eac7468cc070e4bd35e71dd
    BUG: 1470170
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: https://review.gluster.org/17779
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jeff.us>
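
A minimal sketch of the early/late initialization split described in the commit message above. The function names follow the commit; the internals (key handling, sweeper stub) are assumptions for illustration only:

    #include <pthread.h>

    static pthread_key_t pool_key;
    static int           pool_key_initialized;
    static pthread_t     sweeper_tid;

    /* placeholder for the periodic cold-list cleanup loop */
    static void *
    pool_sweeper (void *arg)
    {
            (void) arg;
            return NULL;
    }

    /* basic structures only, safe to call before daemonizing */
    void
    mem_pools_init_early (void)
    {
            if (!pool_key_initialized) {
                    pthread_key_create (&pool_key, NULL);
                    pool_key_initialized = 1;
            }
    }

    /* start the sweeper thread only after glusterfsd has daemonized,
     * so the thread does not get lost across the fork() */
    void
    mem_pools_init_late (void)
    {
            pthread_create (&sweeper_tid, NULL, pool_sweeper, NULL);
    }

    /* because pool_key was created in mem_pools_init_early() rather than
     * in a library constructor, it can now be released here */
    void
    mem_pools_fini (void)
    {
            if (pool_key_initialized) {
                    pthread_key_delete (pool_key);
                    pool_key_initialized = 0;
            }
    }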

Comment 19 Worker Ant 2017-07-20 11:35:27 UTC
COMMIT: https://review.gluster.org/17768 committed in master by Jeff Darcy (jeff.us) 
------
commit 8a09d78076cf506f0750cccd63cc983496473cf3
Author: Niels de Vos <ndevos>
Date:   Thu Jul 13 13:44:19 2017 +0200

    mem-pool: free objects from pools on mem_pools_fini()
    
    When using a minimal gfapi application that only initializes a small
    graph (sink, shard and meta xlators) the following memory leaks are
    reported by Valgrind:
    
      HEAP SUMMARY:
          in use at exit: 322,976 bytes in 75 blocks
        total heap usage: 684 allocs, 609 frees, 2,092,116 bytes allocated
    
    With this change, the mem-pools are cleaned up on calling of
    mem_pools_fini() and the objects in the pool are free'd.
    
      HEAP SUMMARY:
          in use at exit: 315,265 bytes in 58 blocks
        total heap usage: 684 allocs, 626 frees, 2,092,079 bytes allocated
    
    This information was gathered with `./run-xlator.sh features/shard` that
    comes with `gfapi-load-volfile` from gluster-debug-tools.
    
    While working on the free'ing of the per_thread_pool_list_t structures,
    it became apparent that GF_CALLOC() in mem_get_pool_list() gets
    redirected to a standard calloc() without prepending the Gluster
    specific memory header. This is because mem_pools_init() gets called
    before THIS->ctx is valid, so it is not possible to check if memory
    accounting is enabled or not. Because of this, the GF_CALLOC() call in
    mem_get_pool_list() has been replaced by CALLOC() to prevent potential
    mismatches between the allocation/free'ing of per_thread_pool_list_t
    structures.
    
    Change-Id: Id6f558816f399b0c613d74df36deac2300b6dd98
    BUG: 1470170
    URL: https://github.com/gluster/gluster-debug-tools
    Signed-off-by: Niels de Vos <ndevos>
    Reviewed-on: https://review.gluster.org/17768
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jeff.us>
    Reviewed-by: soumya k <skoduri>
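
An illustrative sketch of the allocation/free pairing concern mentioned in the commit message above: if a buffer is allocated with a plain calloc() but released through an accounting-aware path that expects a prepended header (or vice versa), the pointer arithmetic goes wrong. All names below are hypothetical, not the GlusterFS GF_CALLOC implementation:

    #include <stdlib.h>

    /* stand-in for the Gluster memory-accounting header */
    struct mem_header {
            size_t size;
    };

    /* prepend a header and return the payload pointer */
    static void *
    accounted_calloc (size_t nmemb, size_t size)
    {
            struct mem_header *hdr = calloc (1, sizeof (*hdr) + nmemb * size);
            if (!hdr)
                    return NULL;
            hdr->size = nmemb * size;
            return hdr + 1;
    }

    /* walks back to the header; only valid for accounted_calloc() memory */
    static void
    accounted_free (void *ptr)
    {
            if (ptr)
                    free ((struct mem_header *) ptr - 1);
    }

    int
    main (void)
    {
            /* correct pairing: header-aware alloc with header-aware free */
            void *a = accounted_calloc (1, 64);
            accounted_free (a);

            /* the mismatch the commit avoids: memory from a plain calloc()
             * must be released with a plain free(), never the header-aware
             * path above */
            void *b = calloc (1, 64);
            free (b);
            return 0;
    }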

Comment 20 Shyamsundar 2017-09-05 17:36:59 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.0, please open a new bug report.

glusterfs-3.12.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-September/000082.html
[2] https://www.gluster.org/pipermail/gluster-users/

