Bug 1477668 - Cleanup retired mem-pool allocations
Summary: Cleanup retired mem-pool allocations
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Niels de Vos
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks: 1417151 1461543 1481398
 
Reported: 2017-08-02 15:18 UTC by Niels de Vos
Modified: 2017-09-21 05:04 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.8.4-38
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-21 05:04:21 UTC
Embargoed:


Attachments: none


Links
- Red Hat Bugzilla 1470170 (priority unspecified, status CLOSED): mem-pool: mem_pool_fini() doesn't release entire memory allocated (last updated 2021-02-22 00:41:40 UTC)
- Red Hat Product Errata RHBA-2017:2774 (priority normal, status SHIPPED_LIVE): glusterfs bug fix and enhancement update (last updated 2017-09-21 08:16:29 UTC)

Internal Links: 1470170

Description Niels de Vos 2017-08-02 15:18:11 UTC
Description of problem:
The new mem-pools have a new "pool_sweeper" thread that cleans up the cold and hot lists of unallocated objects. This thread is not started for gfapi applications or when libgfchangelog is used. Without this thread running, memory is never freed from the mem-pools.
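
For context, a minimal gfapi consumer looks like the sketch below (illustrative only; "testvol" and "server1" are placeholders). glfs_new()/glfs_init() and glfs_fini() are the points where libglusterfs is expected to set up and tear down its mem-pool management, including the pool_sweeper thread:

/* Minimal gfapi consumer (illustrative sketch; volume and host names are
 * placeholders).  Without a running pool_sweeper thread, any memory the
 * library returns to its mem-pools is never reaped. */
#include <stdio.h>
#include <stdlib.h>
#include <glusterfs/api/glfs.h>

int
main (void)
{
        glfs_t *fs = glfs_new ("testvol");
        if (!fs)
                return EXIT_FAILURE;

        glfs_set_volfile_server (fs, "tcp", "server1", 24007);

        if (glfs_init (fs) != 0) {
                fprintf (stderr, "glfs_init() failed\n");
                glfs_fini (fs);
                return EXIT_FAILURE;
        }

        /* ... I/O here exercises mem-pool allocations internally ... */

        glfs_fini (fs);   /* should release pooled memory once the fix is in */
        return EXIT_SUCCESS;
}

Such a program can be built against gfapi with, for example:
  gcc client.c $(pkg-config --cflags --libs glusterfs-api)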

Version-Release number of selected component (if applicable):
rhgs-3.3.0

How reproducible:
100%

Steps to Reproduce:
1. call mem_get() many times, and mem_put() as many times (see the reproducer sketch below)
2. notice that the memory consumption does not reduce
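
A rough reproducer for the steps above, written against libglusterfs' internal mem-pool API (mem-pool.h); the object type, pool size and loop count are arbitrary, and exact signatures may differ between releases:

/* Reproducer sketch: allocate and release many pooled objects.  With no
 * pool_sweeper running, the returned objects stay on the pool's cold list
 * and the process' memory consumption does not drop. */
#include <stdlib.h>
#include "mem-pool.h"

struct dummy {
        char payload[512];
};

#define NOBJS 100000

static void
exercise_pool (void)
{
        struct mem_pool *pool = mem_pool_new (struct dummy, 1024);
        void           **objs = calloc (NOBJS, sizeof (*objs));
        int              i;

        for (i = 0; i < NOBJS; i++)             /* step 1: mem_get() many times */
                objs[i] = mem_get (pool);

        for (i = 0; i < NOBJS; i++)             /* ... and mem_put() as many times */
                mem_put (objs[i]);

        /* step 2: memory consumption stays at its peak here unless the
         * pool_sweeper thread is running to reap the cold list */

        free (objs);
        mem_pool_destroy (pool);
}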

Actual results:
Memory consumption peaks and does not reduce (for mem-pool allocations)

Expected results:
Memory should be freed once the "pool_sweeper" thread goes through the objects that are on the cold list.

Additional info:
This is similar to bug 1470170 where the memory from mem-pools is not released when mem_pools_fini() is called.

Comment 4 Niels de Vos 2017-08-02 15:41:26 UTC
Patches that need backporting (if missing):

- libglusterfs: add mem_pools_fini
  https://review.gluster.org/17662

- gfapi: add mem_pools_init and mem_pools_fini calls
  https://review.gluster.org/17666

- gfapi+libglusterfs: fix mem_pools_fini without mem_pools_init case
  https://review.gluster.org/17728

- gfapi: prevent mem-pool leak in case glfs_new_fs() fails
  https://review.gluster.org/17734

- mem-pool: initialize pthread_key_t pool_key in mem_pool_init_early()
  https://review.gluster.org/17779

- mem-pool: track and verify initialization state
  https://review.gluster.org/17915

- changelog: add mem-pool initialization
  https://review.gluster.org/17900
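
Taken together, the pattern these patches establish (sketched below from the patch titles only, not from the backported code; the function names library_setup/library_teardown are made up for illustration) is a mem_pools_init()/mem_pools_fini() pair that consumers such as gfapi and libgfchangelog call at setup and teardown, so the pool_sweeper thread is started and stopped together with the library:

/* Sketch of the init/fini pairing introduced by the patches above. */
#include "mem-pool.h"

void
library_setup (void)
{
        mem_pools_init ();      /* starts mem-pool management, incl. pool_sweeper */
        /* ... the rest of the library's initialization ... */
}

void
library_teardown (void)
{
        /* ... the rest of the library's cleanup ... */
        mem_pools_fini ();      /* stops pool_sweeper and releases pooled memory
                                 * (see bug 1470170) */
}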

Comment 7 Niels de Vos 2017-08-03 11:26:58 UTC
Although comment #4 lists all 7 patches that prevent resource leaks related to starting and stopping the pool_sweeper thread, these three are the ones that Kaleb backported for testing in bug 1461543, and they made the most difference for memory consumption:

  https://review.gluster.org/17662
  https://review.gluster.org/17666
  https://review.gluster.org/17728

The others build on these and also address the leaks in libgfchangelog and the CLI.

Comment 12 Manisha Saini 2017-08-11 06:32:23 UTC
Verified this bug on 

# rpm -qa | grep ganesha
glusterfs-ganesha-3.8.4-38.el7rhgs.x86_64
nfs-ganesha-2.4.4-16.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.4-16.el7rhgs.x86_64



To validate this, ran some manual/automation cases around HA, root-squash, ACLs, and multi-volume scenarios. Moving this bug to the verified state.

Comment 14 errata-xmlrpc 2017-09-21 05:04:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774

