Bug 1477668 - Cleanup retired mem-pool allocations
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: core
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.3.0
Assigned To: Niels de Vos
QA Contact: Manisha Saini
Depends On:
Blocks: 1417151 1461543 1481398
 
Reported: 2017-08-02 11:18 EDT by Niels de Vos
Modified: 2017-09-21 01:04 EDT (History)
CC List: 8 users

See Also:
Fixed In Version: glusterfs-3.8.4-38
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-09-21 01:04:21 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers:
  Red Hat Product Errata RHBA-2017:2774 (priority: normal, status: SHIPPED_LIVE) - glusterfs bug fix and enhancement update - last updated 2017-09-21 04:16:29 EDT

Description Niels de Vos 2017-08-02 11:18:11 EDT
Description of problem:
The new mem-pools have a new "pool_sweeper" thread that cleans up the cold and hot lists of unallocated objects. This thread is not started for gfapi applications or when libgfchangelog is used. Without this thread running, memory is never freed from the mem-pools.
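
For illustration, here is a heavily simplified sketch of what such a sweeper loop does. The types and names below (struct pool_list, free_obj_chain, SWEEP_SECS) are made up for the example and are not the actual libglusterfs structures:

/*
 * Simplified sketch of the sweeper idea: every sweep interval, free
 * whatever sat on the cold list since the previous pass and demote
 * the hot list to cold.  Illustrative only.
 */
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

#define SWEEP_SECS 30   /* illustrative sweep interval */

/* Each retired object is chained through its first word. */
struct obj {
    struct obj *next;
};

struct pool_list {
    pthread_spinlock_t lock;
    struct obj *hot;    /* put back via mem_put() since the last sweep      */
    struct obj *cold;   /* untouched for a whole sweep cycle -> reclaimable */
    struct pool_list *next;
};

static struct pool_list *all_pools; /* registered per-thread pool lists */

static void
free_obj_chain(struct obj *o)
{
    while (o) {
        struct obj *next = o->next;
        free(o);
        o = next;
    }
}

static void *
pool_sweeper(void *arg)
{
    (void)arg;
    for (;;) {
        sleep(SWEEP_SECS);
        for (struct pool_list *p = all_pools; p; p = p->next) {
            pthread_spin_lock(&p->lock);
            struct obj *expired = p->cold; /* cold since the previous pass */
            p->cold = p->hot;              /* demote hot -> cold           */
            p->hot = NULL;
            pthread_spin_unlock(&p->lock);
            free_obj_chain(expired);       /* never happens when the
                                              thread is not started        */
        }
    }
    return NULL;
}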

Version-Release number of selected component (if applicable):
rhgs-3.3.0

How reproducible:
100%

Steps to Reproduce:
1. call mem_get() many times, then call mem_put() the same number of times
   (a sketch of such a reproducer follows below)
2. notice that the memory consumption does not go down
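
A minimal sketch of such a reproducer, assuming the libglusterfs mem-pool API (mem_pool_new(), mem_get(), mem_put()); the header path and build details are assumptions and this is not a ready-to-build test case:

/* Illustrative reproducer sketch: allocate and release many pooled
 * objects, then watch the process RSS.  This mimics a gfapi or
 * libgfchangelog consumer where nothing ever called mem_pools_init(),
 * so no pool_sweeper thread is running and the retired objects are
 * never freed.  Header path and counts are assumptions.              */
#include <stdio.h>
#include <unistd.h>

#include <glusterfs/mem-pool.h>

#define COUNT 100000

struct dummy {
    char payload[512];
};

static void *objs[COUNT];

int
main(void)
{
    struct mem_pool *pool = mem_pool_new(struct dummy, 1024);

    for (int i = 0; i < COUNT; i++)
        objs[i] = mem_get(pool);   /* step 1: many allocations       */
    for (int i = 0; i < COUNT; i++)
        mem_put(objs[i]);          /* ...and the same number of puts */

    /* step 2: memory consumption stays at its peak; with a running
     * pool_sweeper the now-cold objects would eventually be freed.  */
    printf("allocated and released %d objects, check RSS now\n", COUNT);
    sleep(60);
    return 0;
}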

Actual results:
Memory consumption peaks and does not go down again (for mem-pool allocations)

Expected results:
Memory should be freed once the "pool_sweeper" thread goes through the objects on the cold list.

Additional info:
This is similar to bug 1470170 where the memory from mem-pools is not released when mem_pools_fini() is called.
Comment 4 Niels de Vos 2017-08-02 11:41:26 EDT
Patches that need backporting (if missing); a rough sketch of the init/fini pairing they introduce follows this list:

- libglusterfs: add mem_pools_fini
  https://review.gluster.org/17662

- gfapi: add mem_pools_init and mem_pools_fini calls
  https://review.gluster.org/17666

- gfapi+libglusterfs: fix mem_pools_fini without mem_pools_init case
  https://review.gluster.org/17728

- gfapi: prevent mem-pool leak in case glfs_new_fs() fails
  https://review.gluster.org/17734

- mem-pool: initialize pthread_key_t pool_key in mem_pool_init_early()
  https://review.gluster.org/17779

- mem-pool: track and verify initialization state
  https://review.gluster.org/17915

- changelog: add mem-pool initialization
  https://review.gluster.org/17900
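
For context, a rough sketch of the kind of pairing these gfapi/changelog patches introduce. The exact entry points (e.g. mem_pools_init_early()/mem_pools_init_late()) differ between the reviews above, and the refcount and function names below are hypothetical, made up for the example:

/* Rough sketch, not the actual patch: a long-lived consumer of
 * libglusterfs must bracket its mem-pool usage so that the
 * pool_sweeper thread gets started and later stopped.  The real
 * changes live in the gerrit reviews listed above.                 */
#include <glusterfs/mem-pool.h>

static int consumers;  /* hypothetical refcount for multiple glfs objects */

void
consumer_init_like_glfs_new(void)
{
    if (consumers++ == 0)
        mem_pools_init();   /* starts the pool_sweeper thread */
}

void
consumer_fini_like_glfs_fini(void)
{
    if (--consumers == 0)
        mem_pools_fini();   /* stops the sweeper and releases pooled memory */
}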
Comment 7 Niels de Vos 2017-08-03 07:26:58 EDT
Although comment #4 lists all 7 patches that prevent resource leaks related to starting and stopping the pool_sweeper thread, these three are the ones that Kaleb backported for testing in bug 1461543 and that made the most difference to memory consumption:

  https://review.gluster.org/17662
  https://review.gluster.org/17666
  https://review.gluster.org/17728

The other patches build on these and also address the leaks in libgfchangelog and the CLI.
Comment 12 Manisha Saini 2017-08-11 02:32:23 EDT
Verified this bug on the following builds:

# rpm -qa | grep ganesha
glusterfs-ganesha-3.8.4-38.el7rhgs.x86_64
nfs-ganesha-2.4.4-16.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.4-16.el7rhgs.x86_64



To validate this, ran some manual/automation cases around HA, root-squash, ACLs, and multi-volume scenarios. Moving this bug to the verified state.
Comment 14 errata-xmlrpc 2017-09-21 01:04:21 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774
