Description of problem:
At the moment, all mem_pool_fini() does is stop and clean up the sweeper thread. That does not ensure that all allocated memory is released: some allocations may still be on the hot list, and for those on the cold list the sweeper thread may not have had a chance to process them. We therefore need to iterate over all the per-thread mem-pools and clean them up as part of mem_pools_fini(). This is especially important for applications that call glfs_init() and glfs_fini() many times during their runtime (NFS-Ganesha, QEMU, libvirt, ...).

Version-Release number of selected component (if applicable):
Any version that supports brick-multiplexing (which introduced the new mem-pools).

How reproducible:
100%

Steps to Reproduce:
1. Call glfs_init() and glfs_fini() in an application running under Valgrind.
2. Notice the major increase in memory leaks related to mem_get().

Actual results:
There are known memory leaks in libglusterfs (which is used by all Gluster executables), but the brick-multiplex feature increased the number of leaks significantly.

Expected results:
No increase in memory leaks (ideally the number of leaks would be reduced compared to previous versions).

Additional info:
The series of patches that makes sure the allocated memory is released when mem_pools_fini() is called can be found at https://review.gluster.org/#/q/topic:bug-1470170
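To illustrate the shape of the fix, here is a minimal, self-contained C sketch of iterating over per-thread pools and freeing both their hot and cold lists. All names (per_thread_pool, pool_obj, mem_pools_fini_sketch, etc.) are hypothetical simplifications for illustration, not the actual libglusterfs data structures:

```c
/* Hypothetical, simplified model of the cleanup that mem_pools_fini()
 * needs to perform. Names and layout are illustrative only. */
#include <stdlib.h>

/* One pooled allocation; real pools chain pre-sized objects. */
struct pool_obj {
    struct pool_obj *next;
};

/* Per-thread pool: a hot list (recently used objects) and a cold list
 * (objects waiting for the sweeper thread to release them). */
struct per_thread_pool {
    struct per_thread_pool *next;
    struct pool_obj *hot_list;
    struct pool_obj *cold_list;
};

/* Global list of all per-thread pools. */
static struct per_thread_pool *pool_threads;

static void free_obj_list(struct pool_obj *obj)
{
    while (obj) {
        struct pool_obj *next = obj->next;
        free(obj);
        obj = next;
    }
}

/* The essence of the fix: besides stopping the sweeper thread, walk
 * every per-thread pool and release both lists, so objects the sweeper
 * never reached are not leaked. */
void mem_pools_fini_sketch(void)
{
    while (pool_threads) {
        struct per_thread_pool *pool = pool_threads;
        pool_threads = pool->next;
        free_obj_list(pool->hot_list);
        free_obj_list(pool->cold_list);
        free(pool);
    }
}
```

Without the iteration above, any object still sitting on a hot or cold list when the process tears down the pools is reported as a leak by Valgrind, which matches the mem_get()-related leaks described in this report.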
upstream patch(es) : https://review.gluster.org/#/q/topic:bug-1470170
These patches are already part of the release-3.12 branch and will therefore be included in the RHGS 3.4 release.