Description of problem: Applications using libgfapi leak memory under high load when the self-heal queue is full. How reproducible: Setting cluster.heal-wait-queue-length to a small value makes the issue easier to reproduce.
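One way to exercise the problem is to lower cluster.heal-wait-queue-length on a replicated volume with pending heals (via `gluster volume set`) and then drive file access through libgfapi. Below is a minimal load-generator sketch for that purpose; the volume name "repvol", server "server1", and file paths are placeholders, and the surrounding setup (a replicated volume with files needing heal) is assumed.

    /* Minimal libgfapi load-generator sketch.  Assumes a replicated volume
     * "repvol" on "server1" with files that need healing and a small
     * cluster.heal-wait-queue-length setting.  Build with: gcc repro.c -lgfapi */
    #include <stdio.h>
    #include <fcntl.h>
    #include <glusterfs/api/glfs.h>

    int main(void)
    {
            glfs_t *fs = glfs_new("repvol");          /* placeholder volume name */
            if (!fs)
                    return 1;

            glfs_set_volfile_server(fs, "tcp", "server1", 24007);
            if (glfs_init(fs) != 0)
                    return 1;

            char buf[4096];
            /* Repeatedly open and read files; accessing files with pending
             * heals queues client-side self-heals.  Once the heal wait queue
             * is full, the leaked allocations accumulate and the process RSS
             * keeps growing. */
            for (int i = 0; ; i++) {
                    char path[64];
                    snprintf(path, sizeof(path), "/file-%d", i % 1000);
                    glfs_fd_t *fd = glfs_open(fs, path, O_RDONLY);
                    if (!fd)
                            continue;
                    glfs_read(fd, buf, sizeof(buf), 0);
                    glfs_close(fd);
            }

            glfs_fini(fs);  /* not reached in this sketch */
            return 0;
    }

With the queue length lowered, running this against the volume while monitoring the process RSS should show steady memory growth on an unpatched build.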
REVIEW: http://review.gluster.org/15968 (selfheal: fix memory leak on full shd queue) posted (#1) for review on master by Anonymous Coward (mateusz.slupny)
REVIEW: http://review.gluster.org/15968 (selfheal: fix memory leak on full shd queue) posted (#2) for review on master by Mateusz Slupny (mateusz.slupny)
REVIEW: http://review.gluster.org/15968 (selfheal: fix memory leak on client side healing queue) posted (#3) for review on master by Mateusz Slupny (mateusz.slupny)
REVIEW: http://review.gluster.org/15968 (selfheal: fix memory leak on full shd queue) posted (#4) for review on master by Mateusz Slupny (mateusz.slupny)
REVIEW: http://review.gluster.org/15968 (selfheal: fix memory leak on client side healing queue) posted (#5) for review on master by Mateusz Slupny (mateusz.slupny)
COMMIT: http://review.gluster.org/15968 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit fb95eb4da6f4fc0b9c69e3b159a2214fe47e6d1d
Author: Mateusz Slupny <mateusz.slupny>
Date:   Tue Nov 29 12:01:48 2016 +0100

    selfheal: fix memory leak on client side healing queue

    Change-Id: I2beaba829710565a3246f7449a5cd21755cf5f7d
    BUG: 1399592
    Signed-off-by: Mateusz Slupny <mateusz.slupny>
    Reviewed-on: http://review.gluster.org/15968
    Tested-by: Pranith Kumar Karampuri <pkarampu>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Reviewed-by: Ravishankar N <ravishankar>
    Smoke: Gluster Build System <jenkins.org>
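For context, the leak follows a common pattern in throttled queue code: a heal request is allocated, but when the wait queue is already at its limit the rejection path returns without releasing the allocation. The sketch below is a simplified, hypothetical illustration of that pattern and of the kind of fix applied; it is not the actual AFR code from the patch, and all names in it are made up.

    /* Hypothetical illustration of the leak pattern -- not the actual AFR
     * source.  A heal request is allocated and queued; when the wait queue
     * is full, the request must be released on the rejection path or it
     * leaks on every overflow. */
    #include <stdlib.h>

    struct heal_request {
            char gfid[16];
            /* ... call frame, locks, etc. ... */
    };

    struct heal_queue {
            size_t length;
            size_t max_length;    /* cluster.heal-wait-queue-length */
            /* ... list head, mutex, etc. ... */
    };

    static int enqueue(struct heal_queue *q, struct heal_request *req)
    {
            if (q->length >= q->max_length)
                    return -1;    /* queue full, request rejected */
            /* ... add req to the list ... */
            q->length++;
            return 0;
    }

    void schedule_heal(struct heal_queue *q)
    {
            struct heal_request *req = calloc(1, sizeof(*req));
            if (!req)
                    return;

            if (enqueue(q, req) != 0) {
                    /* The buggy version simply returned here, leaking req
                     * each time the queue overflowed under load. */
                    free(req);    /* release the rejected request */
            }
    }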
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/