Description of problem:
For the past 1-2 weeks, we've experienced major memory leaks in the glusterfs client, resulting in glusterfs consuming whatever RAM is available until the machine dies.

Version-Release number of selected component (if applicable):
glusterfs-client 3.12.12-ubuntu1~xenial1

How reproducible:
Leave a k8s pod running with a small MySQL database; over a few days the associated glusterfs mount will grow to use all the free RAM.

Steps to Reproduce:
The bug has apparently already been spotted upstream and a patch committed:
https://bugzilla.redhat.com/show_bug.cgi?id=1593826#c24
https://review.gluster.org/#/c/20437/

Actual results:
RAM consumption keeps growing until the client is OOM-killed or the machine dies.

Expected results:
A normal amount of RAM is consumed, in line with the configured performance/cache options.

Additional info:
As suggested, I'm opening this issue to track a backport of Change-Id: If4cc4c2db075221b9ed731bacb7cc035f7007c5b into the 3.12.x branch. Cheers!
Amar - I see the patch mentioned above was written by you. Would you take care of the backport request?
REVIEW: https://review.gluster.org/20723 (cluster/afr: Fix dict-leak in pre-op) posted (#1) for review on release-3.12 by Amar Tumballi
Atin, while the RCA above was wrong, I was able to identify the proper fix and have posted the patch.
COMMIT: https://review.gluster.org/20723 committed in release-3.12 by "jiffin tony Thottan" <jthottan> with a commit message- cluster/afr: Fix dict-leak in pre-op

At the time of pre-op, pre_op_xdata is populated with the xattrs we get from the disk, and at the time of post-op it gets over-written without unreffing the previously stored value, leading to a leak. This is a regression we missed in https://review.gluster.org/#/q/ba149bac92d169ae2256dbc75202dc9e5d06538e

Originally:
> Signed-off-by: Pranith Kumar K <pkarampu>
> (cherry picked from commit e7b79c59590c203c65f7ac8548b30d068c232d33)

Change-Id: I0456f9ad6f77ce6248b747964a037193af3a3da7
Fixes: bz#1613512
Signed-off-by: Amar Tumballi <amarts>
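For readers following along: the leak described in the commit message is the classic refcount-overwrite shape. Below is a minimal, self-contained sketch of that pattern and of the fix. It is not the actual glusterfs/AFR code; the xdict_t type, the xdict_* helpers and the txn_local_t struct are hypothetical stand-ins for gluster's ref-counted dict_t and the AFR transaction locals.

    /* Illustration only: simplified stand-in for gluster's ref-counted dict_t. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        int refcount;
    } xdict_t;

    static xdict_t *xdict_new(void)
    {
        xdict_t *d = calloc(1, sizeof(*d));
        d->refcount = 1;
        return d;
    }

    static void xdict_unref(xdict_t *d)
    {
        if (d && --d->refcount == 0)
            free(d);
    }

    typedef struct {
        xdict_t *pre_op_xdata;   /* analogous to the dict stored during pre-op */
    } txn_local_t;

    /* Buggy pattern: the pointer is simply overwritten during post-op, so the
     * reference taken during pre-op is never released -> one dict leaked per
     * transaction. */
    static void post_op_leaky(txn_local_t *local, xdict_t *fresh)
    {
        local->pre_op_xdata = fresh;
    }

    /* Fixed pattern (the general shape of the backported change): drop the old
     * reference before storing the new one. */
    static void post_op_fixed(txn_local_t *local, xdict_t *fresh)
    {
        if (local->pre_op_xdata)
            xdict_unref(local->pre_op_xdata);
        local->pre_op_xdata = fresh;
    }

    int main(void)
    {
        txn_local_t local = { .pre_op_xdata = xdict_new() }; /* set during pre-op */

        post_op_fixed(&local, xdict_new());   /* old dict freed, new one stored */
        xdict_unref(local.pre_op_xdata);      /* transaction teardown */

        (void)post_op_leaky;                  /* kept only to show the bad shape */
        return 0;
    }

Since the leak happens once per write transaction, it explains why a steadily writing workload (such as the small MySQL database above) grows client memory over days rather than immediately.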
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.13, please open a new bug report. glusterfs-3.12.13 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2018-August/000107.html [2] https://www.gluster.org/pipermail/gluster-users/