Description of problem:
For the past 1-2 weeks, we've experienced major memory leaks in the glusterfs-client, resulting in glusterfs consuming whatever RAM is available until the machine dies.
Version-Release number of selected component (if applicable):
glusterfs 3.12.x

Steps to Reproduce:
1. Leave a k8s pod running with a small MySQL database backed by a glusterfs mount.
2. Wait a few days; the associated glusterfs mount process grows to use all the free RAM.

Actual results:
glusterfs consumes RAM until it is OOM-killed or takes down the machine.

Expected results:
glusterfs consumes a normal amount of RAM, as bounded by the configured cache performance flags.

The bug has apparently been spotted and a patch committed:
Change-Id: If4cc4c2db075221b9ed731bacb7cc035f7007c5b

As suggested, I'm opening this issue to track a backport of that change into the 3.12.x branch.
Amar - I see the patch mentioned was written by you. Would you take care of the backport request?
REVIEW: https://review.gluster.org/20723 (cluster/afr: Fix dict-leak in pre-op) posted (#1) for review on release-3.12 by Amar Tumballi
Atin, while the RCA above was wrong, I was able to identify the proper fix and have posted the patch.
COMMIT: https://review.gluster.org/20723 committed in release-3.12 by "jiffin tony Thottan" <firstname.lastname@example.org> with a commit message- cluster/afr: Fix dict-leak in pre-op
At the time of pre-op, pre_op_xdata is populated with the xattrs we get from the
disk, and at the time of post-op it gets overwritten without unreffing the
previous value stored, leading to a leak.
This is a regression we missed in
> Signed-off-by: Pranith Kumar K <email@example.com>
> (cherry picked from commit e7b79c59590c203c65f7ac8548b30d068c232d33)
Signed-off-by: Amar Tumballi <firstname.lastname@example.org>
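The leak pattern described in the commit message can be sketched with a minimal ref-counted dict. Note this is a simplified stand-in, not the real glusterfs dict_t API: the type, the leak_delta helper, and the pre_op/post_op function names here are illustrative assumptions; only the overwrite-without-unref mistake and its fix mirror the actual patch.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for glusterfs' ref-counted dict_t. */
typedef struct dict { int refcount; } dict;

static int live_dicts = 0; /* tracks allocations so a leak is observable */

static dict *dict_new(void) {
    dict *d = calloc(1, sizeof(*d));
    d->refcount = 1;
    live_dicts++;
    return d;
}

static dict *dict_ref(dict *d) {
    if (d) d->refcount++;
    return d;
}

static void dict_unref(dict *d) {
    if (d && --d->refcount == 0) {
        free(d);
        live_dicts--;
    }
}

/* Buggy pattern: post-op overwrites the slot holding the dict stored at
 * pre-op time without dropping its reference, so it is never freed. */
static void post_op_buggy(dict **pre_op_xdata, dict *fresh) {
    *pre_op_xdata = dict_ref(fresh); /* previous value leaked */
}

/* Fixed pattern: unref the previous value before storing the new one. */
static void post_op_fixed(dict **pre_op_xdata, dict *fresh) {
    if (*pre_op_xdata)
        dict_unref(*pre_op_xdata);
    *pre_op_xdata = dict_ref(fresh);
}

/* Run one pre-op/post-op cycle and report how many dicts were leaked. */
static int leak_delta(void (*post_op)(dict **, dict *)) {
    int before = live_dicts;
    dict *slot = dict_new();  /* dict stored during pre-op */
    dict *fresh = dict_new(); /* new xattrs arriving at post-op */
    post_op(&slot, fresh);
    dict_unref(fresh);        /* drop caller's reference */
    dict_unref(slot);         /* final cleanup of the slot */
    return live_dicts - before;
}
```

With the buggy post-op, each pre-op/post-op cycle leaks one dict; repeated over days of MySQL I/O this matches the unbounded RSS growth reported above.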
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.3, please open a new bug report.
glusterfs-3.12.3 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.