Bug 1550078

Summary: memory leak in pre-op in replicate volumes for every write
Product: [Community] GlusterFS
Component: replicate
Version: mainline
Reporter: Pranith Kumar K <pkarampu>
Assignee: bugs <bugs>
CC: bugs
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-v4.1.0
Last Closed: 2018-06-20 18:01:20 UTC
Type: Bug
Clones: 1550808, 1552360 (view as bug list)
Bug Blocks: 1550808, 1552360

Description Pranith Kumar K 2018-02-28 13:19:32 UTC
Description of problem:
When the glusterfs repository is git-cloned onto the volume, we see a lot of dict leaks:
root@dhcp35-190 - ~ 
18:48:02 :) ⚡ grep -w num_allocs /var/run/gluster/glusterdump.3513.dump.1519822531 | cut -f2 -d'=' | sort -n | tail -10
334
334
442
442
18656
18656
18656
18656
18656
18656

After the fix, for the same workload:
root@dhcp35-190 - ~ 
18:48:07 :) ⚡ grep -w num_allocs /var/run/gluster/glusterdump.12424.dump.1519823735 | cut -f2 -d'=' | sort -n | tail -10
26
29
54
89
324
324
334
334
442
442

This is a regression we missed in:
https://review.gluster.org/#/q/ba149bac92d169ae2256dbc75202dc9e5d06538e
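For context, the dump files above are client-process statedumps, and the grep pulls out the per-type allocation counts. Below is a minimal sketch of how such a dump can be produced and inspected; the mount point is hypothetical and the dump directory may differ depending on how glusterfs was configured:

# Find the FUSE client process for the mount (the path is only an example)
pid=$(pgrep -f 'glusterfs.*/mnt/replica')

# SIGUSR1 asks the process to write a statedump, by default to
# /var/run/gluster/glusterdump.<pid>.dump.<timestamp>
kill -USR1 "$pid"

# Compare the highest per-type allocation counts before and after the workload
grep -w num_allocs /var/run/gluster/glusterdump."$pid".dump.* |
    cut -f2 -d'=' | sort -n | tail -10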


Comment 1 Worker Ant 2018-02-28 13:24:39 UTC
REVIEW: https://review.gluster.org/19647 (cluster/afr: Fix dict-leak in pre-op) posted (#1) for review on master by Pranith Kumar Karampuri
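For readers unfamiliar with the code path, the class of leak being fixed is the usual dict reference-counting pattern: a dict built for the pre-op xattrop starts with one reference from dict_new(), and the caller must drop that reference once the call has been wound, otherwise every write leaks one dict. The sketch below only illustrates that pattern; it is not the actual patch, and the function name is hypothetical:

/* Illustration of the dict ref/unref pattern, not the AFR code itself. */
#include "dict.h"   /* libglusterfs dict API */

static int
afr_build_pre_op_xattr_demo (void)          /* hypothetical helper */
{
        dict_t *xattr = dict_new ();        /* refcount == 1 */
        if (!xattr)
                return -1;

        if (dict_set_int32 (xattr, "demo-key", 1) < 0) {
                dict_unref (xattr);         /* drop the ref on the error path */
                return -1;
        }

        /*
         * ... wind the pre-op xattrop with `xattr` here; the callee takes
         * its own reference, so the caller must release its own.  Missing
         * this unref is exactly the kind of leak that shows up as an
         * ever-growing num_allocs for dict_t in the statedump.
         */
        dict_unref (xattr);
        return 0;
}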

Comment 2 Shyamsundar 2018-06-20 18:01:20 UTC
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/