Bug 1550078 - memory leak in pre-op in replicate volumes for every write
Summary: memory leak in pre-op in replicate volumes for every write
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1550808 1552360
Reported: 2018-02-28 13:19 UTC by Pranith Kumar K
Modified: 2018-06-20 18:01 UTC
CC List: 1 user

Fixed In Version: glusterfs-v4.1.0
Clone Of:
Clones: 1550808 1552360
Environment:
Last Closed: 2018-06-20 18:01:20 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Description Pranith Kumar K 2018-02-28 13:19:32 UTC
Description of problem:
When the glusterfs source is git cloned onto the volume, we see a lot of dict leaks:
root@dhcp35-190 - ~ 
18:48:02 :) ⚡ grep -w num_allocs /var/run/gluster/glusterdump.3513.dump.1519822531 | cut -f2 -d'=' | sort -n | tail -10
334
334
442
442
18656
18656
18656
18656
18656
18656

After fix, for the same workload:
root@dhcp35-190 - ~ 
18:48:07 :) ⚡ grep -w num_allocs /var/run/gluster/glusterdump.12424.dump.1519823735 | cut -f2 -d'=' | sort -n | tail -10
26
29
54
89
324
324
334
334
442
442

This is a regression we missed in:
https://review.gluster.org/#/q/ba149bac92d169ae2256dbc75202dc9e5d06538e
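
GlusterFS dict_t objects are reference counted (dict_new()/dict_ref()/dict_unref()), and num_allocs in a statedump tracks the currently live allocations of each type, so a missing unref on a per-write path shows up exactly like this: the count grows with every write and never comes back down. Below is a minimal self-contained sketch of that leak class, not the actual AFR code; mock_dict_t, do_write() and live_dicts are hypothetical stand-ins, with live_dicts playing the role of num_allocs:

#include <stdio.h>
#include <stdlib.h>

static int live_dicts; /* stand-in for the statedump's num_allocs */

typedef struct {
    int refs;
} mock_dict_t;

static mock_dict_t *
mock_dict_new(void)
{
    mock_dict_t *d = calloc(1, sizeof(*d));
    d->refs = 1;
    live_dicts++;
    return d;
}

static void
mock_dict_unref(mock_dict_t *d)
{
    if (--d->refs == 0) {
        free(d);
        live_dicts--;
    }
}

/* One "write": the pre-op builds a changelog xattr dict for the fop.
 * In the fixed variant the completion path releases the dict; in the
 * leaky variant it never does, so the dict outlives the write. */
static void
do_write(int fixed)
{
    mock_dict_t *xattr = mock_dict_new();
    /* ... the pre-op fop would be wound here, carrying xattr ... */
    if (fixed)
        mock_dict_unref(xattr);
}

static int
leaked_over(int fixed, int writes)
{
    int before = live_dicts;
    for (int i = 0; i < writes; i++)
        do_write(fixed);
    return live_dicts - before;
}

int
main(void)
{
    printf("leaky pre-op: %d dicts leaked over 1000 writes\n",
           leaked_over(0, 1000));
    printf("fixed pre-op: %d dicts leaked over 1000 writes\n",
           leaked_over(1, 1000));
    return 0;
}

The leaky run reports 1000 live dicts left behind, mirroring how num_allocs climbed to 18656 above, while the fixed run reports 0.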

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Consistently; the leak occurs on every write (see summary).

Steps to Reproduce:
1. Create and mount a replicate (AFR) volume.
2. Run a write-heavy workload on the mount, e.g. git clone the glusterfs source tree onto it.
3. Take a statedump of the mount process and inspect num_allocs, as in the grep pipeline above.

Actual results:
num_allocs for the leaked dict allocations climbs into the tens of thousands (18656 in the first dump above) and keeps growing with every write.

Expected results:
Allocation counts stay bounded for the same workload (in the low hundreds, as in the post-fix dump above).

Additional info:

Comment 1 Worker Ant 2018-02-28 13:24:39 UTC
REVIEW: https://review.gluster.org/19647 (cluster/afr: Fix dict-leak in pre-op) posted (#1) for review on master by Pranith Kumar Karampuri
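
The review above re-balances the ref/unref pairing for the pre-op dict. As an illustration only (the names below mimic, but are not, the real afr changelog functions), the ownership discipline such a fix enforces looks like the following sketch: the winder takes a ref for the in-flight fop, the completion callback drops exactly that ref, and the creator drops its own creation ref, so the live dict count returns to zero after each write:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

static int live_dicts; /* stand-in for num_allocs in the statedump */

typedef struct {
    int refs;
} mock_dict_t;

static mock_dict_t *
mock_dict_new(void)
{
    mock_dict_t *d = calloc(1, sizeof(*d));
    d->refs = 1;
    live_dicts++;
    return d;
}

static mock_dict_t *
mock_dict_ref(mock_dict_t *d)
{
    d->refs++;
    return d;
}

static void
mock_dict_unref(mock_dict_t *d)
{
    if (--d->refs == 0) {
        free(d);
        live_dicts--;
    }
}

/* Completion callback: drops the in-flight ref taken by the winder.
 * This is the unref that the buggy code path lost. */
static void
pre_op_cbk(mock_dict_t *xattr)
{
    /* ... process the pre-op reply ... */
    mock_dict_unref(xattr);
}

/* Winder: takes a ref that stays alive while the fop is in flight;
 * the callback is invoked synchronously here for simplicity. */
static void
wind_pre_op(mock_dict_t *xattr)
{
    pre_op_cbk(mock_dict_ref(xattr));
}

int
main(void)
{
    mock_dict_t *xattr = mock_dict_new();
    wind_pre_op(xattr);
    mock_dict_unref(xattr); /* creator drops its own creation ref */
    assert(live_dicts == 0); /* balanced: num_allocs stays flat */
    printf("live dicts after one write: %d\n", live_dicts);
    return 0;
}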

Comment 2 Shyamsundar 2018-06-20 18:01:20 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

