Bug 1552360 - memory leak in pre-op in replicate volumes for every write
Summary: memory leak in pre-op in replicate volumes for every write
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Pranith Kumar K
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On: 1550078
Blocks: 1503137 1550808
 
Reported: 2018-03-07 00:29 UTC by Pranith Kumar K
Modified: 2018-09-18 06:52 UTC (History)
5 users

Fixed In Version: glusterfs-3.12.2-6
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1550078
Environment:
Last Closed: 2018-09-04 06:44:11 UTC


Attachments


Links:
Red Hat Product Errata RHSA-2018:2607 (last updated 2018-09-04 06:45:12 UTC)

Description Pranith Kumar K 2018-03-07 00:29:25 UTC
+++ This bug was initially created as a clone of Bug #1550078 +++

Description of problem:
When the glusterfs source tree is git-cloned onto the volume, we see a lot of dict leaks:
root@dhcp35-190 - ~ 
18:48:02 :) ⚡ grep -w num_allocs /var/run/gluster/glusterdump.3513.dump.1519822531 | cut -f2 -d'=' | sort -n | tail -10
334
334
442
442
18656
18656
18656
18656
18656
18656

After fix, for the same workload:
root@dhcp35-190 - ~ 
18:48:07 :) ⚡ grep -w num_allocs /var/run/gluster/glusterdump.12424.dump.1519823735 | cut -f2 -d'=' | sort -n | tail -10
26
29
54
89
324
324
334
334
442
442

This is a regression we missed in:
https://review.gluster.org/#/q/ba149bac92d169ae2256dbc75202dc9e5d06538e
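The grep pipeline above pulls every num_allocs counter out of a client statedump and lists the ten largest; the repeated 18656 entries before the fix are the leaked dicts growing with each write. A minimal sketch wrapping that pipeline as a reusable function (the dump path in the usage comment is illustrative; statedumps are normally written under /var/run/gluster/ when the glusterfs process receives SIGUSR1, or via "gluster volume statedump <volname>" on the server side):

```shell
#!/bin/sh
# top_allocs: list the ten largest num_allocs counters in a glusterfs
# statedump file. A counter that keeps climbing across successive dumps
# for the same workload points at a leaking allocator.
top_allocs() {
    grep -w num_allocs "$1" | cut -f2 -d'=' | sort -n | tail -10
}

# usage: top_allocs /var/run/gluster/glusterdump.<pid>.dump.<timestamp>
```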

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

--- Additional comment from Worker Ant on 2018-02-28 08:24:39 EST ---

REVIEW: https://review.gluster.org/19647 (cluster/afr: Fix dict-leak in pre-op) posted (#1) for review on master by Pranith Kumar Karampuri

Comment 6 Vijay Avuthu 2018-07-27 10:04:41 UTC
Update:
=========

1) Create a 1 x 3 replicate volume and start it
2) git clone the glusterfs source onto the volume (mount point)
3) Take a statedump from the client and check the num_allocs counts
4) Remove the cloned glusterfs tree
5) git clone the glusterfs source onto the volume again
6) Take a statedump from the client and check the num_allocs counts
7) Remove the cloned glusterfs tree
8) git clone the glusterfs source onto the volume again
9) Take a statedump from the client and check the num_allocs counts

Observations:

Did not see much increase in num_allocs across iterations.

# grep -w num_allocs glusterdump.9167.dump.1532676241_after_1st_clone | cut -f2 -d'=' | sort -n | tail -10
2651
2651
2651
2651
2657
2657
2667
2667
7954
7954
# grep -w num_allocs glusterdump.9167.dump.1532683881_after_2nd_clone | cut -f2 -d'=' | sort -n | tail -10
2651
2651
2651
2651
2836
2836
2846
2846
7954
7954
# grep -w num_allocs glusterdump.9167.dump.1532684833_after_3rd_clone | cut -f2 -d'=' | sort -n | tail -10
2651
2651
2651
2651
2890
2890
2900
2900
7954
7954
#
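One way to compare the successive dumps above with a single figure is to total the counters; a hedged sketch (the file names in the usage comment are illustrative, and awk does the summing):

```shell
#!/bin/sh
# total_allocs: sum every num_allocs counter in a statedump, so dumps
# taken after each git clone can be compared with one number. A leak
# shows up as a total that keeps growing for the same repeated workload.
total_allocs() {
    grep -w num_allocs "$1" | cut -f2 -d'=' | awk '{s += $1} END {print s}'
}

# usage:
#   total_allocs glusterdump.9167.dump.1532676241_after_1st_clone
#   total_allocs glusterdump.9167.dump.1532683881_after_2nd_clone
```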

Comment 7 Vijay Avuthu 2018-07-31 07:04:45 UTC
Details are in the comment above.

Changing status to Verified.

Comment 9 errata-xmlrpc 2018-09-04 06:44:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

