Bug 1779055

Summary: glusterfs process memory leak in ior test
Product: [Community] GlusterFS
Reporter: zhou lin <zz.sh.cynthia>
Component: read-ahead
Assignee: bugs <bugs>
Status: CLOSED NEXTRELEASE
QA Contact:
Severity: high
Docs Contact:
Priority: unspecified
Version: mainline
CC: bugs, jahernan, pasik, shujun.huang
Target Milestone: ---
Keywords: Triaged
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1781550 1789336 1789337 (view as bug list)
Environment:
Last Closed: 2019-12-10 05:01:24 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1781550, 1789336, 1789337    
Attachments:
Description: glusterfs client process statedump, memory is increasing
Flags: none

Description zhou lin 2019-12-03 07:45:36 UTC
Created attachment 1641577 [details]
glusterfs client process statedump, memory is increasing

Description of problem:
When testing with the ior tool, a memory leak was found in the glusterfs client process.
No other operations were running; only I/O was performed through the gluster client process.
The glusterfs client process consumes more and more memory.
Version-Release number of selected component (if applicable):
# glusterfs -V
glusterfs 7.0
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


How reproducible:


Steps to Reproduce:
1. Begin I/O with the ior tool:
python /opt/tool/ior -mfx -s 1000 -n 100 -t 10 /mnt/testvol/testdir
2. Take a statedump of the glusterfs client process (see the command sketch below)
3. From the statedumps, the glusterfs client process memory keeps increasing, even after the test is stopped and all created files are deleted
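
For step 2, the statedump can be taken by sending SIGUSR1 to the glusterfs client process; the dump is written to the statedump directory (/var/run/gluster by default; gluster --print-statedumpdir reports the configured location). The pgrep pattern and file names below are an illustrative sketch only:
# pgrep -f 'glusterfs.*testvol'
# kill -USR1 <pid-of-glusterfs-client>
# ls /var/run/gluster/glusterdump.<pid>.dump.*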

Actual results:
Memory usage goes up and never comes back down, even after all created files are removed.

Expected results:
Memory usage returns to normal.

Additional info:
From the statedump, xlator.mount.fuse.itable.active_size seems to keep increasing; this can be seen in the attached statedump.
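
A quick way to confirm the growth is to compare the active_size counter across successive dumps; the glob below assumes the default statedump location and is only a sketch:
# grep active_size /var/run/gluster/glusterdump.*.dump.*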

Comment 1 zhou lin 2019-12-03 07:57:23 UTC
# gluster v info config
 
Volume Name: config
Type: Replicate
Volume ID: e4690308-7345-4e32-8d31-b13e10e87112
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 169.254.0.30:/mnt/bricks/config/brick
Brick2: 169.254.0.28:/mnt/bricks/config/brick
Options Reconfigured:
performance.client-io-threads: off
server.allow-insecure: on
network.frame-timeout: 180
network.ping-timeout: 42
cluster.consistent-metadata: off
cluster.favorite-child-policy: mtime
cluster.server-quorum-type: none
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
cluster.server-quorum-ratio: 51
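
Note that readdir-ahead does not appear among the reconfigured options above, so it runs with its default setting; as a sanity check (sketch only, volume name taken from the output above), its state can be queried with:
# gluster volume get config performance.readdir-ahead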

Comment 2 zhou lin 2019-12-04 02:54:53 UTC
From the statedump it is quite obvious that xlator.mount.fuse.itable.active_size keeps growing;
more and more sections like the following appear in the statedump:

[xlator.mount.fuse.itable.active.1]
gfid=924e4dde-79a5-471b-9a6e-7d769f0bae61
nlookup=0
fd-count=0
active-fd-count=0
ref=100
invalidate-sent=0
ia_type=2
ref_by_xl:.hsjvol-client-0=1
ref_by_xl:.hsjvol-readdir-ahead=99
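
As a rough way to quantify this from the dump, one can count the active inode sections and sum the references held by the readdir-ahead xlator; the dump path and file names below are assumptions (default statedump directory):
# grep -c '^\[xlator.mount.fuse.itable.active' /var/run/gluster/glusterdump.<pid>.dump.*
# grep 'ref_by_xl:.*readdir-ahead' /var/run/gluster/glusterdump.<pid>.dump.* | awk -F= '{sum += $2} END {print sum}'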

Comment 3 Worker Ant 2019-12-05 05:55:21 UTC
REVIEW: https://review.gluster.org/23811 (To fix readdir-ahead memory leak) posted (#1) for review on master by None

Comment 4 Worker Ant 2019-12-05 06:37:29 UTC
REVIEW: https://review.gluster.org/23812 (To fix readdir-ahead memory leak) posted (#1) for review on master by None

Comment 5 Worker Ant 2019-12-05 07:05:12 UTC
REVIEW: https://review.gluster.org/23813 (To fix readdir-ahead memory leak) posted (#1) for review on master by None

Comment 6 Worker Ant 2019-12-05 08:08:58 UTC
REVIEW: https://review.gluster.org/23815 (To fix readdir-ahead memory leak) posted (#1) for review on master by None

Comment 7 Worker Ant 2019-12-10 05:01:24 UTC
REVIEW: https://review.gluster.org/23815 (To fix readdir-ahead memory leak) merged (#2) on master by Amar Tumballi