Bug 1789337 - glusterfs process memory leak in ior test
Summary: glusterfs process memory leak in ior test
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: read-ahead
Version: 6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Xavi Hernandez
QA Contact:
URL:
Whiteboard:
Depends On: 1779055
Blocks: 1806846
 
Reported: 2020-01-09 11:56 UTC by Xavi Hernandez
Modified: 2020-03-02 07:57 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1779055
Environment:
Last Closed: 2020-02-11 08:28:24 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
Gluster.org Gerrit 23988 (Status: Merged): To fix readdir-ahead memory leak. Last Updated: 2020-02-11 08:28:22 UTC

Description Xavi Hernandez 2020-01-09 11:56:01 UTC
+++ This bug was initially created as a clone of Bug #1779055 +++

Description of problem:
When testing with the ior tool, a memory leak was found in the glusterfs client process. No other operations are carried out; only I/O is performed through the gluster client process, yet the glusterfs client process consumes more and more memory.
Version-Release number of selected component (if applicable):
# glusterfs -V
glusterfs 7.0
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


How reproducible:


Steps to Reproduce:
1. Begin I/O with the ior tool:
   python /opt/tool/ior -mfx -s 1000 -n 100 -t 10 /mnt/testvol/testdir
2. Take periodic statedumps of the glusterfs client process (see the command sketch below).
3. Compare the statedumps: the glusterfs client process memory keeps increasing, even after the test is stopped and all created files are deleted.
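
For reference, a minimal sketch for step 2, assuming a single FUSE mount process and the default statedump directory (/var/run/gluster):

# Trigger a statedump of the glusterfs client by sending SIGUSR1 (sketch)
kill -USR1 $(pidof glusterfs)
# By default the dump is written under /var/run/gluster as glusterdump.<pid>.dump.<timestamp>
ls -l /var/run/gluster/glusterdump.*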

Actual results:
Memory usage keeps increasing and never comes back down, even after all created files are removed.

Expected results:
Memory usage returns to normal after the files are removed.

Additional info:
From the statedump, xlator.mount.fuse.itable.active_size appears to keep increasing; this can be seen in the enclosed statedump.
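
A rough way to quantify the growth between successive statedumps (a sketch; the dump path and file naming are assumptions based on defaults):

# Count the active inode-table sections in each statedump (sketch)
grep -c '^\[xlator.mount.fuse.itable.active\.' /var/run/gluster/glusterdump.*.dump.*
# Or read the summary counter directly
grep 'xlator.mount.fuse.itable.active_size' /var/run/gluster/glusterdump.*.dump.*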

--- Additional comment from zhou lin on 2019-12-03 08:57:23 CET ---

# gluster v info config
 
Volume Name: config
Type: Replicate
Volume ID: e4690308-7345-4e32-8d31-b13e10e87112
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 169.254.0.30:/mnt/bricks/config/brick
Brick2: 169.254.0.28:/mnt/bricks/config/brick
Options Reconfigured:
performance.client-io-threads: off
server.allow-insecure: on
network.frame-timeout: 180
network.ping-timeout: 42
cluster.consistent-metadata: off
cluster.favorite-child-policy: mtime
cluster.server-quorum-type: none
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
cluster.server-quorum-ratio: 51

--- Additional comment from zhou lin on 2019-12-04 03:54:53 CET ---

From the statedump it is quite obvious that xlator.mount.fuse.itable.active_size keeps growing; more and more sections like the following appear in the statedump:

[xlator.mount.fuse.itable.active.1]
gfid=924e4dde-79a5-471b-9a6e-7d769f0bae61
nlookup=0
fd-count=0
active-fd-count=0
ref=100
invalidate-sent=0
ia_type=2
ref_by_xl:.hsjvol-client-0=1
ref_by_xl:.hsjvol-readdir-ahead=99
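
The ref_by_xl counters above point at the readdir-ahead xlator holding most of the inode references. As a diagnostic or temporary mitigation sketch (not verified in this report to resolve the leak; <volname> is a placeholder for the affected volume), readdir-ahead can be toggled per volume:

# Check whether readdir-ahead is enabled (sketch)
gluster volume get <volname> performance.readdir-ahead
# Temporarily disable it until the fix is available
gluster volume set <volname> performance.readdir-ahead off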

Comment 1 Worker Ant 2020-01-10 06:04:48 UTC
REVIEW: https://review.gluster.org/23988 (To fix readdir-ahead memory leak) posted (#1) for review on release-6 by Krutika Dhananjay

Comment 2 Worker Ant 2020-02-11 08:28:24 UTC
REVIEW: https://review.gluster.org/23988 (To fix readdir-ahead memory leak) merged (#2) on release-6 by hari gowtham

