Bug 1781550 - glusterfs process memory leak in ior test
Summary: glusterfs process memory leak in ior test
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: read-ahead
Version: rhgs-3.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 3
Assignee: Xavi Hernandez
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On: 1779055
Blocks:
 
Reported: 2019-12-10 09:17 UTC by hari gowtham
Modified: 2020-12-17 04:51 UTC
CC List: 11 users

Fixed In Version: glusterfs-6.0-38
Doc Type: No Doc Update
Doc Text:
Clone Of: 1779055
Environment:
Last Closed: 2020-12-17 04:50:48 UTC
Embargoed:


Attachments


Links
Red Hat Product Errata RHBA-2020:5603 (last updated 2020-12-17 04:51:35 UTC)

Description hari gowtham 2019-12-10 09:17:24 UTC
+++ This bug was initially created as a clone of Bug #1779055 +++

Description of problem:
When testing with the ior tool, a memory leak was found in the glusterfs client process.
No other operations were running; only I/O was performed through the gluster client process.
The glusterfs client process consumes more and more memory.
Version-Release number of selected component (if applicable):
# glusterfs -V
glusterfs 7.0
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.


How reproducible:


Steps to Reproduce:
1. Begin I/O with the ior tool:
python /opt/tool/ior -mfx -s 1000 -n 100 -t 10 /mnt/testvol/testdir
2. Take a statedump of the glusterfs client process (a sketch of how to trigger one follows this list).
3. From the statedumps, the glusterfs client process memory keeps increasing, even after the test is stopped and all created files are deleted.
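A statedump of the fuse client can usually be triggered by sending SIGUSR1 to the glusterfs mount process; the dump is then written under the statedump directory (by default /var/run/gluster). A minimal sketch, assuming the mount process for the test volume and the default dump path and naming:

# pid=$(pgrep -f 'glusterfs.*testvol')              # pid of the fuse mount process (pattern is an assumption)
# kill -USR1 "$pid"                                 # request a statedump from the client
# ls -t /var/run/gluster/glusterdump.$pid.dump.*    # newest dump file(s); naming may vary by build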

Actual results:
Memory usage keeps going up and never comes back down, even after all created files are removed.

Expected results:
Memory usage returns to normal.

Additional info:
From the statedumps, xlator.mount.fuse.itable.active_size seems to keep increasing; this can be seen in the enclosed statedump (a quick grep to track it is sketched below).
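One way to watch that counter across successive dumps (a sketch only, assuming SIGUSR1-triggered statedumps under the default /var/run/gluster path):

# grep 'itable.active_size' /var/run/gluster/glusterdump.<pid>.dump.*   # one value per dump; it should flatten out once files are deleted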

--- Additional comment from zhou lin on 2019-12-03 07:57:23 UTC ---

# gluster v info config
 
Volume Name: config
Type: Replicate
Volume ID: e4690308-7345-4e32-8d31-b13e10e87112
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 169.254.0.30:/mnt/bricks/config/brick
Brick2: 169.254.0.28:/mnt/bricks/config/brick
Options Reconfigured:
performance.client-io-threads: off
server.allow-insecure: on
network.frame-timeout: 180
network.ping-timeout: 42
cluster.consistent-metadata: off
cluster.favorite-child-policy: mtime
cluster.server-quorum-type: none
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
cluster.server-quorum-ratio: 51

--- Additional comment from zhou lin on 2019-12-04 02:54:53 UTC ---

From the statedumps it is quite obvious that
xlator.mount.fuse.itable.active_size keeps growing:
more and more sections like the following keep appearing in the statedump
(a quick way to quantify them is sketched after the excerpt).

[xlator.mount.fuse.itable.active.1]
gfid=924e4dde-79a5-471b-9a6e-7d769f0bae61
nlookup=0
fd-count=0
active-fd-count=0
ref=100
invalidate-sent=0
ia_type=2
ref_by_xl:.hsjvol-client-0=1
ref_by_xl:.hsjvol-readdir-ahead=99
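A rough way to quantify this from a single dump file; only a sketch, where the "hsjvol" prefix comes from the excerpt above and the dump path and name are assumed defaults:

# dump=/var/run/gluster/glusterdump.<pid>.dump.<timestamp>
# grep -c '^\[xlator.mount.fuse.itable.active\.' "$dump"                            # number of inodes still held active
# grep 'ref_by_xl:.*readdir-ahead' "$dump" | awk -F= '{sum += $2} END {print sum}'  # total inode refs held by readdir-ahead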

--- Additional comment from Worker Ant on 2019-12-05 05:55:21 UTC ---

REVIEW: https://review.gluster.org/23811 (To fix readdir-ahead memory leak) posted (#1) for review on master by None

--- Additional comment from Worker Ant on 2019-12-05 06:37:29 UTC ---

REVIEW: https://review.gluster.org/23812 (To fix readdir-ahead memory leak) posted (#1) for review on master by None

--- Additional comment from Worker Ant on 2019-12-05 07:05:12 UTC ---

REVIEW: https://review.gluster.org/23813 (To fix readdir-ahead memory leak) posted (#1) for review on master by None

--- Additional comment from Worker Ant on 2019-12-05 08:08:58 UTC ---

REVIEW: https://review.gluster.org/23815 (To fix readdir-ahead memory leak) posted (#1) for review on master by None

--- Additional comment from Worker Ant on 2019-12-10 05:01:24 UTC ---

REVIEW: https://review.gluster.org/23815 (To fix readdir-ahead memory leak) merged (#2) on master by Amar Tumballi

Comment 13 Manisha Saini 2020-11-11 08:11:41 UTC
Verified this BZ with 

# rpm -qa | grep gluster
glusterfs-libs-6.0-46.el7rhgs.x86_64
glusterfs-api-6.0-46.el7rhgs.x86_64
glusterfs-geo-replication-6.0-46.el7rhgs.x86_64
glusterfs-6.0-46.el7rhgs.x86_64
glusterfs-fuse-6.0-46.el7rhgs.x86_64
glusterfs-cli-6.0-46.el7rhgs.x86_64
python2-gluster-6.0-46.el7rhgs.x86_64
glusterfs-client-xlators-6.0-46.el7rhgs.x86_64
glusterfs-server-6.0-46.el7rhgs.x86_64


Created multiple files on the mount point.
Also ran fio and crefi to create multiple files on the mount point, then deleted the files from the mount point. No spike in memory was observed. Moving this BZ to verified state.
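For context, resident memory of the client process can be sampled before, during, and after such a run with plain ps; a generic sketch (the log path is only an example):

# while sleep 60; do date; ps -o pid,rss,vsz,cmd -C glusterfs; done >> /tmp/glusterfs-rss.log   # RSS/VSZ in KiB, sampled once a minute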

Comment 15 errata-xmlrpc 2020-12-17 04:50:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (glusterfs bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5603

