Bug 1476992 - inode table lru list leak with glusterfs fuse mount
Summary: inode table lru list leak with glusterfs fuse mount
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: mainline
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-08-01 03:39 UTC by Ryan Ding
Modified: 2019-06-18 10:28 UTC
CC: 5 users

Fixed In Version: glusterfs-6.x
Clone Of:
Environment:
Last Closed: 2019-06-18 10:28:49 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
Gluster Volume File (4.37 KB, text/plain)
2017-10-26 14:11 UTC, Danny Lee

Description Ryan Ding 2017-08-01 03:39:32 UTC
Description of problem:
The glusterfs fuse mount process runs out of system memory.

Steps to Reproduce:
1. Mount a glusterfs fuse client.
2. Keep creating files on that mount point (a minimal loop is sketched at the end of this comment).
3. The glusterfs process's memory usage keeps increasing until it exhausts all system memory.

Expected results:
The glusterfs fuse client should disable the inode cache (the lru inode list) by default.
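
For reference, a minimal sketch of the kind of workload in step 2 (the mount point path is a placeholder; any steady file-creation loop shows the growth):

# create many small files on the fuse mount and watch the client's memory grow
cd /mnt/glusterfs    # placeholder for the fuse mount point
for i in $(seq 1 100000); do
    echo data > "file_$i"
done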

Comment 1 Csaba Henk 2017-08-02 20:25:38 UTC
Hi Ryan,

Can you please specify:

- your OS
- your Gluster version / git revision
- output of gluster vol info <your vol>
- your volfile 
- describe the pattern / method of creating files.

Thanks,
Csaba

Comment 2 Danny Lee 2017-10-26 14:03:42 UTC
Hi Csaba,

We have a similar problem with the same steps to reproduce. It also looks very similar to https://bugzilla.redhat.com/show_bug.cgi?id=1501146.

OS: CentOS Linux 7 (Core)
Kernel: Linux 3.10.0-693.2.2.el7.x86_64
Architecture: x86-64

Gluster Version(s) tried: 3.10.5, 3.12.1, 3.12.2 (using rpm)
Gluster Volume Info Output:
Volume Name: node
Type: Distribute
Volume ID: 1e7f74fe-e0e9-48b9-b80b-f35959f39647
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: xxx.xxx.xxx.xxx:/usr/local/node/local-data/mirrored-data
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
performance.io-thread-count: 64
network.ping-timeout: 0
auth.allow: xxx.xxx.xxx.xxx

Pattern:
We used the smallfile script to create the files (https://github.com/bengland2/smallfile). The command we used was "./smallfile/smallfile_cli.py --top /usr/local/node/data/mirrored-data/test --threads 16 --file-size 16 --files 10000 --response-times Y".

glusterfs started with ~20 MB of memory. After we created the files, glusterfs was using ~450 MB. After 10 hours of idle, it seemed to stabilize around ~400 MB.
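
For reference, one way we check the glusterfs client process memory (assuming the figures above are the process's resident set size):

# RSS (KB) of the running glusterfs processes
ps -C glusterfs -o rss,cmd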

Our production sites are 2-node-with-arbiter and 3-node clusters, and they are hitting the same issue. For the 3-node clusters we can work around it with a rolling restart, but for the 2-node clusters we have to take a full outage, so it has become a big issue.

Comment 3 Danny Lee 2017-10-26 14:11:22 UTC
Created attachment 1343779 [details]
Gluster Volume File

Comment 4 Danny Lee 2017-10-26 17:11:12 UTC
It looks like reading files can also cause the memory to increase (roughly half the increase seen when writing). My steps were:

1. Delete all the files on the mount, remount the gluster fuse client, and clear the disk cache. At this point, the glusterfs process is around 20 MB.
2. Run the script to create 100k files.
3. At this point, the glusterfs process is around 450 MB.
4. Remount the gluster fuse client and clear the disk cache. At this point, the glusterfs process is around 20 MB again.
5. Do a "find ." on the mount.
6. At this point, the glusterfs process is around 225 MB.

Expected result: I would expect it to spike to 225 MB during the "find .", but then drop back to 20 MB once it finishes (plus any internal gluster file caching).
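
Roughly, the read-path sequence above looks like the following (server, volume and mount point names are placeholders for our setup):

umount /mnt/gluster
mount -t glusterfs server:/node /mnt/gluster    # remount the fuse client
echo 3 > /proc/sys/vm/drop_caches               # clear the kernel page/dentry/inode caches
find /mnt/gluster > /dev/null                   # walk the tree; client memory climbs to ~225 MB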

Comment 5 Shyamsundar 2018-10-23 14:55:09 UTC
Release 3.12 has been EOLed and this bug was still in the NEW state, hence moving the version to mainline so that it can be triaged and appropriate action taken.

Comment 6 Amar Tumballi 2019-06-18 10:28:49 UTC
We have fixed these issues in the latest releases; please use a glusterfs-6.x release.

Patch: https://review.gluster.org/19778
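
For example, with glusterfs-6.x the size of the fuse inode table's lru list can be capped at mount time via the lru-limit option added by that patch (the value below is only an example; server, volume and mount point are placeholders):

# cap the number of inodes kept on the fuse inode table's lru list
mount -t glusterfs -o lru-limit=65536 server:/node /mnt/gluster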

