Description of problem: the glusterfs FUSE mount point runs out of system memory.

Steps to Reproduce:
1. Mount the volume with the glusterfs FUSE client.
2. Keep creating files on that mount point.
3. The glusterfs process's memory usage keeps increasing until it consumes all system memory.

Expected results: the glusterfs FUSE client should disable the inode cache (LRU inodes) by default.
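A minimal sketch of the reproduction steps (server name, volume name and mount path are placeholders; the file-creation loop is just one way to drive the load):

  # mount the volume via the FUSE client
  mount -t glusterfs server1:/testvol /mnt/testvol

  # keep creating small files on the mount point
  for i in $(seq 1 100000); do
      dd if=/dev/zero of=/mnt/testvol/file_$i bs=16k count=1 2>/dev/null
  done

  # watch the resident memory of the glusterfs client process grow
  watch -n 5 'ps -o rss,cmd -C glusterfs'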
Hi Ryan,

Can you please specify:
- your OS
- your Gluster version / git revision
- output of gluster vol info <your vol>
- your volfile
- the pattern / method of creating files

Thanks,
Csaba
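For reference, something like the following should cover most of it (assuming a volume named testvol; the generated client volfile path and name can vary by version and distribution):

  cat /etc/os-release            # OS
  glusterfs --version            # Gluster version
  gluster volume info testvol    # volume info
  # the client volfile is usually generated under /var/lib/glusterd/vols/<volname>/ on the servers
  cat /var/lib/glusterd/vols/testvol/trusted-testvol.tcp-fuse.vol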
Hi Csaba,

We also have a similar problem with the same steps to reproduce. It looks very similar to https://bugzilla.redhat.com/show_bug.cgi?id=1501146 as well.

OS: CentOS Linux 7 (Core)
Kernel: Linux 3.10.0-693.2.2.el7.x86_64
Architecture: x86-64
Gluster version(s) tried: 3.10.5, 3.12.1, 3.12.2 (using rpm)

Gluster volume info output:

Volume Name: node
Type: Distribute
Volume ID: 1e7f74fe-e0e9-48b9-b80b-f35959f39647
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: xxx.xxx.xxx.xxx:/usr/local/node/local-data/mirrored-data
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
performance.io-thread-count: 64
network.ping-timeout: 0
auth.allow: xxx.xxx.xxx.xxx

Pattern: We used the smallfile scripts (https://github.com/bengland2/smallfile) to create the files. The command we used is:

./smallfile/smallfile_cli.py --top /usr/local/node/data/mirrored-data/test --threads 16 --file-size 16 --files 10000 --response-times Y

glusterfs started with ~20 MB of memory. After we created the files, glusterfs used ~450 MB. After 10 hours of idle use, it seems to be stabilizing around ~400 MB.

Our production sites are 2-node-with-arbiter and 3-node clusters, and they are having the same issue. For the 3-node clusters we are working around it with a rolling restart, but for the 2-node ones we have to take a full outage, so this has become a big issue.
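In case it helps, this is roughly how we measured the growth (assuming /var/run/gluster is the default statedump location, which may differ on other setups; pidof returns more than one PID if there are multiple mounts):

  # resident memory of the FUSE client before/after the smallfile run
  ps -o rss= -p $(pidof glusterfs)

  # trigger a statedump of the client process
  kill -USR1 $(pidof glusterfs)
  ls /var/run/gluster/glusterdump.*

  # the cached (lru) inode counters should be visible in the dump
  grep -i lru /var/run/gluster/glusterdump.*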
Created attachment 1343779 [details] Gluster Volume File
It also looks like reading files causes the memory to increase (roughly half as much as writing). My steps were:

1. Delete all the files on the mount, remount the gluster FUSE client and clear the disk cache. At this point the glusterfs process is around 20 MB.
2. Run the script to create 100k files.
3. At this point the glusterfs process is around 450 MB.
4. Remount the gluster FUSE client mount and clear the disk cache. At this point the glusterfs process is around 20 MB again.
5. Do a "find ." on the mount.
6. At this point the glusterfs process is around 225 MB.

Expected result: I would expect it to spike to 225 MB during the "find .", but then go back to 20 MB once it finishes (plus any internal gluster file caching).
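Rough commands for the read-side test above, using our mount from earlier (paths are specific to our setup, and the drop_caches step needs root):

  # remount the client and drop the kernel page/dentry/inode caches
  umount /usr/local/node/data/mirrored-data
  mount -t glusterfs xxx.xxx.xxx.xxx:/node /usr/local/node/data/mirrored-data
  echo 3 > /proc/sys/vm/drop_caches

  # walk the tree and compare the client's RSS before and after
  ps -o rss= -p $(pidof glusterfs)
  find /usr/local/node/data/mirrored-data/test > /dev/null
  ps -o rss= -p $(pidof glusterfs)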
Release 3.12 has been EOLd and this bug was still found to be in the NEW state, hence moving the version to mainline so it can be triaged and appropriate action taken.
This has been fixed in the latest releases; please use a glusterfs-6.x release.

Patch: https://review.gluster.org/19778
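A quick usage note: the change adds an lru-limit option for the FUSE client that caps the number of cached inodes. A sketch of how to use it (server and volume names below are placeholders; the exact default value may differ between releases):

  # cap the FUSE client's inode LRU, e.g. at 64k entries
  mount -t glusterfs -o lru-limit=65536 server1:/testvol /mnt/testvol

  # equivalent when starting the client directly
  glusterfs --volfile-server=server1 --volfile-id=testvol --lru-limit=65536 /mnt/testvol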