Description of problem:
The customer accesses a Gluster tiered volume over FUSE. When their batch application writes files to this tiered volume, more than 30 GB of memory is consumed on both the server and the client node.

Version-Release number of selected component (if applicable):
RHGS 3.2

How reproducible:
Every time, on the customer environment.

Steps to Reproduce:
1. Run the customer batch application to write files to the gluster volume.

Actual results:
Memory exhaustion occurs.

Expected results:
No memory exhaustion should occur.

Additional info:
Created attachment 1326952 [details] error shown
> One interesting thing is the HUGE number of inodes in use - 785929 active inodes.

Do you think that many inodes/files/directories could really be in active use at the time this statedump was taken? Note that these inodes may also represent directories, even though a file is accessed through a dentry structure. Also, the access need not come only from user-space applications; it could also be due to internal daemons such as tier promotion/demotion, self-heal, quotad, etc.
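To sanity-check a figure like that, the statedump can be scanned for its itable counters directly. Below is a minimal sketch in C that sums every line containing "active_size=" in a statedump file; the exact key names (e.g. xlator.mount.fuse.itable.active_size) are an assumption about the dump format and may vary across versions, so treat this as illustrative rather than a definitive parser.

/* Sketch: tally "active_size" counters from a glusterfs statedump file.
 * Assumes lines of the form "<prefix>.itable.active_size=<N>"; key names
 * are an assumption and may differ between gluster versions. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <statedump-file>\n", argv[0]);
        return 1;
    }

    FILE *fp = fopen(argv[1], "r");
    if (!fp) {
        perror("fopen");
        return 1;
    }

    char line[512];
    long total = 0;

    while (fgets(line, sizeof(line), fp)) {
        char *key = strstr(line, "active_size=");
        if (key) {
            long n = strtol(key + strlen("active_size="), NULL, 10);
            printf("%s", line);   /* print the raw line for context */
            total += n;
        }
    }
    fclose(fp);

    printf("total active inodes reported: %ld\n", total);
    return 0;
}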
Created attachment 1330479 [details] valgrind information
Created attachment 1330901 [details] The new valgrind log
Hi,

One more thing to add: we also need the server statedump for the volume CCIFL, since that was the volume reported to have high memory consumption.

Thanks,
Hari.
*** Bug 1497108 has been marked as a duplicate of this bug. ***
A patch that fixes the leak has been proposed, and there has been no further activity on the case. We are trying to fix all the known leaks upstream with ASan-based tests, etc. With this information, closing the bug; it will be reopened if there is more activity here. (The proposal to close this bug was made 5 months ago.)
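For reference, this is roughly what an ASan-based leak check looks like at the smallest scale: a deliberately leaked allocation that LeakSanitizer reports when the process exits. This is only an illustrative sketch of the technique, not the actual upstream test harness.

/* Minimal leak example to show how ASan/LeakSanitizer flags it.
 * Build with:  gcc -g -fsanitize=address leak.c -o leak
 * Running ./leak prints a "detected memory leaks" report at exit. */
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buf = malloc(64);   /* never freed: LeakSanitizer reports 64 bytes lost */
    if (buf)
        strcpy(buf, "leaked allocation");
    return 0;                 /* leak report is emitted at process exit */
}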
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days