Bug 763680 (GLUSTER-1948)

Summary: For each subvolume started, glusterfs process takes up around 30-35MB more memory
Product: [Community] GlusterFS
Component: nfs
Reporter: Raghavendra G <raghavendra>
Assignee: Raghavendra G <raghavendra>
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: low
Version: mainline
CC: gluster-bugs, shehjart
Hardware: All
OS: Linux
Doc Type: Bug Fix
Regression: RTNR
Mount Type: All
Documentation: DNR

Description Raghavendra G 2010-10-14 11:44:51 UTC
The NFS server allocates an inode table for each subvolume, and each inode table allocates mempools of around 200,000 entries each for inodes and dentries. This accounts for the relatively high memory usage per subvolume. One possible solution is to lower the number of entries in the inode and dentry mempools.

Comment 1 Shehjar Tikoo 2010-10-19 08:18:43 UTC
Just to be sure we fix the right bug: this should be seen for every glusterfs daemon, because that memory allocation comes from inode_table_new, where the value is hard-coded. Patch coming soon.

Comment 2 Anand Avati 2010-10-27 03:11:43 UTC
PATCH: http://patches.gluster.com/patch/5575 in master (core: Use lru_limit as count for inode and dentry mempool)