Bug 1269730

Summary: Sharding - Send inode forgets on _all_ shards if/when the protocol layer (FUSE/Gfapi) at the top sends a forget on the actual file
Product: [Community] GlusterFS
Reporter: Krutika Dhananjay <kdhananj>
Component: sharding
Assignee: Krutika Dhananjay <kdhananj>
Status: CLOSED CURRENTRELEASE
QA Contact: bugs <bugs>
Severity: high
Docs Contact:
Priority: high
Version: 3.7.6
CC: bugs
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.7.6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1252263
Environment:
Last Closed: 2015-11-17 05:59:41 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1252263
Bug Blocks: 1258386, 1275914

Description Krutika Dhananjay 2015-10-08 06:13:30 UTC
+++ This bug was initially created as a clone of Bug #1252263 +++

Description of problem:
=======================

Same as the subject. Failing to perform inode_forget() on the individual shards will lead to memory leaks, since FUSE (and possibly GFAPI as well) has no knowledge of the inodes associated with the individual shards. It is up to the shard translator to clean up these resources.
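For illustration only, here is a rough sketch of what such a cleanup could look like. The helper name shard_forget_shards(), its parameters, and the block-count bookkeeping are made up here (the actual change is in the patches linked below); inode_grep(), inode_forget() and inode_unref() are the libglusterfs inode-table primitives, and shards are named "<gfid>.<N>" under /.shard:

#include <stdio.h>
#include <uuid/uuid.h>
#include "inode.h"   /* libglusterfs: inode_grep/inode_forget/inode_unref */

/* Hypothetical sketch: drop the in-memory inodes of all shards of a file
 * once the protocol layer forgets the main inode. The helper and its
 * arguments are illustrative, not the actual fix. */
static void
shard_forget_shards (inode_t *dot_shard_inode, uuid_t gfid, int block_count)
{
        char      shard_name[256] = {0,};
        char      gfid_str[64]    = {0,};
        inode_t  *shard_inode     = NULL;
        int       i               = 0;

        uuid_unparse (gfid, gfid_str);

        /* Block 0 is the main file itself; shards 1..(block_count - 1)
         * live under /.shard as "<gfid>.<N>". */
        for (i = 1; i < block_count; i++) {
                snprintf (shard_name, sizeof (shard_name), "%s.%d",
                          gfid_str, i);

                /* Look up the shard's inode in the inode table, if it was
                 * ever resolved on this client. inode_grep() returns a
                 * referenced inode, hence the inode_unref() below. */
                shard_inode = inode_grep (dot_shard_inode->table,
                                          dot_shard_inode, shard_name);
                if (!shard_inode)
                        continue;

                /* nlookup == 0 tells the inode table to drop all lookups
                 * it is holding for this shard; then drop our reference. */
                inode_forget (shard_inode, 0);
                inode_unref (shard_inode);
        }
}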

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

--- Additional comment from Krutika Dhananjay on 2015-09-23 06:02:27 EDT ---

In addition to the above, the shard translator will also need to regulate the memory consumed by inode_t objects for the individual shards of different sharded files, so that an excessive number of inode_t objects in memory does not get the client process OOM-killed.
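One common way to bound this (a sketch under assumed names, not necessarily what the final patch does) is to keep resolved shard inodes on an LRU list with a hard cap, forgetting the least-recently-used entry whenever the cap is exceeded. The list macros come from glusterfs's own list.h; the cap, the context structs and shard_lru_add() are assumptions for illustration:

#include "inode.h"   /* libglusterfs: inode_forget/inode_unref */
#include "list.h"    /* libglusterfs: struct list_head and helpers */

#define SHARD_MAX_INODES 16384  /* assumed cap on resident shard inodes */

typedef struct shard_inode_ctx {
        struct list_head  ilist;  /* position on the LRU list; caller must
                                     INIT_LIST_HEAD() this at creation */
        inode_t          *inode;  /* the shard's inode */
} shard_inode_ctx_t;

typedef struct shard_priv {
        struct list_head  ilist_head;  /* LRU list head, oldest first */
        uint64_t          inode_count; /* shards currently resident; the
                                          caller increments this when a
                                          shard inode is first added */
} shard_priv_t;

/* Called (under the translator's lock, omitted here) each time a shard
 * inode is looked up or used: move it to the MRU end of the list and
 * evict the LRU entry if the cap has been exceeded. */
static void
shard_lru_add (shard_priv_t *priv, shard_inode_ctx_t *ctx)
{
        shard_inode_ctx_t *lru = NULL;

        /* Re-insertion at the tail makes the tail the most-recently-used
         * end, leaving the head as the eviction candidate. */
        list_del_init (&ctx->ilist);
        list_add_tail (&ctx->ilist, &priv->ilist_head);

        if (priv->inode_count <= SHARD_MAX_INODES)
                return;

        /* Evict the least-recently-used shard inode: unhook it from the
         * list, drop its lookups in the inode table, drop our reference. */
        lru = list_entry (priv->ilist_head.next, shard_inode_ctx_t, ilist);
        list_del_init (&lru->ilist);
        priv->inode_count--;

        inode_forget (lru->inode, 0);
        inode_unref (lru->inode);
}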

--- Additional comment from Krutika Dhananjay on 2015-09-30 02:35:37 EDT ---

http://review.gluster.org/#/c/12254/

Comment 1 Krutika Dhananjay 2015-10-08 06:19:01 UTC
http://review.gluster.org/#/c/12313/

Comment 2 Raghavendra Talur 2015-11-17 05:59:41 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.6, please open a new bug report.

glusterfs-3.7.6 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-November/024359.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user