Bug 1269730 - Sharding - Send inode forgets on _all_ shards if/when the protocol layer (FUSE/Gfapi) at the top sends a forget on the actual file
Summary: Sharding - Send inode forgets on _all_ shards if/when the protocol layer (FUSE/Gfapi) at the top sends a forget on the actual file
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: sharding
Version: 3.7.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Krutika Dhananjay
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1252263
Blocks: Gluster-HC-1 glusterfs-3.7.6
 
Reported: 2015-10-08 06:13 UTC by Krutika Dhananjay
Modified: 2015-11-17 05:59 UTC
CC List: 1 user

Fixed In Version: glusterfs-3.7.6
Doc Type: Bug Fix
Doc Text:
Clone Of: 1252263
Environment:
Last Closed: 2015-11-17 05:59:41 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Krutika Dhananjay 2015-10-08 06:13:30 UTC
+++ This bug was initially created as a clone of Bug #1252263 +++

Description of problem:
=======================

Same as subject. Failing to perform inode_forget() on the individual shards will also lead to memory leaks, since FUSE (and possibly GFAPI as well) has no knowledge of the inodes associated with individual shards. It is up to the shard translator to clean up these resources.
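
Below is a minimal, self-contained C sketch of the idea (plain C with simplified stand-in structures; names such as shard_inode_t, file_ctx_t and forget_shard() are illustrative and not the actual GlusterFS xlator API). When the base file's inode is forgotten by the protocol layer, the shard translator walks its private list of per-shard inodes and forgets each one:

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the per-shard inodes the shard translator resolves
 * internally; FUSE/GFAPI never see these, so only the shard layer
 * can release them. */
typedef struct shard_inode {
    int                 shard_num;
    struct shard_inode *next;
} shard_inode_t;

/* Per-file context kept by the shard layer, tracking its shards. */
typedef struct file_ctx {
    shard_inode_t *shards;
} file_ctx_t;

/* Stand-in for inode_forget()/inode_unref(): drop one shard inode. */
static void forget_shard(shard_inode_t *in)
{
    printf("forgetting shard %d\n", in->shard_num);
    free(in);
}

/* Fan the base file's forget out to every shard inode. Skipping this
 * loop leaks the shard inodes for the life of the client process. */
static void shard_forget_all(file_ctx_t *ctx)
{
    shard_inode_t *cur = ctx->shards;
    while (cur) {
        shard_inode_t *next = cur->next;
        forget_shard(cur);
        cur = next;
    }
    ctx->shards = NULL;
}

int main(void)
{
    file_ctx_t ctx = { NULL };

    /* Pretend three shards of one file were looked up earlier. */
    for (int i = 3; i >= 1; i--) {
        shard_inode_t *in = malloc(sizeof(*in));
        in->shard_num = i;
        in->next = ctx.shards;
        ctx.shards = in;
    }

    shard_forget_all(&ctx); /* a forget on the base file lands here */
    return 0;
}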

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

--- Additional comment from Krutika Dhananjay on 2015-09-23 06:02:27 EDT ---

In addition to the above, the shard translator will also need to regulate the memory consumed by inode_t objects for the individual shards of different sharded files, so that holding too many inode_t objects in memory does not get the client process OOM-killed.
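
A rough sketch of one way to bound that memory (a fixed-size LRU of shard inodes; the cap SHARD_LRU_LIMIT, the list type and the eviction helper are all assumed here for illustration, not taken from the actual patch). In a real translator the eviction would call inode_forget()/inode_unref() rather than free(), and the limit would likely be a tunable option:

#include <stdio.h>
#include <stdlib.h>

#define SHARD_LRU_LIMIT 4   /* assumed cap; a real xlator would tune this */

typedef struct shard_inode {
    int id;
    struct shard_inode *prev, *next;   /* doubly linked LRU list */
} shard_inode_t;

typedef struct lru {
    shard_inode_t *head, *tail;   /* head = most recent, tail = oldest */
    int count;
} lru_t;

static void lru_unlink(lru_t *l, shard_inode_t *n)
{
    if (n->prev) n->prev->next = n->next; else l->head = n->next;
    if (n->next) n->next->prev = n->prev; else l->tail = n->prev;
    n->prev = n->next = NULL;
    l->count--;
}

static void lru_push_front(lru_t *l, shard_inode_t *n)
{
    n->prev = NULL;
    n->next = l->head;
    if (l->head) l->head->prev = n; else l->tail = n;
    l->head = n;
    l->count++;
}

/* Touch an inode on every access; evict (forget) the coldest one
 * whenever the list grows past the cap, so total inode_t memory
 * stays bounded no matter how many shards are accessed. */
static void lru_access(lru_t *l, shard_inode_t *n, int is_new)
{
    if (!is_new)
        lru_unlink(l, n);
    lru_push_front(l, n);
    if (l->count > SHARD_LRU_LIMIT) {
        shard_inode_t *victim = l->tail;
        lru_unlink(l, victim);
        printf("evicting (forgetting) shard inode %d\n", victim->id);
        free(victim);   /* stands in for inode_forget() + unref */
    }
}

int main(void)
{
    lru_t l = { NULL, NULL, 0 };

    /* Resolve six shard inodes; with a cap of 4, shards 1 and 2
     * get evicted as the oldest entries. */
    for (int i = 1; i <= 6; i++) {
        shard_inode_t *n = calloc(1, sizeof(*n));
        n->id = i;
        lru_access(&l, n, 1);
    }
    return 0;
}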

--- Additional comment from Krutika Dhananjay on 2015-09-30 02:35:37 EDT ---

http://review.gluster.org/#/c/12254/

Comment 1 Krutika Dhananjay 2015-10-08 06:19:01 UTC
http://review.gluster.org/#/c/12313/

Comment 2 Raghavendra Talur 2015-11-17 05:59:41 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.6, please open a new bug report.

glusterfs-3.7.6 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-November/024359.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

