Bug 1252263 - Sharding - Send inode forgets on _all_ shards if/when the protocol layer (FUSE/Gfapi) at the top sends a forget on the actual file
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: sharding
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Krutika Dhananjay
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: 1269730
 
Reported: 2015-08-11 06:00 UTC by Krutika Dhananjay
Modified: 2016-06-16 13:30 UTC
CC List: 2 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned As: 1269730
Environment:
Last Closed: 2016-06-16 13:30:14 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Description Krutika Dhananjay 2015-08-11 06:00:08 UTC
Description of problem:
=======================

Same as the subject. Failing to perform inode_forget() on the individual shards will also lead to memory leaks, since FUSE (and possibly GFAPI as well) has no knowledge of the inodes associated with the individual shards; it is up to the sharding translator to clean up these resources. A hedged sketch of what such a cleanup could look like is included under "Additional info" below.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
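
For illustration only, here is a minimal sketch of what such a cleanup could look like, assuming the shard translator keeps a list of refs to the shard inodes it has resolved in the base inode's ctx. shard_inode_entry_t, the ctx layout and shard_forget() as written here are assumptions made up for this sketch, not the actual fix; inode_ctx_del(), inode_forget(), inode_unref() and the list macros are the real libglusterfs APIs.

    #include "xlator.h"
    #include "inode.h"
    #include "list.h"
    #include "mem-pool.h"

    /* Illustrative only: one entry per shard inode the xlator has
     * looked up on behalf of the base file. */
    typedef struct {
            struct list_head list;
            inode_t         *inode;  /* ref held by the shard xlator */
    } shard_inode_entry_t;

    /* Candidate cbks->forget handler: invoked when the inode table
     * destroys the base inode after the protocol layer's forget. */
    int32_t
    shard_forget (xlator_t *this, inode_t *base_inode)
    {
            uint64_t             ctx_addr = 0;
            struct list_head    *head     = NULL;
            shard_inode_entry_t *entry    = NULL;
            shard_inode_entry_t *tmp      = NULL;

            inode_ctx_del (base_inode, this, &ctx_addr);
            if (!ctx_addr)
                    return 0;

            head = (struct list_head *) (uintptr_t) ctx_addr;

            list_for_each_entry_safe (entry, tmp, head, list) {
                    list_del_init (&entry->list);
                    /* Drop the lookup count FUSE/GFAPI never knew
                     * about, then the ref this xlator was holding. */
                    inode_forget (entry->inode, 0);
                    inode_unref (entry->inode);
                    GF_FREE (entry);
            }

            GF_FREE (head);
            return 0;
    }

Such a handler would be wired in through the xlator's cbks table (struct xlator_cbks cbks = { .forget = shard_forget, };), which the inode table invokes when it finally destroys the base inode.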

Comment 1 Krutika Dhananjay 2015-09-23 10:02:27 UTC
In addition to the above, the shard translator will also need to regulate the memory consumed by inode_t objects for the individual shards of different sharded files, so that too many inode_t objects in memory do not get the client process OOM-killed. A rough sketch of one way to do this follows.
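
To make the idea concrete, here is a minimal sketch of such a cap, assuming a bounded LRU list in the xlator's private struct. SHARD_LRU_LIMIT, the shard_priv_t fields, shard_inode_entry_t and shard_lru_add() are all made up for illustration (a full implementation would also need each entry to sit on both the per-base-inode list from the sketch in the description and this LRU, which is glossed over here); only LOCK()/UNLOCK(), the list macros, inode_forget() and inode_unref() are real libglusterfs primitives.

    #include "xlator.h"
    #include "inode.h"
    #include "list.h"
    #include "locking.h"
    #include "mem-pool.h"

    #define SHARD_LRU_LIMIT 16384  /* assumed cap, illustrative */

    /* Same illustrative entry type as in the description's sketch. */
    typedef struct {
            struct list_head list;
            inode_t         *inode;
    } shard_inode_entry_t;

    /* Illustrative private struct: only the LRU bits are shown. */
    typedef struct {
            struct list_head lru_head;  /* least-recent at the head */
            uint32_t         lru_count;
            gf_lock_t        lock;
    } shard_priv_t;

    /* Called whenever a shard inode is resolved: append it to the
     * LRU and, once over the cap, forget the least recently used
     * shard inode so inode_t objects cannot pile up in memory. */
    static void
    shard_lru_add (xlator_t *this, shard_inode_entry_t *entry)
    {
            shard_priv_t        *priv   = this->private;
            shard_inode_entry_t *oldest = NULL;

            LOCK (&priv->lock);
            {
                    list_add_tail (&entry->list, &priv->lru_head);
                    priv->lru_count++;

                    if (priv->lru_count > SHARD_LRU_LIMIT) {
                            oldest = list_entry (priv->lru_head.next,
                                                 shard_inode_entry_t,
                                                 list);
                            list_del_init (&oldest->list);
                            priv->lru_count--;
                    }
            }
            UNLOCK (&priv->lock);

            if (oldest) {
                    /* Evict outside the lock: drop the lookups and
                     * the xlator's ref so the inode can be freed. */
                    inode_forget (oldest->inode, 0);
                    inode_unref (oldest->inode);
                    GF_FREE (oldest);
            }
    }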

Comment 2 Krutika Dhananjay 2015-09-30 06:35:37 UTC
http://review.gluster.org/#/c/12254/

Comment 3 Niels de Vos 2016-06-16 13:30:14 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed in glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

