Bug 1269730 - Sharding - Send inode forgets on _all_ shards if/when the protocol layer (FUSE/Gfapi) at the top sends a forget on the actual file
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: sharding
Version: 3.7.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Assigned To: Krutika Dhananjay
QA Contact: bugs@gluster.org
Keywords: Triaged
Depends On: 1252263
Blocks: Gluster-HC-1 glusterfs-3.7.6
Reported: 2015-10-08 02:13 EDT by Krutika Dhananjay
Modified: 2015-11-17 00:59 EST
CC List: 1 user

See Also:
Fixed In Version: glusterfs-3.7.6
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1252263
Environment:
Last Closed: 2015-11-17 00:59:41 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Krutika Dhananjay 2015-10-08 02:13:30 EDT
+++ This bug was initially created as a clone of Bug #1252263 +++

Description of problem:
=======================

Same as the subject. Failing to perform inode_forget() on the individual shards will also lead to memory leaks, since FUSE (and possibly GFAPI as well) has no knowledge of the inodes associated with the individual shards; those inodes are resolved internally by the shard translator, so it is up to the sharding translator to clean up these resources (a sketch of the idea follows).
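
For context, every GlusterFS translator can register a .forget callback that is invoked when an inode is destroyed; the cleanup described above would hang off that hook. Below is a minimal sketch of the idea, not the code from the patches linked further down: the shard_base_ctx_t layout and the way it reaches the shard inodes are hypothetical, invented for illustration, while inode_forget(), inode_unref(), inode_ctx_del() and the .forget cbk are actual libglusterfs interfaces.

/* Illustrative sketch only -- not the actual shard.c change.
 * Assumes a hypothetical per-inode context that caches a reference
 * to every shard inode resolved for the base file. Builds only
 * inside the glusterfs source tree. */
#include "xlator.h"
#include "mem-pool.h"

typedef struct shard_base_ctx {
        inode_t **shard_inodes;  /* refs taken when shards were resolved */
        int       shard_count;
} shard_base_ctx_t;

int32_t
shard_forget (xlator_t *this, inode_t *inode)
{
        uint64_t          ctx_uint = 0;
        shard_base_ctx_t *ctx      = NULL;
        int               i        = 0;

        inode_ctx_del (inode, this, &ctx_uint);
        if (!ctx_uint)
                return 0;
        ctx = (shard_base_ctx_t *) (uintptr_t) ctx_uint;

        /* FUSE/GFAPI never saw the shard inodes, so no forgets will
         * ever arrive for them from above; drop them here instead. */
        for (i = 0; i < ctx->shard_count; i++) {
                if (!ctx->shard_inodes[i])
                        continue;
                inode_forget (ctx->shard_inodes[i], 0);
                inode_unref (ctx->shard_inodes[i]);
        }

        GF_FREE (ctx->shard_inodes);
        GF_FREE (ctx);
        return 0;
}

struct xlator_cbks cbks = {
        .forget = shard_forget,
};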

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

--- Additional comment from Krutika Dhananjay on 2015-09-23 06:02:27 EDT ---

In addition to the above, the shard translator will also need to regulate the memory consumed by the inode_t objects for individual shards of different sharded files, so that too many inode_t objects in memory do not result in the client process getting OOM-killed.
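
A rough sketch of one way to do this, LRU eviction from within the translator, follows. It is not the patch posted in the following comment: shard_priv_t, shard_entry_t, shard_lru_touch() and SHARD_LRU_LIMIT are hypothetical names invented here, while the list_head macros, inode_forget() and inode_unref() are real libglusterfs primitives.

#include "xlator.h"
#include "list.h"
#include "mem-pool.h"

#define SHARD_LRU_LIMIT 16384  /* assumed cap on resident shard inodes */

typedef struct shard_entry {
        struct list_head  list;
        inode_t          *inode;  /* ref held while the entry is cached */
} shard_entry_t;

typedef struct shard_priv {
        struct list_head  lru_head;  /* LRU at head, MRU at tail */
        int               lru_count;
} shard_priv_t;

/* Called whenever a shard inode is resolved or re-used: move its
 * entry to the MRU end, then forget and unref inodes from the LRU
 * end until the count is back under the cap, so the inode table can
 * actually destroy them. Real code would hold a lock around this. */
static void
shard_lru_touch (shard_priv_t *priv, shard_entry_t *entry,
                 gf_boolean_t is_new)
{
        shard_entry_t *victim = NULL;

        if (is_new)
                priv->lru_count++;
        else
                list_del_init (&entry->list);
        list_add_tail (&entry->list, &priv->lru_head);

        while (priv->lru_count > SHARD_LRU_LIMIT) {
                victim = list_entry (priv->lru_head.next,
                                     shard_entry_t, list);
                list_del_init (&victim->list);
                priv->lru_count--;

                inode_forget (victim->inode, 0);
                inode_unref (victim->inode);
                GF_FREE (victim);
        }
}

Evicting from inside the translator keeps the resident shard-inode count bounded no matter how many sharded files the application touches, which is what prevents the OOM-kill scenario described above.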

--- Additional comment from Krutika Dhananjay on 2015-09-30 02:35:37 EDT ---

http://review.gluster.org/#/c/12254/
Comment 1 Krutika Dhananjay 2015-10-08 02:19:01 EDT
http://review.gluster.org/#/c/12313/
Comment 2 Raghavendra Talur 2015-11-17 00:59:41 EST
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.6, please open a new bug report.

glusterfs-3.7.6 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-users/2015-November/024359.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
