Bug 1560969 - Garbage collect inactive inodes in fuse-bridge
Summary: Garbage collect inactive inodes in fuse-bridge
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: fuse
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1511779
Blocks:
 
Reported: 2018-03-27 11:31 UTC by Amar Tumballi
Modified: 2019-03-25 16:30 UTC
CC List: 7 users

Fixed In Version: glusterfs-6.0
Doc Type: Enhancement
Doc Text:
Clone Of: 1511779
Environment:
Last Closed: 2019-03-25 16:30:19 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
Gluster.org Gerrit 19778: fuse: add --lru-limit option (Merged, last updated 2018-12-14 17:35:20 UTC)

Comment 1 Amar Tumballi 2018-03-27 11:31:38 UTC
Description of problem:
Currently the fuse-bridge sets the lru limit of its inode table to infinite, so it depends entirely on the kernel to send FORGET requests, even for inodes that are no longer active. Instead, we can garbage collect inodes on the itable's lru list ourselves: by calling inode/entry invalidation on such an inode, we prompt the kernel to send a forget for it.
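
To illustrate the mechanism, here is a minimal, self-contained C sketch of an lru-limited inode table: inactive inodes go onto an LRU list, and once the list exceeds its limit the least recently used entry is invalidated so the kernel drops its reference. All names here (itable_t, gc_inode_t, itable_lru_add, kernel_forget) are hypothetical stand-ins, not the actual libglusterfs/fuse-bridge API; the real patch operates on the itable's lru list and uses inode/entry invalidation to trigger the kernel FORGET.

/*
 * Sketch of LRU-limited garbage collection of inactive inodes.
 * Hypothetical types and functions for illustration only.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct gc_inode {
        unsigned long    ino;          /* inode number known to the kernel */
        struct gc_inode *prev, *next;  /* position in the LRU list         */
} gc_inode_t;

typedef struct {
        gc_inode_t *head;       /* most recently used inactive inode  */
        gc_inode_t *tail;       /* least recently used inactive inode */
        size_t      lru_count;  /* inodes currently on the LRU list   */
        size_t      lru_limit;  /* 0 means unlimited (old behaviour)  */
} itable_t;

/* Stand-in for asking the kernel to drop its reference, e.g. via an
 * inode/entry invalidation notification in the real fuse-bridge. */
static void kernel_forget(gc_inode_t *inode)
{
        printf("invalidate inode %lu -> kernel will send FORGET\n", inode->ino);
        free(inode);
}

/* Put an inactive inode at the head of the LRU list, then prune from
 * the tail until the table is back under its lru_limit. */
static void itable_lru_add(itable_t *t, gc_inode_t *inode)
{
        inode->prev = NULL;
        inode->next = t->head;
        if (t->head)
                t->head->prev = inode;
        t->head = inode;
        if (!t->tail)
                t->tail = inode;
        t->lru_count++;

        while (t->lru_limit && t->lru_count > t->lru_limit) {
                gc_inode_t *victim = t->tail;   /* least recently used */

                t->tail = victim->prev;
                if (t->tail)
                        t->tail->next = NULL;
                else
                        t->head = NULL;
                t->lru_count--;

                kernel_forget(victim);
        }
}

int main(void)
{
        itable_t table = { .lru_limit = 2 };

        /* Adding five inactive inodes forces the three oldest ones out. */
        for (unsigned long ino = 1; ino <= 5; ino++) {
                gc_inode_t *inode = calloc(1, sizeof(*inode));
                inode->ino = ino;
                itable_lru_add(&table, inode);
        }

        /* Release whatever is still on the list before exiting. */
        while (table.head) {
                gc_inode_t *next = table.head->next;
                free(table.head);
                table.head = next;
        }
        return 0;
}

The Gerrit change linked above exposes this behaviour through a new --lru-limit option on the fuse mount; consult the glusterfs-6.0 documentation for the exact default and accepted values.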

Comment 2 Worker Ant 2018-03-27 11:35:22 UTC
REVIEW: https://review.gluster.org/19778 (fuse: add --lru-limit option) posted (#2) for review on master by Amar Tumballi

Comment 3 Worker Ant 2018-12-14 17:35:20 UTC
REVIEW: https://review.gluster.org/19778 (fuse: add --lru-limit option) posted (#36) for review on master by Amar Tumballi

Comment 4 Shyamsundar 2019-03-25 16:30:19 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

