+++ This bug was initially created as a clone of Bug #1711240 +++

Description of problem:

gf_nfs_mt_inode_ctx leak

[nfs/server.nfs-server - usage-type gf_nfs_mt_inode_ctx memusage]
size=2628907920
num_allocs=109537830
max_size=2628907920
max_num_allocs=109537830
total_allocs=109537830

Version-Release number of selected component (if applicable):
latest master branch

How reproducible:
heavy read or write activity

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from INVALID USER on 2019-05-17 10:18:54 UTC ---

This bug is automatically being proposed for the next minor release of Red Hat Gluster Storage by setting the release flag 'rhgs-3.5.0' to '?'. If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Xie Changlong on 2019-05-17 10:20:17 UTC ---

Unlinking a file never invokes nfs_forget.

--- Additional comment from Amar Tumballi on 2019-05-17 10:35:53 UTC ---

Which Gluster release version is this?

--- Additional comment from Worker Ant on 2019-05-17 10:57:54 UTC ---

REVIEW: https://review.gluster.org/22738 (inode: fix wrong loop count in __inode_ctx_free) posted (#1) for review on master by Xie Changlong

--- Additional comment from Xie Changlong on 2019-05-17 11:03:41 UTC ---

@Amar Tumballi: tested gNFS with master branch 836e5b6b; nfs_forget is never called. It seems glusterfs-3.12.2-47.el7 also has this problem.

--- Additional comment from Worker Ant on 2019-05-23 08:49:54 UTC ---

REVIEW: https://review.gluster.org/22738 (inode: fix wrong loop count in __inode_ctx_free) merged (#4) on master by Xavi Hernandez
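Note: based on the patch subject ("inode: fix wrong loop count in __inode_ctx_free"), the leak class appears to be a destruction loop that iterates over fewer context slots than were actually allocated, so the per-translator context (and its forget callback, here nfs_forget) for a slot past the stale bound is never released. The following standalone C model is a minimal sketch of that pattern only; the identifiers (slot_t, ctx_free_buggy, ctx_free_fixed, NUM_SLOTS, LOOP_BOUND) are illustrative and are not the actual glusterfs structures or the exact change in https://review.gluster.org/22738.

```c
/* Minimal standalone model of the "wrong loop count on free" leak class.
 * All names here are hypothetical stand-ins, not glusterfs identifiers. */
#include <stdio.h>
#include <stdlib.h>

#define NUM_SLOTS 4  /* slots actually allocated per inode (analogous to ctxcount) */
#define LOOP_BOUND 3 /* stale bound used by the buggy loop (analogous to xl_count) */

typedef struct {
    void *value; /* per-translator context, e.g. the gNFS inode ctx */
} slot_t;

/* Buggy variant: frees only the first LOOP_BOUND slots, so a context stored
 * in a later slot is never released and its cleanup never runs. */
static void ctx_free_buggy(slot_t *slots)
{
    for (int i = 0; i < LOOP_BOUND; i++) {
        free(slots[i].value);
        slots[i].value = NULL;
    }
}

/* Fixed variant: iterates over every allocated slot. */
static void ctx_free_fixed(slot_t *slots)
{
    for (int i = 0; i < NUM_SLOTS; i++) {
        free(slots[i].value);
        slots[i].value = NULL;
    }
}

int main(void)
{
    slot_t slots[NUM_SLOTS] = {0};

    for (int i = 0; i < NUM_SLOTS; i++)
        slots[i].value = malloc(32); /* stand-in for an inode ctx allocation */

    ctx_free_buggy(slots); /* leaves slots[3].value allocated: the leak */
    printf("after buggy free, slot 3 is %s\n",
           slots[3].value ? "still allocated (leaked)" : "freed");

    ctx_free_fixed(slots); /* cleans up the remaining slot */
    printf("after fixed free, slot 3 is %s\n",
           slots[3].value ? "still allocated" : "freed");
    return 0;
}
```

Repeated per inode on a read/write-heavy gNFS workload, such a missed slot would show up exactly as the ever-growing gf_nfs_mt_inode_ctx memusage counters quoted in the description.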
Since we plan to move from gNFS to Ganesha, can we close this one? Although, looking at the fix - https://review.gluster.org/#/c/glusterfs/+/22738/ - it sounds serious and important enough to backport if we have not done so yet.
Will this be backported to 6? We are seeing similar leaks in several hundred 6.7 and 4.0.2 clusters, but we aren't ready to pull the trigger on 7.x just yet.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, please open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2572