Description of problem:
This is an RFE for adding per-xlator ref counting for inodes.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
REVIEW: http://review.gluster.org/13736 (inode: Add per xlator ref count for inode) posted (#3) for review on master by Poornima G (pgurusid)
REVIEW: http://review.gluster.org/13736 (inode: Add per xlator ref count for inode) posted (#4) for review on master by Poornima G (pgurusid)
REVIEW: http://review.gluster.org/13736 (inode: Add per xlator ref count for inode) posted (#5) for review on master by Poornima G (pgurusid)
REVIEW: http://review.gluster.org/13736 (inode: Add per xlator ref count for inode) posted (#6) for review on master by Poornima G (pgurusid)
COMMIT: http://review.gluster.org/13736 committed in master by Raghavendra G (rgowdapp)
------
commit c9239db7961afd648f1fa3310e5ce9b8281c8ad2
Author: Poornima G <pgurusid>
Date:   Tue Mar 15 03:14:16 2016 -0400

    inode: Add per xlator ref count for inode

    Debugging inode ref leaks is very difficult, as there is no info
    except for the ref count on the inode. Hence this patch is a first
    step towards debugging inode ref leaks. With this patch, the
    statedump carries additional info that tells the ref count taken by
    each xlator on the inode.

    Change-Id: I7802f7e7b13c04eb4d41fdf52d5469fd2c2a185a
    BUG: 1325531
    Signed-off-by: Poornima G <pgurusid>
    Reviewed-on: http://review.gluster.org/13736
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Smoke: Gluster Build System <jenkins.org>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report. glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/