+++ This bug was initially created as a clone of Bug #1423373 +++

Description of problem:
Some regression test cases in afr were failing because of this bug.

RCA:
Currently the inode ref count is guarded by inode_table->lock, while inode_ctx is guarded by inode->lock. With the new patch [1], inode_ref was modified to update inode_ctx so that the ref count is tracked per xlator. Thus inode_ref, performed under inode_table->lock, modifies inode_ctx, which must only be modified under inode->lock.

Solution:
When an inode is created, an inode_ctx holder is allocated for all the xlators. Hence, in inode_ctx_set, instead of using the first free index in the inode_ctx holder, we can use a predetermined index for every xlator in the graph.

[1] http://review.gluster.org/13736

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Worker Ant on 2017-02-17 03:17:16 EST ---

REVIEW: https://review.gluster.org/16622 (libglusterfs: Fix a crash due to race between inode_ctx_set and inode_ref) posted (#4) for review on master by Poornima G (pgurusid)
REVIEW: https://review.gluster.org/16655 (libglusterfs: Fix a crash due to race between inode_ctx_set and inode_ref) posted (#1) for review on release-3.10 by Poornima G (pgurusid)
*** Bug 1423065 has been marked as a duplicate of this bug. ***
REVIEW: https://review.gluster.org/16655 (libglusterfs: Fix a crash due to race between inode_ctx_set and inode_ref) posted (#2) for review on release-3.10 by Poornima G (pgurusid)
COMMIT: https://review.gluster.org/16655 committed in release-3.10 by Shyamsundar Ranganathan (srangana)

------

commit d10c5375b33520f36fd6acbd47b617d43f529ca2
Author: Poornima G <pgurusid>
Date: Wed Feb 15 11:18:31 2017 +0530

libglusterfs: Fix a crash due to race between inode_ctx_set and inode_ref

Issue:
Currently the inode ref count is guarded by inode_table->lock, while inode_ctx is guarded by inode->lock. With the new patch [1], inode_ref was modified to update inode_ctx so that the ref count is tracked per xlator. Thus inode_ref, performed under inode_table->lock, modifies inode_ctx, which must only be modified under inode->lock.

Solution:
When an inode is created, an inode_ctx holder is allocated for all the xlators. Hence, in inode_ctx_set, instead of using the first free index in the inode_ctx holder, we can use a predetermined index for every xlator in the graph.

Credits: Pranith K <pkarampu>

[1] http://review.gluster.org/13736

> Reviewed-on: https://review.gluster.org/16622
> Smoke: Gluster Build System <jenkins.org>
> NetBSD-regression: NetBSD Build System <jenkins.org>
> Reviewed-by: Niels de Vos <ndevos>
> CentOS-regression: Gluster Build System <jenkins.org>
> Reviewed-by: Pranith Kumar Karampuri <pkarampu>

Change-Id: I1bfe111c211fcc4fcd761bba01dc87c4c69b5170
BUG: 1423385
Signed-off-by: Poornima G <pgurusid>
Reviewed-on: https://review.gluster.org/16655
NetBSD-regression: NetBSD Build System <jenkins.org>
Smoke: Gluster Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Shyamsundar Ranganathan <srangana>
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-devel/2017-February/052173.html
[2] https://www.gluster.org/pipermail/gluster-users/