REVIEW: http://review.gluster.org/15892 (afr: Fix the EIO that can occur in afr_inode_refresh as a result of cache invalidation(upcall).) posted (#1) for review on master by Poornima G (pgurusid)
REVIEW: http://review.gluster.org/15892 (afr: Fix the EIO that can occur in afr_inode_refresh as a result of cache invalidation(upcall).) posted (#2) for review on master by Poornima G (pgurusid)
REVIEW: http://review.gluster.org/15892 (afr: Fix the EIO that can occur in afr_inode_refresh as a result of cache invalidation(upcall).) posted (#3) for review on master by Poornima G (pgurusid)
COMMIT: http://review.gluster.org/15892 committed in master by Pranith Kumar Karampuri (pkarampu) ------

commit 570aefeb280e53e98cb5060cf384f1d74379a521
Author: Poornima G <pgurusid>
Date: Mon Nov 21 11:49:35 2016 +0530

    afr: Fix the EIO that can occur in afr_inode_refresh as a result of cache invalidation (upcall).

    Issue:
    ------
    When a cache invalidation is received as a result of a changed pending xattr, the read_subvol is reset. Consider the below chain of execution:

    CHILD_DOWN
    ...
    afr_readv
    ...
    afr_inode_refresh
    ...
    afr_inode_read_subvol_reset <- as a result of a pending xattr set by some other client, GF_EVENT_UPCALL will be sent
    afr_refresh_done -> this results in an EIO, as the read subvol was reset by the end of the afr_inode_refresh

    Solution:
    ---------
    When GF_EVENT_UPCALL is received, instead of resetting read_subvol, set a need_refresh variable in the inode_ctx; the next time someone starts a txn, need_refresh also needs to be checked along with the event gen.

    Change-Id: Ifda21a7a8039b8874215e1afa4bdf20f7d991b58
    BUG: 1396952
    Signed-off-by: Poornima G <pgurusid>
    Reviewed-on: http://review.gluster.org/15892
    Reviewed-by: Ravishankar N <ravishankar>
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
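The sketch below is a minimal illustration of the approach described in the commit message, not the actual AFR code: on GF_EVENT_UPCALL the inode context is marked as needing a refresh instead of having its read_subvol reset, and that flag is checked (alongside the event generation) when the next transaction starts. The struct layout and helper names here are assumptions made for illustration; only the need_refresh/event-gen semantics come from the commit message above.

/*
 * Hypothetical, simplified inode context -- the real afr_inode_ctx_t
 * in xlators/cluster/afr has more fields and different locking.
 */
#include <stdbool.h>
#include <pthread.h>

typedef struct {
        pthread_mutex_t lock;
        int             read_subvol;   /* last known good read subvolume   */
        int             event_gen;     /* bumped on child up/down events   */
        bool            need_refresh;  /* set on upcall cache invalidation */
} sketch_inode_ctx_t;

/* On GF_EVENT_UPCALL (cache invalidation): do NOT touch read_subvol,
 * just remember that a refresh is due before the next transaction. */
static void
sketch_inode_need_refresh_set (sketch_inode_ctx_t *ctx)
{
        pthread_mutex_lock (&ctx->lock);
        ctx->need_refresh = true;
        pthread_mutex_unlock (&ctx->lock);
}

/* At transaction start: refresh if either the event generation changed
 * or an upcall flagged the inode since the last refresh. */
static bool
sketch_txn_refresh_needed (sketch_inode_ctx_t *ctx, int current_event_gen)
{
        bool refresh = false;

        pthread_mutex_lock (&ctx->lock);
        if (ctx->need_refresh || ctx->event_gen != current_event_gen) {
                refresh           = true;
                ctx->need_refresh = false;
                ctx->event_gen    = current_event_gen;
        }
        pthread_mutex_unlock (&ctx->lock);

        return refresh;
}

With this shape, a racing upcall no longer invalidates the read_subvol underneath an in-flight afr_inode_refresh (which is what produced the EIO in afr_refresh_done); it only schedules a refresh for the next transaction.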
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.10.0, please open a new bug report. glusterfs-3.10.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/