REVIEW: http://review.gluster.org/14310 (cluster/afr : Do post-op in case of symmetric errors) posted (#1) for review on master by Anuradha Talur (atalur)
REVIEW: http://review.gluster.org/14310 (cluster/afr : Do post-op in case of symmetric errors) posted (#2) for review on master by Anuradha Talur (atalur)
REVIEW: http://review.gluster.org/14310 (cluster/afr : Do post-op in case of symmetric errors) posted (#3) for review on master by Anuradha Talur (atalur)
COMMIT: http://review.gluster.org/14310 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit 53d16409f933110da11338ef26d1fa7b2e921cec
Author: Anuradha Talur <atalur>
Date: Fri May 13 15:34:06 2016 +0530

    cluster/afr : Do post-op in case of symmetric errors

    In afr_changelog_post_op_now(), if there was any error (op_ret < 0),
    post-op was not being done, even when the errors were symmetric and
    there were no "failed subvols".

    Fix: when the errors are symmetric, perform post-op.

    How the bug was found: in a 1 x 3 volume with shard and write-behind
    enabled, writes into a file while one brick was down caused the value
    of the trusted.afr.dirty xattr on the .shard directory to keep
    increasing, because pre-op was done but post-op was not. This
    incorrectly showed .shard to be in split-brain.

    RCA: with write-behind on, multiple writes sent to offsets lying in
    the same shard can cause the same shard file to be created more than
    once, with the second creation failing with op_ret < 0 and
    op_errno = EEXIST. Because op_ret was negative, afr would not do
    post-op, so the trusted.afr.dirty xattr was never decremented, and
    the .shard directory appeared to be in split-brain.

    Change-Id: I711bdeaa1397244e6a7790e96f0c84501798fc59
    BUG: 1335652
    Signed-off-by: Anuradha Talur <atalur>
    Reviewed-on: http://review.gluster.org/14310
    Reviewed-by: Pranith Kumar Karampuri <pkamapu>
    Tested-by: Pranith Kumar Karampuri <pkarampu>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: Ravishankar N <ravishankar>
    CentOS-regression: Gluster Build System <jenkins.com>
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.9.0, please open a new bug report. glusterfs-3.9.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/gluster-users/2016-November/029281.html [2] https://www.gluster.org/pipermail/gluster-users/