+++ This bug was initially created as a clone of Bug #1539358 +++

Description of problem:
We currently don't have a roll-back/undoing of post-ops if quorum is not met. Though the FOP is still unwound with failure, the xattrs remain on the disk. Due to these partial post-ops and partial heals (healing only when 2 bricks are up), we can end up in split-brain purely from the afr xattrs point of view, i.e. each brick is blamed by at least one of the others. These scenarios are hit when there is frequent connect/disconnect of the client/shd to the bricks.

--- Additional comment from Worker Ant on 2018-01-28 03:40:54 EST ---

REVIEW: https://review.gluster.org/19349 (afr: don't treat false cases of split-brain as genuine) posted (#1) for review on master by Ravishankar N

--- Additional comment from Worker Ant on 2018-02-01 09:18:21 EST ---

COMMIT: https://review.gluster.org/19349 committed in master by "Ravishankar N" <ravishankar> with a commit message:

afr: don't treat all cases all bricks being blamed as split-brain

Problem:
We currently don't have a roll-back/undoing of post-ops if quorum is not met. Though the FOP is still unwound with failure, the xattrs remain on the disk. Due to these partial post-ops and partial heals (healing only when 2 bricks are up), we can end up in split-brain purely from the afr xattrs point of view, i.e. each brick is blamed by at least one of the others. These scenarios are hit when there is frequent connect/disconnect of the client/shd to the bricks while I/O or heal is in progress.

Fix:
Instead of undoing the post-op, pick a source based on the xattr values. If 2 bricks blame one, the blamed one must be treated as a sink. If there is no majority, all are sources. Once we pick a source, self-heal will then do the heal instead of erroring out due to split-brain.

Change-Id: I3d0224b883eb0945785ade0e9697a1c828aec0ae
BUG: 1539358
Signed-off-by: Ravishankar N <ravishankar>

--- Additional comment from Shyamsundar on 2018-06-20 13:58:10 EDT ---

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/
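For context on the fix described in the commit message above: the idea is to count, for each brick, how many of the other bricks blame it via the afr pending xattrs; a brick blamed by a majority of its peers is treated as a sink, and if no brick has a majority against it, all bricks stay sources. The following is a minimal, hypothetical C sketch of that decision only. It is not the actual afr/self-heal code; pick_sources, blames and NUM_BRICKS are illustrative names, not identifiers from the GlusterFS tree.

/*
 * Hypothetical sketch of the source-picking idea from the commit message:
 * count how many peers blame each brick; a brick blamed by a majority of
 * the other bricks becomes a sink, otherwise it stays a source.
 */
#include <stdio.h>

#define NUM_BRICKS 3

/* blames[i][j] != 0 means brick i's pending xattr blames brick j. */
static void
pick_sources(int blames[NUM_BRICKS][NUM_BRICKS], int is_source[NUM_BRICKS])
{
    for (int j = 0; j < NUM_BRICKS; j++) {
        int blame_count = 0;

        for (int i = 0; i < NUM_BRICKS; i++)
            if (i != j && blames[i][j])
                blame_count++;

        /*
         * Blamed by a majority of the other bricks => sink.
         * If no brick crosses this threshold ("no majority"),
         * every entry stays 1, i.e. all bricks remain sources.
         */
        is_source[j] = (blame_count <= (NUM_BRICKS - 1) / 2);
    }
}

int
main(void)
{
    /* Example: bricks 0 and 1 both blame brick 2 -> brick 2 becomes the sink. */
    int blames[NUM_BRICKS][NUM_BRICKS] = {
        { 0, 0, 1 },
        { 0, 0, 1 },
        { 0, 0, 0 },
    };
    int is_source[NUM_BRICKS];

    pick_sources(blames, is_source);
    for (int j = 0; j < NUM_BRICKS; j++)
        printf("brick %d: %s\n", j, is_source[j] ? "source" : "sink");
    return 0;
}

With the example matrix above (bricks 0 and 1 both blame brick 2), brick 2 ends up as the sink. With a blame cycle such as 0 blaming 1, 1 blaming 2 and 2 blaming 0, no brick reaches a majority, so all three remain sources and self-heal can proceed instead of reporting split-brain.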
REVIEW: https://review.gluster.org/20433 (afr: don't treat all cases all bricks being blamed as split-brain) posted (#1) for review on release-3.12 by Ravishankar N
COMMIT: https://review.gluster.org/20433 committed in release-3.12 by "jiffin tony Thottan" <jthottan> with a commit message:

afr: don't treat all cases all bricks being blamed as split-brain

Problem:
We currently don't have a roll-back/undoing of post-ops if quorum is not met. Though the FOP is still unwound with failure, the xattrs remain on the disk. Due to these partial post-ops and partial heals (healing only when 2 bricks are up), we can end up in split-brain purely from the afr xattrs point of view, i.e. each brick is blamed by at least one of the others. These scenarios are hit when there is frequent connect/disconnect of the client/shd to the bricks while I/O or heal is in progress.

Fix:
Instead of undoing the post-op, pick a source based on the xattr values. If 2 bricks blame one, the blamed one must be treated as a sink. If there is no majority, all are sources. Once we pick a source, self-heal will then do the heal instead of erroring out due to split-brain.

Change-Id: I3d0224b883eb0945785ade0e9697a1c828aec0ae
BUG: 1597123
Signed-off-by: Ravishankar N <ravishankar>
(cherry picked from commit 0e6e8216823c2d9dafb81aae0f6ee3497c23d140)
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.12, please open a new bug report.

glusterfs-3.12.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-July/000105.html
[2] https://www.gluster.org/pipermail/gluster-users/