afr: fix bug-1363721.t failure

Backport of https://review.gluster.org/#/c/20036/

Problem: In the .t, when the only good brick was brought down, writes on the fd were still succeeding on the bad bricks. The in-flight split-brain check marked the write as a failure, but since the write succeeded on all the bad bricks, afr_txn_nothing_failed() returned true and we unwound the writev to DHT with success, only catching the failure later in the background post-op.

Fix: Do not wind the FOP phase if the write_subvol (which is populated with the readable subvols obtained in the pre-op cbk) does not contain at least one good brick that was up when the transaction started.

Change-Id: I4a1fef4569609c31cffeaef591a64c10870e8d0b
Signed-off-by: Ravishankar N <ravishankar>
REVIEW: https://review.gluster.org/20471 (afr: fix bug-1363721.t failure) posted (#1) for review on release-3.12 by Ravishankar N
REVIEW: https://review.gluster.org/20471 (afr: fix bug-1363721.t failure) posted (#2) for review on release-3.12 by Ravishankar N
REVIEW: https://review.gluster.org/20471 (afr: fix bug-1363721.t failure) posted (#3) for review on release-3.12 by Ravishankar N
COMMIT: https://review.gluster.org/20471 committed in release-3.12 by "Ravishankar N" <ravishankar> with a commit message:

afr: fix bug-1363721.t failure

Backport of https://review.gluster.org/#/c/20036/

Note: We need to update the inode context's write_subvol even in the case of compound fops. This is not present in master and 4.1, since compound FOPs were removed there.

Problem: In the .t, when the only good brick was brought down, writes on the fd were still succeeding on the bad bricks. The in-flight split-brain check marked the write as a failure, but since the write succeeded on all the bad bricks, afr_txn_nothing_failed() returned true and we unwound the writev to DHT with success, only catching the failure later in the background post-op.

Fix: Do not wind the FOP phase if the write_subvol (which is populated with the readable subvols obtained in the pre-op cbk) does not contain at least one good brick that was up when the transaction started.

Change-Id: I4a1fef4569609c31cffeaef591a64c10870e8d0b
BUG: 1598720
Signed-off-by: Ravishankar N <ravishankar>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.12.12, please open a new bug report. glusterfs-3.12.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2018-July/000105.html [2] https://www.gluster.org/pipermail/gluster-users/