+++ This bug was initially created as a clone of Bug #1318895 +++

Description of problem:
1. Create a file in gfid split-brain on the root of the volume.
2. `gluster vol heal volname info` shows '/' as "Possibly undergoing heal" instead of "Is in split-brain".
3. `gluster v heal volname info split-brain` shows zero entries.

--- Additional comment from Vijay Bellur on 2016-03-18 01:52:19 EDT ---

REVIEW: http://review.gluster.org/13772 (afr: Detect split-brain during afr_selfheal_unlocked_inspect) posted (#1) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Mike McCune on 2016-03-28 19:24:30 EDT ---

This bug was accidentally moved from POST to MODIFIED via an error in automation; please see mmccune with any questions.

--- Additional comment from Worker Ant on 2017-06-25 12:25:15 EDT ---

REVIEW: https://review.gluster.org/13772 (glfsheal: prevent background self-heals) posted (#2) for review on master by Ravishankar N (ravishankar)

--- Additional comment from Worker Ant on 2017-06-30 07:08:37 EDT ---

COMMIT: https://review.gluster.org/13772 committed in master by Jeff Darcy (jeff.us)

------

commit b4db625d0ccb4fdc6537ed9f6e8ebeaffd1c4873
Author: Ravishankar N <ravishankar>
Date:   Sun Jun 25 21:50:09 2017 +0530

    glfsheal: prevent background self-heals

    Problem: For a file in gfid split-brain, the parent directory ('/'
    during testing) was detected as possibly undergoing heal instead of
    split-brain in the `heal info` output. For the same reason it was
    also missing from the `info split-brain` output. When `glfsheal` was
    run, the lookup on '/' triggered a background self-heal, so the
    processing of '/' during `heal info` failed to acquire locks with
    errno=EAGAIN.

    Fix: Set background-self-heal-count to zero while launching glfsheal.
    Change-Id: I153a7c75af71f213a4eefacf504a0f9806c528a5
    BUG: 1318895
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: https://review.gluster.org/13772
    CentOS-regression: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jeff.us>
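The race described in the commit message can be sketched as a toy model: a lookup that launches a background self-heal takes the inode lock first, so the subsequent non-blocking lock attempt by heal-info fails with EAGAIN and the entry is misclassified; with background-self-heal-count set to zero, nothing races the inspection. The sketch below is a simplified Python illustration, not GlusterFS code — `Inode`, `lookup`, and `heal_info` are hypothetical names chosen for this example.

```python
import errno
import threading

class Inode:
    """Toy inode with a non-blocking lock, loosely mirroring how
    heal-info takes trylocks on an entry before inspecting it."""
    def __init__(self):
        self._lock = threading.Lock()

    def trylock(self):
        # 0 on success, EAGAIN if another healer already holds the lock.
        return 0 if self._lock.acquire(blocking=False) else errno.EAGAIN

    def unlock(self):
        self._lock.release()

def lookup(inode, background_self_heal_count):
    """Simulate a lookup that may launch a background self-heal.
    The fix makes glfsheal run with background-self-heal-count=0,
    so lookup never grabs the lock behind heal-info's back."""
    if background_self_heal_count > 0:
        inode.trylock()  # the background heal now holds the lock

def heal_info(inode):
    """Classify the entry the way heal-info does: a trylock failure
    with EAGAIN is reported as "Possibly undergoing heal" instead of
    the entry being inspected for split-brain."""
    if inode.trylock() == errno.EAGAIN:
        return "Possibly undergoing heal"
    inode.unlock()
    return "Is in split-brain"

# Before the fix: lookup triggers a background heal, heal-info hits EAGAIN.
root = Inode()
lookup(root, background_self_heal_count=8)
print(heal_info(root))   # Possibly undergoing heal

# After the fix: no background heal is launched, '/' is inspected properly.
root = Inode()
lookup(root, background_self_heal_count=0)
print(heal_info(root))   # Is in split-brain
```

The count of 8 above is only an illustrative pre-fix value; the point is that any non-zero count lets a background heal win the race for the lock.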
REVIEW: https://review.gluster.org/17677 (glfsheal: prevent background self-heals) posted (#1) for review on release-3.9 by Ravishankar N (ravishankar)
REVIEW: https://review.gluster.org/17678 (glfsheal: prevent background self-heals) posted (#1) for review on release-3.8 by Ravishankar N (ravishankar)
COMMIT: https://review.gluster.org/17678 committed in release-3.8 by Niels de Vos (ndevos)

------

commit 3d5ef4e2cf31f611e3cbcd865c0367bab44a9552
Author: Ravishankar N <ravishankar>
Date:   Sun Jun 25 21:50:09 2017 +0530

    glfsheal: prevent background self-heals

    Problem: For a file in gfid split-brain, the parent directory ('/'
    during testing) was detected as possibly undergoing heal instead of
    split-brain in the `heal info` output. For the same reason it was
    also missing from the `info split-brain` output. When `glfsheal` was
    run, the lookup on '/' triggered a background self-heal, so the
    processing of '/' during `heal info` failed to acquire locks with
    errno=EAGAIN.

    Fix: Set background-self-heal-count to zero while launching glfsheal.

    > Reviewed-on: https://review.gluster.org/13772
    > CentOS-regression: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > Smoke: Gluster Build System <jenkins.org>
    > Reviewed-by: Jeff Darcy <jeff.us>
    (cherry picked from commit b4db625d0ccb4fdc6537ed9f6e8ebeaffd1c4873)

    Change-Id: I153a7c75af71f213a4eefacf504a0f9806c528a5
    BUG: 1467272
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: https://review.gluster.org/17678
    Smoke: Gluster Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Reviewed-by: Niels de Vos <ndevos>
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.8.14, please open a new bug report.

glusterfs-3.8.14 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-July/000077.html
[2] https://www.gluster.org/pipermail/gluster-users/