+++ This bug was initially created as a clone of Bug #1358976 +++

Description of problem:

Problem: It is not guaranteed that the self-heal daemon will apply a new option as soon as "volume set" is executed, because all the command guarantees is that the process is notified of the change in the volfile. Shd still needs to fetch the volfile and reconfigure. If the next "volume heal" command arrives before that reconfigure happens, the heal won't happen.

Fix: Restart shd to make sure it has the option loaded with the new value.

--- Additional comment from Vijay Bellur on 2016-07-21 22:24:32 EDT ---

REVIEW: http://review.gluster.org/14978 (tests: Fix spurious failures with split-brain-favorite-child-policy.t) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

--- Additional comment from Vijay Bellur on 2016-07-22 17:32:46 EDT ---

COMMIT: http://review.gluster.org/14978 committed in master by Jeff Darcy (jdarcy)

------

commit b1559c2d1cfcff76df5870563a84cc22c752cc58
Author: Pranith Kumar K <pkarampu>
Date:   Fri Jul 22 07:48:27 2016 +0530

    tests: Fix spurious failures with split-brain-favorite-child-policy.t

    Problem: It is not guaranteed that the self-heal daemon will apply a
    new option as soon as "volume set" is executed, because all the command
    guarantees is that the process is notified of the change in the
    volfile. Shd still needs to fetch the volfile and reconfigure. If the
    next "volume heal" command arrives before that reconfigure happens,
    the heal won't happen.

    Fix: Restart shd to make sure it has the option loaded with the new
    value.
    BUG: 1358976
    Change-Id: I3ed30ebbec17bd06caa632e79e9412564f431b19
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/14978
    Smoke: Gluster Build System <jenkins.org>
    Reviewed-by: Krutika Dhananjay <kdhananj>
    Tested-by: Jeff Darcy <jdarcy>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Jeff Darcy <jdarcy>
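In practice the fix amounts to forcing a self-heal daemon restart after each "volume set", so the test never races against the volfile reload. A minimal sketch of that pattern in the style of the project's .t regression tests (the helper names TEST, EXPECT_WITHIN, $CLI, $V0, $PROCESS_UP_TIMEOUT, and glustershd_up_status are assumed from the glusterfs test framework, not quoted from the actual patch):

    # Hypothetical excerpt in the style of split-brain-favorite-child-policy.t;
    # helper names are assumptions, not taken verbatim from the commit.

    # Set the option; this only *notifies* shd that the volfile changed.
    TEST $CLI volume set $V0 cluster.favorite-child-policy mtime

    # Restart shd so the new option value is definitely loaded,
    # then wait until the daemon is back up.
    TEST $CLI volume start $V0 force
    EXPECT_WITHIN $PROCESS_UP_TIMEOUT "Y" glustershd_up_status

    # Only now is it safe to trigger the heal.
    TEST $CLI volume heal $V0

Without the restart-and-wait step, the subsequent "volume heal" can run before shd has reconfigured, which is exactly the spurious failure the patch addresses.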
REVIEW: http://review.gluster.org/15022 (tests: Fix spurious failures with split-brain-favorite-child-policy.t) posted (#1) for review on release-3.8 by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/15022 committed in release-3.8 by Pranith Kumar Karampuri (pkarampu)

------

commit 264c7496c875914fcaf8bded44d61e284633c719
Author: Pranith Kumar K <pkarampu>
Date:   Fri Jul 22 07:48:27 2016 +0530

    tests: Fix spurious failures with split-brain-favorite-child-policy.t

    Problem: It is not guaranteed that the self-heal daemon will apply a
    new option as soon as "volume set" is executed, because all the command
    guarantees is that the process is notified of the change in the
    volfile. Shd still needs to fetch the volfile and reconfigure. If the
    next "volume heal" command arrives before that reconfigure happens,
    the heal won't happen.

    Fix: Restart shd to make sure it has the option loaded with the new
    value.

    >BUG: 1358976
    >Change-Id: I3ed30ebbec17bd06caa632e79e9412564f431b19
    >Signed-off-by: Pranith Kumar K <pkarampu>
    >Reviewed-on: http://review.gluster.org/14978
    >Smoke: Gluster Build System <jenkins.org>
    >Reviewed-by: Krutika Dhananjay <kdhananj>
    >Tested-by: Jeff Darcy <jdarcy>
    >NetBSD-regression: NetBSD Build System <jenkins.org>
    >CentOS-regression: Gluster Build System <jenkins.org>
    >Reviewed-by: Jeff Darcy <jdarcy>

    BUG: 1360573
    Change-Id: I09e097dbdc2cae659ad1617d336945eb804b09a5
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/15022
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Ravishankar N <ravishankar>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.2, please open a new bug report.

glusterfs-3.8.2 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/announce/2016-August/000058.html
[2] https://www.gluster.org/pipermail/gluster-users/