+++ This bug was initially created as a clone of Bug #1216039 +++

Description of problem:

We have seen the issue below while testing open/lock state recovery after IP failover in an nfs-ganesha HA setup: the cluster is not put into grace before the IP fails over. There appears to be a timing issue here; "pcs constraint colocation order" does not seem to have completely solved it.

--- Additional comment from Kaleb KEITHLEY on 2015-05-07 09:08:27 EDT ---

Yes, there's a race between setting grace and failing over the virt IP. See http://review.gluster.org/10490
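For context, the colocation/order constraints referred to above are pacemaker rules of roughly this shape (a minimal sketch; the resource names nfs-grace-clone and node1-cluster_ip-1 are illustrative assumptions, not taken from this bug):

  # ordering rule: start the grace resource before the virtual IP on the takeover node
  pcs constraint order start nfs-grace-clone then node1-cluster_ip-1

  # colocation rule: keep the virtual IP on a node where the grace resource is running
  pcs constraint colocation add node1-cluster_ip-1 with nfs-grace-clone

The race reported here is that ordering/colocation constraints alone did not guarantee the cluster was actually in grace before clients reconnected to the failed-over IP.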
REVIEW: http://review.gluster.org/10646 (common-ha: fix race between setting grace and virt IP fail-over) posted (#1) for review on master by Kaleb KEITHLEY (kkeithle)
REVIEW: http://review.gluster.org/10646 (common-ha: fix race between setting grace and virt IP fail-over) posted (#2) for review on master by Kaleb KEITHLEY (kkeithle)
COMMIT: http://review.gluster.org/10646 committed in master by Kaleb KEITHLEY (kkeithle)
------
commit 751c4583bbaa59ebfe492ab6ecfab3108711f4c5
Author: Kaleb S. KEITHLEY <kkeithle>
Date:   Thu May 7 09:22:04 2015 -0400

    common-ha: fix race between setting grace and virt IP fail-over

    Also send stderr output of `pcs resource {create,delete} $node-dead_ip-1`
    to /dev/null to avoid flooding the logs

    Change-Id: I29d526429cc4d7521971cd5e2e69bfb64bfc5ca9
    BUG: 1219485
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle>
    Reviewed-on: http://review.gluster.org/10646
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: soumya k <skoduri>
    Reviewed-by: Meghana M <mmadhusu>
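The stderr redirection mentioned in the commit message would look roughly like this in a shell script (a sketch only; the resource agent ocf:heartbeat:Dummy and the surrounding details are assumptions, only the $node-dead_ip-1 resource name comes from the commit text):

  node=server1   # hypothetical node name

  # create the placeholder dead_ip resource; pcs warnings on stderr are discarded
  pcs resource create ${node}-dead_ip-1 ocf:heartbeat:Dummy 2>/dev/null

  # ... later, delete it again without flooding the logs
  pcs resource delete ${node}-dead_ip-1 2>/dev/null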
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune with any questions.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user