Description of problem:

Tomas Jelinek tells me in IRC there's no need to stop or remove nodes one by one during teardown; `pcs cluster destroy --all` does it all in one step.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
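For context, a minimal sketch of the teardown pattern this report asks to drop: stopping and removing each peer before destroying the local cluster. The node names and the loop are illustrative only, not the actual ganesha-ha.sh code.

    # Hypothetical example of the one-by-one teardown this bug wants to avoid.
    # Node names are placeholders; the real script derives them from its config.
    for node in node2 node3 node4; do
        pcs cluster stop "${node}"         # stop pacemaker/corosync on the peer
        pcs cluster node remove "${node}"  # drop the peer from the cluster config
    done
    pcs cluster destroy                    # finally tear down the local node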
REVIEW: https://review.gluster.org/16737 (common-ha: no need to remove nodes one-by-one in teardown) posted (#2) for review on release-3.10 by Kaleb KEITHLEY (kkeithle)
COMMIT: https://review.gluster.org/16737 committed in release-3.10 by Kaleb KEITHLEY (kkeithle)

------

commit a355e57b2bb9aa6da2a66daf3206222cbf8b3b95
Author: Kaleb S. KEITHLEY <kkeithle>
Date:   Thu Feb 23 12:36:24 2017 -0500

    common-ha: no need to remove nodes one-by-one in teardown

    `pcs cluster destroy --all` does all that's necessary, and prevents
    `pcs cluster setup ...` from failing the next time a cluster is set up.
    That failure appears to happen when the pacemaker and corosync files
    aren't all deleted on the other nodes in the cluster, per Tomas Jelinek
    in IRC #cluster.

    Change-Id: Iff24e3732f91f3b96a0b00b8199aa42446e60938
    BUG: 1426323
    Signed-off-by: Kaleb S. KEITHLEY <kkeithle>
    Reviewed-on: https://review.gluster.org/16737
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    Reviewed-by: soumya k <skoduri>
    CentOS-regression: Gluster Build System <jenkins.org>
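As a hedged illustration of the approach the commit describes (the actual change lives in the common-ha ganesha-ha.sh script; cluster and node names here are placeholders), the whole per-node loop collapses to a single command, which also removes the pacemaker and corosync configuration on every node so a later setup starts clean:

    # Sketch only: tear down the cluster on all nodes in one step.
    pcs cluster destroy --all

    # A subsequent setup (pcs 0.9 syntax, placeholder cluster/node names)
    # then no longer fails on stale configuration left on the other nodes.
    pcs cluster setup --name ganesha-ha node1 node2 node3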
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.10.1, please open a new bug report.

glusterfs-3.10.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-April/030494.html
[2] https://www.gluster.org/pipermail/gluster-users/