+++ This bug was initially created as a clone of Bug #1311874 +++

Description of problem:

In a two-node cluster, if one of the nodes (say N2) goes through a reinstallation, then once glusterd is restarted on that node its UUID is reinitialized. The N1-to-N2 handshake fails and the peer status output on N2 still says "peer rejected", which is the correct behaviour. However, if the user tries to probe N1 from N2, the peer probe goes through when it should have failed.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

--- Additional comment from Vijay Bellur on 2016-02-25 04:22:53 EST ---

REVIEW: http://review.gluster.org/13519 (glusterd: reject peer probe from a reinstalled node) posted (#1) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Vijay Bellur on 2016-02-25 22:50:39 EST ---

REVIEW: http://review.gluster.org/13519 (glusterd: reject peer probe from a reinstalled node) posted (#2) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Vijay Bellur on 2016-03-02 23:16:47 EST ---

REVIEW: http://review.gluster.org/13519 (glusterd: reject peer probe from a reinstalled node) posted (#3) for review on master by Atin Mukherjee (amukherj)

--- Additional comment from Vijay Bellur on 2016-03-04 07:10:31 EST ---

COMMIT: http://review.gluster.org/13519 committed in master by Jeff Darcy (jdarcy)
------
commit 85b9e8ebb89ecadd30a364853e1e7c706dcce968
Author: Atin Mukherjee <amukherj>
Date:   Thu Feb 25 14:42:48 2016 +0530

    glusterd: reject peer probe from a reinstalled node

    In a cluster, if a node (say N1) goes through an OS reinstallation, then
    probing some other node in the cluster from N1 does not fail, because in
    gd_validate_mgmt_hndsk_req () the uuid and hostname checks are done
    separately; there should be one more check where both conditions are
    evaluated together.

    Steps to create the problem:
    - N1 probes N2
    - bring down the glusterd instance on N2
    - remove /var/lib/glusterd/* from N2
    - restart the glusterd instance on N2
    - execute gluster peer probe N1 from N2

    The validations in gd_validate_mgmt_hndsk_req () have been improved to
    handle this special case.

    Change-Id: I3ba5d8e243bae07a7a6743d01b019e7014d39171
    BUG: 1311874
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/13519
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Jeff Darcy <jdarcy>
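To make the fix easier to follow, here is a minimal, self-contained C sketch of the combined check the commit message describes: reject the management handshake when the probing node's hostname matches a peer the receiving node already knows, but the UUID it presents is different (i.e. the node was wiped or reinstalled). This is not the actual glusterd code; known_peer_t, the in-memory peer table, and validate_handshake() are hypothetical stand-ins for glusterd's own peerinfo handling in gd_validate_mgmt_hndsk_req ().

#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for an entry in the peer list that the
 * receiving node (N1 in the bug report) already knows about. */
typedef struct {
    char hostname[256];
    char uuid[64];          /* UUID kept as a string for simplicity */
} known_peer_t;

/* Evaluate hostname and uuid together instead of separately:
 * return 0 to accept the handshake, -1 to reject it. */
static int
validate_handshake(const known_peer_t *peers, size_t npeers,
                   const char *req_hostname, const char *req_uuid)
{
    for (size_t i = 0; i < npeers; i++) {
        if (strcmp(peers[i].hostname, req_hostname) == 0 &&
            strcmp(peers[i].uuid, req_uuid) != 0) {
            /* Same host, different identity: the peer was reinstalled,
             * so the probe must not go through. */
            return -1;
        }
    }
    return 0;
}

int
main(void)
{
    /* N1's view of the cluster: it already knows N2 under its old UUID. */
    known_peer_t peers[] = {
        { "N2", "c0ffee00-0000-0000-0000-000000000001" },
    };

    /* N2 comes back after the reinstall with a freshly generated UUID
     * and probes N1 again. */
    int ret = validate_handshake(peers, 1, "N2",
                                 "deadbeef-0000-0000-0000-000000000002");
    printf("handshake %s\n", ret ? "rejected" : "accepted");
    return 0;
}

The point of the sketch is the combination: each condition on its own can look legitimate, and only checking hostname and UUID together exposes the reinstalled node.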
REVIEW: http://review.gluster.org/13619 (glusterd: reject peer probe from a reinstalled node) posted (#1) for review on release-3.7 by Atin Mukherjee (amukherj)
COMMIT: http://review.gluster.org/13619 committed in release-3.7 by Jeff Darcy (jdarcy)
------
commit 43652d54591e13234e1556e563866f7ecc2b56d6
Author: Atin Mukherjee <amukherj>
Date:   Thu Feb 25 14:42:48 2016 +0530

    glusterd: reject peer probe from a reinstalled node

    Backport of http://review.gluster.org/13519

    In a cluster, if a node (say N1) goes through an OS reinstallation, then
    probing some other node in the cluster from N1 does not fail, because in
    gd_validate_mgmt_hndsk_req () the uuid and hostname checks are done
    separately; there should be one more check where both conditions are
    evaluated together.

    Steps to create the problem:
    - N1 probes N2
    - bring down the glusterd instance on N2
    - remove /var/lib/glusterd/* from N2
    - restart the glusterd instance on N2
    - execute gluster peer probe N1 from N2

    The validations in gd_validate_mgmt_hndsk_req () have been improved to
    handle this special case.

    Change-Id: I3ba5d8e243bae07a7a6743d01b019e7014d39171
    BUG: 1315147
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/13519
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Jeff Darcy <jdarcy>
    Reviewed-on: http://review.gluster.org/13619
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.10, please open a new bug report.

glusterfs-3.7.10 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2016-April/026164.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user