Description of problem:
Changing the UUID on a node (by deleting /var/lib/glusterd/glusterd.info and restarting glusterd) still allows the peer's status to become "Connected" in the output of `gluster peer status`.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0alpha-2.fc18.x86_64
glusterfs-fuse-3.4.0alpha-2.fc18.x86_64
glusterfs-server-3.4.0alpha-2.fc18.x86_64

How reproducible:
1. Node 1: service glusterd start
2. Node 2: service glusterd start
3. Node 1: gluster peer probe node2
4. Node 1: gluster peer status
5. Node 2: service glusterd stop
6. Node 2: rm -f /var/lib/glusterd/glusterd.info
7. Node 2: service glusterd start
8. Node 2: cat /var/lib/glusterd/glusterd.info
9. Node 1: gluster peer status

You will notice that the UUID has been re-created on Node 2, but the node is still allowed to automatically reconnect with its peers, and Node 1 still shows the old UUID.
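The core of the check in steps 6-9 is comparing Node 2's UUID before and after glusterd.info is removed and regenerated. A minimal sketch of that comparison is below; it uses a temporary file mimicking the `UUID=<uuid>` line format of /var/lib/glusterd/glusterd.info, with `uuidgen` standing in for glusterd's regeneration of the UUID on restart (assumption: `uuidgen` from util-linux is available).

```shell
# Create a temp stand-in for /var/lib/glusterd/glusterd.info,
# which stores the node identity as a line "UUID=<uuid>".
INFO=$(mktemp)
printf 'UUID=8e9ab2b9-6d8a-4f12-9f6e-111111111111\noperating-version=2\n' > "$INFO"

# Capture the UUID value before the simulated restart.
uuid_before=$(awk -F= '/^UUID=/{print $2}' "$INFO")

# Simulate the bug scenario: glusterd.info is removed and glusterd
# writes a fresh UUID on start (faked here with uuidgen).
printf 'UUID=%s\noperating-version=2\n' "$(uuidgen)" > "$INFO"
uuid_after=$(awk -F= '/^UUID=/{print $2}' "$INFO")

# Report whether the node identity changed across the restart.
if [ "$uuid_before" != "$uuid_after" ]; then
    echo "UUID changed: $uuid_before -> $uuid_after"
fi
rm -f "$INFO"
```

Running the same extraction against the real glusterd.info on Node 2 before and after step 7 shows the identity change that Node 1's `gluster peer status` fails to notice.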
> You will notice that the UUID has been re-created on Node 2 but it's still allowed to automatically reconnect with peers.

This is happening because node2 was 'practically' empty even before /var/lib/glusterd was deleted. If node2 had held any volume information, the reconnect would not have happened. Do you still feel this behavior is a bug?

> and in Node1 it still shows the old UUID.

This is a bug for sure. Will fix it.
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5. This bug was filed against the 3.4 release and will not be fixed in a 3.4 version any more.

Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. If updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs". If there is no response by the end of the month, this bug will be closed automatically.
GlusterFS 3.4.x has reached end-of-life. If this bug still exists in a later release, please reopen it and change the version, or open a new bug.