Somehow my GlusterFS installation got into a state where the peers are "Rejected" but still connected. All peers except "joej-linux" have bricks in volumes. This appears to be caused by a checksum difference in the volumes. On joej-linux, I stopped glusterd, deleted the vols subfolders, and started glusterd again. The vols folders were recreated, and one of the peers that previously showed "Peer Rejected" now shows "Peer in Cluster". What I expect is needed is a "force" option for "gluster volume sync" which would allow existing volume(s) to be overwritten from the peer that has the volume options set correctly.
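For reference, the manual workaround described above can be sketched as the following shell steps. This is a sketch, not an official procedure: it assumes the default glusterd working directory /var/lib/glusterd and a systemd-managed glusterd service, and the backup directory name is my own choice.

```shell
#!/bin/sh
# Sketch of the manual recovery attempted on the rejected peer (joej-linux).
# Assumes the default glusterd state directory /var/lib/glusterd.

# 1. Stop glusterd so the volume definitions are not in use.
systemctl stop glusterd

# 2. Move (rather than delete) the local volume definitions aside,
#    so they can be restored if anything goes wrong.
mv /var/lib/glusterd/vols /var/lib/glusterd/vols.bak

# 3. Restart glusterd; it fetches the volume definitions from the
#    other peers, recreating the vols directory.
systemctl start glusterd

# 4. Verify the peer state; the goal is "Peer in Cluster".
gluster peer status
```

Moving the directory instead of deleting it is a safety measure; the original report deleted the vols subfolders outright.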
Bug 865700 was filed for a similar reason; hopefully the fix there should be enough to handle this issue?
Let us take a cluster of 2 nodes, namely Node1 and Node2. Suppose the following events happened in the cluster:

On Node1:
---------
t0 - gluster peer probe Node2
t1 - gluster volume create vol Node1:brick1 Node2:brick2
t2 - gluster volume start vol

On Node2:
---------
t3 - glusterd dies

On Node1:
---------
t4 - gluster volume set vol write-behind off
t5 - glusterd dies

On Node2:
---------
t6 - glusterd is restarted
t7 - gluster volume set vol read-ahead off

On Node1:
---------
t8 - glusterd is restarted

[At this point the glusterd peers are in the Rejected state.]

Assuming we want the volumes to be in the state as perceived by Node1, run on Node2: gluster volume sync Node1 $vol, for each $vol in the cluster.

PS: The following patches fix the issue:

master:
1) http://review.gluster.com/4624
2) http://review.gluster.com/4815 (pending review)

release-3.4:
1) http://review.gluster.com/4643
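The per-volume sync step above can be sketched as a small loop. This is a sketch under the assumptions of the scenario: it is run on Node2, Node1 is the peer whose configuration we trust, and "gluster volume list" is available to enumerate the volumes.

```shell
#!/bin/sh
# Sketch: on Node2, sync every volume's configuration from Node1,
# the peer whose volume options we consider authoritative.

for vol in $(gluster volume list); do
    # Pull this volume's definition from Node1, overwriting the
    # local copy held by this glusterd.
    gluster volume sync Node1 "$vol"
done
```

Note that "gluster volume sync Node1 all" can also sync every volume in one command, at the cost of per-volume control.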
This should be fixed with 3.4 as mentioned in the previous comment. *** This bug has been marked as a duplicate of bug 950048 ***