Red Hat Bugzilla – Full Text Bug Listing
Summary: Add a method to resolve peers in rejected state due to volume checksum difference
Product: [Community] GlusterFS
Reporter: Joe Julian <joe>
Component: glusterd
Assignee: krishnan parthasarathi <kparthas>
Status: CLOSED DUPLICATE
Version: 3.1.7
CC: gluster-bugs, ndevos, nsathyan, rwheeler
Doc Type: Enhancement
Last Closed: 2014-10-21 09:48:56 EDT
Description Joe Julian 2011-09-29 18:58:08 EDT
Somehow my GlusterFS installation got into a state where the peers are rejected but still connected. All peers except "joej-linux" have bricks in volumes. This appears to be caused by a checksum difference in the volumes.

On joej-linux, I stopped glusterd, deleted the vols subfolders, and started glusterd. The vols folders were recreated, and one of the peers that read "Peer Rejected" now says "Peer in Cluster".

What I expect is needed is a "force" option to gluster volume sync, which would allow an existing volume (or volumes) to be overwritten from the peer whose volume options are set correctly.
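The mechanism behind the "Peer Rejected" state can be sketched as follows. This is a simplified model, not glusterd's actual code: glusterd keeps each volume's configuration on disk (under the vols directory the reporter deleted) and peers compare a checksum of it during the handshake; the function and file contents below are illustrative assumptions.

```python
import hashlib

def volume_checksum(info_text: str) -> str:
    """Checksum over a volume's serialized configuration (illustrative)."""
    return hashlib.sha256(info_text.encode()).hexdigest()

# Two peers whose copies of the same volume's config drifted apart:
node1_info = "type=replicate\nperformance.write-behind=off\n"
node2_info = "type=replicate\nperformance.read-ahead=off\n"

rejected = volume_checksum(node1_info) != volume_checksum(node2_info)
print("Peer Rejected" if rejected else "Peer in Cluster")  # Peer Rejected

# The reported workaround amounts to discarding the local copy and letting
# it be recreated from the peer; after that the checksums match again.
node2_info = node1_info
rejected = volume_checksum(node1_info) != volume_checksum(node2_info)
print("Peer Rejected" if rejected else "Peer in Cluster")  # Peer in Cluster
```

The requested "force" option would perform that overwrite through the CLI instead of by hand-deleting state on disk.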
Comment 1 Amar Tumballi 2013-02-26 05:27:22 EST
Bug 865700 was filed for a similar reason; hopefully its fix should be enough to handle this issue?
Comment 2 krishnan parthasarathi 2013-04-15 03:14:59 EDT
Let us take a cluster of 2 nodes, namely Node1 and Node2, and say that the following events happened in the cluster:

On Node1:
t0 - gluster peer probe Node2
t1 - gluster volume create vol Node1:brick1 Node2:brick2
t2 - gluster volume start vol

On Node2:
t3 - glusterd dies

On Node1:
t4 - gluster volume set vol write-behind off
t5 - glusterd dies

On Node2:
t6 - glusterd is restarted
t7 - gluster volume set vol read-ahead off

On Node1:
t8 - glusterd is restarted

[At this point the glusterd peers are in the Rejected state.]

Let us assume that we want the volume to be in the state as perceived by Node1. On Node2, run:

gluster volume sync Node1 $vol, for each $vol in the cluster.

PS: The following patches fix the issue:

master:
1) http://review.gluster.com/4624
2) http://review.gluster.com/4815 (pending review)

release-3.4:
1) http://review.gluster.com/4643
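The timeline above can be replayed as a small sketch. The option names mirror the comment; the dict-and-checksum model is an illustrative simplification of glusterd's handshake, not its internals.

```python
import hashlib
import json

def checksum(opts: dict) -> str:
    """Checksum over a volume's option set (illustrative model)."""
    return hashlib.sha256(json.dumps(opts, sort_keys=True).encode()).hexdigest()

# t1/t2: the volume is created and started while both glusterds are up,
# so both nodes hold identical configuration.
node1 = {"write-behind": "on", "read-ahead": "on"}
node2 = dict(node1)

# t3: glusterd on Node2 dies, so Node2 misses the next change.
node1["write-behind"] = "off"   # t4, applied on Node1 only

# t5: glusterd on Node1 dies; t6: Node2 restarts and diverges further.
node2["read-ahead"] = "off"     # t7, applied on Node2 only

# t8: Node1 restarts; the handshake checksums disagree -> peers Rejected.
assert checksum(node1) != checksum(node2)

# Running "gluster volume sync Node1 <vol>" on Node2 adopts Node1's view,
# discarding Node2's divergent change (read-ahead off is lost).
node2 = dict(node1)
assert checksum(node1) == checksum(node2)
```

Note that the sync is one-directional: whichever change only the non-source node had (here, read-ahead off) is overwritten, which is why choosing the source peer matters.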