Description of problem:

The root cause is the way a new node is added to the cluster. Suppose we have N1 (127.1.1.1) and N2 (127.1.1.2) as two nodes in the cluster, each having one brick: N1:B1 (127.1.1.1:49146) and N2:B2 (127.1.1.2:49147). Now let's peer probe N3 (127.1.1.3) from N1:

1) A friend request is sent from N1 to N3. N3 adds N1 to its peerinfo list, i.e. N1 and its UUID, say [UUID1].

2) N3 gets the brick infos from N1.

3) N3 tries to start the bricks:

   1) N3 tries to start brick B1 and finds it is not a local brick, using the check MY_UUID == brickinfo->uuid, which is false in this case: the UUID of brickinfo->hostname (N1) is [UUID1] (as given by the peerinfo list), while MY_UUID is [UUID3]. Hence N3 doesn't start it.

   2) N3 tries to start brick B2, and here lies the problem. N3 uses glusterd_resolve_brick() to resolve the UUID of B2->hostname (N2). glusterd_resolve_brick() cannot find N2 in the peerinfo list, so it checks whether N2 is a local loopback address. Since N2's address (127.1.1.2) starts with "127", it decides that it is a local loopback address, and glusterd_resolve_brick() fills brickinfo->uuid with [UUID3]. Now brickinfo->uuid == MY_UUID is true, so N3 starts a brick process for B2 with -s 127.1.1.2 and *-posix.glusterd-uuid=[UUID3]. This process dies immediately, but for a short time it holds on to its --brick-port, for example 49155.

Actual results:
Starting a brick process fails some of the time in our regression test framework.

Expected results:

Additional info:
You can probably use the "transport.socket.bind-address" option in the glusterd.vol file for this? See http://review.gluster.org/8910 for some more details.
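For reference, a sketch of how that option might look in glusterd.vol, assuming the usual management-volume layout; the bind address shown (127.1.1.3, i.e. N3's address from the report) and the working-directory path are illustrative:

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket
    option transport.socket.bind-address 127.1.1.3
end-volume
```

With each glusterd bound to its own address, a node should no longer treat another node's 127.x address as its own.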
I will give it a try. Thanks for the information.