Red Hat Bugzilla – Bug 1284382
glusterd : Restarting glusterd simultaneously on all nodes might cause peer info file corruption.
Last modified: 2016-07-20 05:27:53 EDT
Does the problem persist with the latest bits? If not, can we close this bug?
(In reply to Atin Mukherjee from comment #3)
> Does the problem persist with the latest bits? If not can we close this bug?
This issue was found in an AWS environment.
I verified locally with the 3.1.3 bits on a 9-node cluster; the reported issue is not reproducible (I repeated the steps mentioned in the Description section).
One more thing: the Description contains the statement "before stopping the glusterd on the node, the node had received the probe request". I checked this condition as well, and it worked as per the current release expectation.
Probing a node that is already part of the cluster, or a node that is not part of the cluster but has a standalone volume, is not allowed; a proper error message is returned when this is attempted.
I tried restarting glusterd multiple times along with the probe operation; the restart succeeded every time, and the probe failed with a proper error message.
We can close this bug if we rule out the AWS environment; otherwise, we have to recheck the same scenario in AWS using the latest bits.
I am closing this bug. If we hit this in the AWS environment again, feel free to reopen :)