Description of problem:
When an instance dies in GCP, it is recreated with the same name but a different boot disk and a different data drive. The other nodes then see that node as "Peer Rejected (Connected)".

Version-Release number of selected component (if applicable):

Steps to Reproduce:
1. Create a 3-node cluster
2. Recreate one of the instances (new boot and data disk) with the same name and IP
3. Run gluster peer status

Actual results:
On the old nodes: Peer Rejected (Connected)
On the new node: no peers

Expected results:
Every peer should be connected.

Additional info:
The UUID of the new node has changed. I tried editing glusterd.info with the new node UUID. I also tried restarting all of the nodes and deleting the data on the broken node.
This is not a bug but expected behaviour. To bring your cluster back to a normal state, edit the UUID in /var/lib/glusterd/glusterd.info on the re-created node back to the old UUID, and copy the contents of /var/lib/glusterd/ from a good node to the re-created node. You also need to delete the entry for the node's own UUID from /var/lib/glusterd/peers/ and add an entry for the peer from which the data was copied. HTH, Sanju
Once the data is copied, you need to restart glusterd on the re-created node.
Thank you so much Sanju, this has recovered the state!