Thanks, Jianwei Hou, for testing this functionality. One thing I would like to mention: if we are making use of Heketi, the peer probe is also supposed to be executed by Heketi, but it looks like it was a manual attempt in this specific case (Step 2 in "Steps to reproduce"). The reason I am a bit concerned about who performed the peer probe is a problematic entry in the `gluster peer status` output. In output [1] below, I see '54.147.233.155' under 'Other names', while in [2] the proper hostname has been recorded. I suspect something went wrong with the peer probe, which caused the issue.

[1] From node1:

```
[root@ip-172-18-0-191 ~]# gluster peer status
Number of Peers: 2

Hostname: 54.147.233.155
Uuid: e52c284e-13b1-49f5-b20a-1b27eed427af
State: Peer in Cluster (Connected)
Other names:
54.147.233.155
```

[2] From node3:

```
[root@ip-172-18-0-189 ~]# gluster peer status
Number of Peers: 2

Hostname: 54.147.233.155
Uuid: e52c284e-13b1-49f5-b20a-1b27eed427af
State: Peer in Cluster (Connected)
Other names:
ec2-54-147-233-155.compute-1.amazonaws.com
```
This time I didn't probe the peers manually, but left it all to Heketi: I only loaded the topology, and the peers were probed automatically. The issue still existed, though. I googled it and found a clue, so I tore down the topology and updated node.hostnames.storage in topology.json to my instances' private IPs (previously I was using public IPs). After loading the topology again, everything works! Closing as not a bug. Thank you!
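For reference, a minimal sketch of the relevant topology.json fragment with the fix applied: `hostnames.storage` set to the instance's private IP rather than the public one. The manage hostname, zone, and device path here are illustrative, not taken from the actual cluster; the private IP matches node1 from the output above.

```json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["ip-172-18-0-191.ec2.internal"],
              "storage": ["172.18.0.191"]
            },
            "zone": 1
          },
          "devices": ["/dev/xvdb"]
        }
      ]
    }
  ]
}
```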