Created attachment 704127 [details]
engine logs

Description of problem:
---------------------------------------
Consider a cluster created via the Console, with a host (say host1) added to it such that host1 already has a peer.

Output of 'gluster peer status' on host1:

Number of Peers: 1

Hostname: 10.70.35.68
Uuid: 6b2235c3-543f-4e08-90c4-634069751124
State: Peer in Cluster (Disconnected)

Now add another host (say host2) to this cluster, such that host2 also has a peer.

Output of 'gluster peer status' on host2:

Number of Peers: 1

Hostname: 10.70.35.90
Uuid: 0131251d-7a9e-490f-a628-ce01a8216c1e
State: Peer in Cluster (Disconnected)

After going through installation and a reboot, host2 comes up. Then host1 disappears from the UI, with the following message in the Events log:

"Detected server host1 removed from Cluster TestCluster, and removed it from engine DB."

Version-Release number of selected component (if applicable):
Red Hat Storage Console Version: 2.1.0-0.qa6.el6rhs

How reproducible:
Intermittent

Steps to Reproduce:
1. Add host1 to the cluster such that it has a peer in the Disconnected state.
2. Add host2 such that it also has a peer in the Disconnected state.

Actual results:
After host2 comes up, host1 disappears from the UI.

Expected results:
host2 coming up should not cause host1 to disappear from the UI.

Additional info:
Created attachment 704128 [details] vdsm logs from server 1
Created attachment 704129 [details] vdsm logs from server 2
Created attachment 704132 [details] gluster logs from server 1
Created attachment 704133 [details] gluster logs from server 2
Sent an upstream patch that validates that the new server being added is not already part of another cluster. http://gerrit.ovirt.org/13980
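The idea behind the fix is to inspect the candidate server's gluster peer list before adding it, and reject the server if any of its peers are unknown to the engine (meaning the server already belongs to another cluster). A minimal sketch of that check in Python, not the engine's actual Java code; `parse_peer_status` and `validate_new_server` are hypothetical helpers, and the output format assumed is the one shown in the bug description above:

```python
def parse_peer_status(output):
    """Parse the text output of 'gluster peer status' into a list of
    dicts with 'hostname', 'uuid', and 'state' keys."""
    peers = []
    current = {}
    for line in output.splitlines():
        line = line.strip()
        if line.startswith("Hostname:"):
            current = {"hostname": line.split(":", 1)[1].strip()}
        elif line.startswith("Uuid:"):
            current["uuid"] = line.split(":", 1)[1].strip()
        elif line.startswith("State:"):
            current["state"] = line.split(":", 1)[1].strip()
            peers.append(current)
    return peers


def validate_new_server(peer_status_output, known_cluster_uuids):
    """Raise an error if the candidate server reports a peer whose UUID
    the engine does not know about, i.e. the server is already part of
    another cluster (mirrors the error message shown by the Console)."""
    for peer in parse_peer_status(peer_status_output):
        if peer["uuid"] not in known_cluster_uuids:
            raise ValueError(
                "Server %s is already part of another cluster."
                % peer["hostname"])
```

With the peer status shown for host1 above and an empty set of known UUIDs, `validate_new_server` would reject the add, which is the behavior the verification comment below describes.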
Verified as fixed in Red Hat Storage Console Version: 2.1.0-0.qa10.el6rhs. The Console now displays the following error message when trying to add a host that is part of another cluster - "Error while executing action: Server <IP> is already part of another cluster."