Description of problem:
-----------------------------
Adding an RHS server to a cluster managed by RHSC succeeds even though the glusterfs-generated UUID of the server is the same as that of another RHS server already present in the cluster. The action should have failed, because peer probe fails when the two servers have the same UUID.

Version-Release number of selected component (if applicable):
Red Hat Storage Console Version: 2.1.0-0.bb8.el6rhs

How reproducible:
Always

Steps to Reproduce:
1. Add a server to a cluster managed by RHSC.
2. Add another server to the same cluster such that its UUID is the same as that of the first server.

Actual results:
Add server succeeds although peer probe from server 1 to server 2 fails.

Expected results:
Add server should have failed with a proper error message.

Additional info:
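For reference, the duplicate-UUID condition can be confirmed from the command line (a minimal sketch; it assumes a glusterfs version that supports the "system:: uuid" subcommand, and <server2> stands in for the second host's address):

On each server, print the local glusterfs UUID:
# gluster system:: uuid get

If both servers report the same UUID, a manual probe from server 1 is expected to fail rather than add the peer, which can be seen with:
# gluster peer probe <server2>
# gluster peer status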
Created attachment 782693 [details] engine logs
Created attachment 782694 [details] vdsm logs 1
Created attachment 782696 [details] vdsm logs 2
Retargeting for 2.1.z U2 (Corbett) release.
If the host status is shown as "Non Operational" (or anything other than "Up") with a proper log message in the status bar after adding the server, I don't think this would be an issue, because adding a server doesn't mean the layered operation (gluster peer probe) must succeed. If the server is a clone and shares a peer UUID with an existing host, it is sufficient to convey a proper message in the status bar, so that the admin can regenerate the UUID on that peer and the host becomes Up automatically. However, if the engine shows the host status as "Up", it's a bug.
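For completeness, one way the admin could regenerate the UUID on the cloned peer (a sketch, assuming a glusterfs version that supports the "system:: uuid reset" subcommand; run it on the clone, not on the original server):

# gluster system:: uuid reset

Once the clone has a unique UUID, re-activating the host from the Console should allow the peer probe to succeed, per the behavior described above.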
Could you please provide a few more details: Is the UUID you are mentioning here the server UUID or the gluster UUID of the node? How did you end up with the same UUID on both nodes?
Able to reproduce this issue with Red Hat Storage Console Version: 2.1.2-0.23.master.el6_5. The UUID I am referring to is the UUID generated by glusterfs. See below for the steps used to reproduce the issue (a quick peer-status check is sketched after the list):

1. Install the latest RHS build on a VM. Generate the glusterfs UUID using the following command:
   # gluster system:: uuid get
2. After generation of the UUID, clone this VM. The second VM will get the same UUID as the first one.
3. Add the first VM to a cluster via the Console and wait for it to come up.
4. Add the second VM to the same cluster.

The result observed is that the second VM also comes up, but checking the peer status on both machines shows that they have 0 peers. So ideally, the second server should not have come up.
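As a sanity check after step 4 (a minimal sketch; the two VMs are just the first and cloned hosts described above), the peer list can be inspected on both machines:

# gluster peer status

On a correctly probed pair, each node would list the other as a peer; in this reproduction, both nodes report zero peers even though the Console shows both hosts as up.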
Patch sent to upstream: http://gerrit.ovirt.org/#/c/21391
Performed the following steps:
1. Added a server to a cluster managed by RHSC and waited for it to come up.
2. Added another server to the same cluster; this server had the same gluster UUID as the first one (achieved by cloning the VM).

The second server was moved to Non Operational after installation, and the following message was seen in the events log:
"Gluster UUID of host server1-clone on Cluster test already exists."

Marking as verified.
Please review the edited DocText and sign off.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.