Bug 992899 - [RHSC] Console allows addition of a host to a cluster, that has the same UUID as that of a host that is already present in the cluster [NEEDINFO]
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assigned To: Timothy Asir
QA Contact: Shruti Sampat
Keywords: ZStream
Depends On:
Blocks:
Reported: 2013-08-05 02:59 EDT by Shruti Sampat
Modified: 2015-05-15 14:17 EDT (History)
11 users

See Also:
Fixed In Version: cb11
Doc Type: Bug Fix
Doc Text:
Previously, a host could be added to a cluster even if its glusterfs-generated UUID was the same as that of an existing host; peer status on both hosts with the same UUID then showed 0 peers. With this update, an error message is displayed when adding a host with a duplicate UUID.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-02-25 02:34:23 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
sharne: needinfo? (tjeyasin)


Attachments
engine logs (187.73 KB, text/x-log)
2013-08-05 03:11 EDT, Shruti Sampat
vdsm logs 1 (41.52 KB, text/x-log)
2013-08-05 03:11 EDT, Shruti Sampat
vdsm logs 2 (65.30 KB, text/x-log)
2013-08-05 03:12 EDT, Shruti Sampat

Description Shruti Sampat 2013-08-05 02:59:23 EDT
Description of problem:
-----------------------------
Adding an RHS server to a cluster managed by RHSC succeeds even when the glusterfs-generated UUID of the server is the same as that of another RHS server already present in the cluster.

The action should have failed, because peer probe fails when the two servers have the same UUID.

Version-Release number of selected component (if applicable):
Red Hat Storage Console Version: 2.1.0-0.bb8.el6rhs 

How reproducible:
Always

Steps to Reproduce:
1. Add a server to a cluster managed by RHSC.
2. Add another server to the same cluster such that its UUID is the same as that of the first server.

Actual results:
Add server succeeds although peer probe from server 1 to server 2 fails.

Expected results:
Add server should have failed with a proper error message.

Additional info:
Comment 1 Shruti Sampat 2013-08-05 03:11:23 EDT
Created attachment 782693 [details]
engine logs
Comment 2 Shruti Sampat 2013-08-05 03:11:55 EDT
Created attachment 782694 [details]
vdsm logs 1
Comment 3 Shruti Sampat 2013-08-05 03:12:38 EDT
Created attachment 782696 [details]
vdsm logs 2
Comment 4 Scott Haines 2013-09-23 19:18:51 EDT
Retargeting for 2.1.z U2 (Corbett) release.
Comment 5 Timothy Asir 2013-10-31 02:17:47 EDT
If the host status is shown as "Non Operational" or anything other than "Up", with a proper log message (in the status bar) after adding the server, I don't think this would be an issue, because adding a server does not mean the layered operation (gluster peer probe) must succeed. If the server is a clone and shares a peer UUID, it is sufficient to convey a proper message in the status bar, so that the admin can regenerate the UUID for the peer and the host comes up automatically.
However, if the engine shows the host status as "Up", it is a bug.
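The remediation suggested above, regenerating the peer UUID on the cloned host, can be sketched as follows. On a real RHS node this would presumably be done with `gluster system:: uuid reset`; the snippet below only simulates its effect on a copy of glusterd's state file (normally /var/lib/glusterd/glusterd.info), so the paths and UUID values are illustrative.

```shell
# Illustrative only: simulate regenerating the duplicated UUID on the clone.
set -eu
workdir=$(mktemp -d)

# State file of the cloned host, carrying the duplicated UUID
# (really /var/lib/glusterd/glusterd.info).
printf 'UUID=b9c0254e-33a5-4e22-8f56-8e6b0ee1a0f2\n' > "$workdir/glusterd.info"
old_uuid=$(sed -n 's/^UUID=//p' "$workdir/glusterd.info")

# Simulate the reset: replace the UUID with a freshly generated one.
new_uuid=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen)
printf 'UUID=%s\n' "$new_uuid" > "$workdir/glusterd.info"

# The clone now has its own identity, so a peer probe from the first
# host can succeed and the host can come up.
if [ "$old_uuid" != "$new_uuid" ]; then
    echo "uuid regenerated"
fi
rm -rf "$workdir"
```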
Comment 6 Timothy Asir 2013-11-15 03:39:07 EST
Could you please provide a few more details:
Is the UUID you are mentioning here the server UUID or the gluster UUID of the node?
How did you get the same UUID on both nodes?
Comment 7 Shruti Sampat 2013-11-15 07:25:08 EST
I am able to reproduce this issue with Red Hat Storage Console Version 2.1.2-0.23.master.el6_5.

The UUID I am referring to is the UUID generated by glusterfs. See below for the steps used to reproduce the issue -

1. Install the latest RHS build on a VM. Generate the glusterfs UUID using the following command -

# gluster system:: uuid get

2. After the UUID is generated, clone this VM. The second VM will have the same UUID as the first one.

3. Add the first VM to a cluster via the Console, wait for it to come up.

4. Add the second VM to the same cluster.

The observed result is that the second VM also comes up, but checking peer status on both machines shows that they have 0 peers. Ideally, the second server should not have come up.
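The duplicate-UUID condition these steps produce can be sketched as a shell check. glusterd keeps its UUID in /var/lib/glusterd/glusterd.info; the snippet below simulates two cloned hosts with copies of that file, so the paths and UUID values are illustrative and this is not the Console's actual implementation of the check.

```shell
# Illustrative only: simulate two cloned hosts by copying glusterd's
# state file (really /var/lib/glusterd/glusterd.info on each host).
set -eu
workdir=$(mktemp -d)

# Both "hosts" carry the same glusterfs-generated UUID, as happens
# after cloning the VM.
printf 'UUID=2a1b3c4d-0000-1111-2222-333344445555\n' > "$workdir/host1.info"
printf 'UUID=2a1b3c4d-0000-1111-2222-333344445555\n' > "$workdir/host2.info"

uuid1=$(sed -n 's/^UUID=//p' "$workdir/host1.info")
uuid2=$(sed -n 's/^UUID=//p' "$workdir/host2.info")

if [ "$uuid1" = "$uuid2" ]; then
    # This is the condition the fix should report instead of letting
    # the add-server operation proceed.
    echo "duplicate gluster UUID: $uuid1"
else
    echo "UUIDs differ, add server can proceed"
fi
rm -rf "$workdir"
```

On real hosts the same comparison can be made by running `gluster system:: uuid get` on each node before adding the second one.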
Comment 9 Timothy Asir 2013-11-19 01:02:03 EST
Patch sent to upstream: http://gerrit.ovirt.org/#/c/21391
Comment 11 Shruti Sampat 2013-12-18 06:56:20 EST
Performed the following steps - 

1. Added a server to a cluster managed by RHSC, waited for it to come up.
2. Added another server to the same cluster; this server had the same gluster UUID as the first one (achieved by cloning the VM).

The server was moved to Non-operational mode after installation. The following message was seen in the events log - 

"Gluster UUID of host server1-clone on Cluster test already exists."

Marking as verified.
Comment 12 Shalaka 2014-01-16 09:57:23 EST
Please review the edited Doc Text and sign off.
Comment 14 errata-xmlrpc 2014-02-25 02:34:23 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html
