Bug 992899
| Summary: | [RHSC] Console allows addition of a host to a cluster, that has the same UUID as that of a host that is already present in the cluster | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Shruti Sampat <ssampat> |
| Component: | rhsc | Assignee: | Timothy Asir <tjeyasin> |
| Status: | CLOSED ERRATA | QA Contact: | Shruti Sampat <ssampat> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 2.1 | CC: | dpati, dtsang, knarra, mmahoney, pprakash, rhs-bugs, sabose, sdharane, sharne, ssampat, tjeyasin |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 2.1.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | cb11 | Doc Type: | Bug Fix |
| Doc Text: | Previously, adding a host was allowed even if the glusterfs-generated UUID of the host was the same as that of an existing host, and the peer count reported on both hosts with the same UUID was 0. With this update, an error message is displayed when a host with a duplicate UUID is added. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-02-25 07:34:23 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description (Shruti Sampat, 2013-08-05 06:59:23 UTC)
Created attachment 782693: engine logs
Created attachment 782694: vdsm logs 1
Created attachment 782696: vdsm logs 2
Retargeting for the 2.1.z U2 (Corbett) release.

If the host status is shown as "Non Operational" (or anything other than "UP") with a proper log message in the status bar after adding the server, I don't think this would be an issue, because adding a server does not imply that the layered operation (gluster peer probe) must succeed. If the server is a clone and carries a duplicate peer UUID, it is sufficient to convey a proper message in the status bar, so that the admin can regenerate the UUID for the peer and the host becomes "UP" automatically. However, if the engine shows the host status as "UP", it is a bug. Could you please provide a few more details: Is the UUID you are mentioning the server UUID or the gluster UUID of the node? How did you end up with the same UUID on both nodes?

Able to reproduce this issue with Red Hat Storage Console version 2.1.2-0.23.master.el6_5. The UUID I am referring to is the UUID generated by glusterfs. The steps used to reproduce the issue are below (a command-level sketch follows at the end of this log):

1. Install the latest RHS build on a VM and generate the glusterfs UUID using the following command:
   # gluster system:: uuid get
2. After the UUID has been generated, clone this VM. The second VM will get the same UUID as the first one.
3. Add the first VM to a cluster via the Console and wait for it to come up.
4. Add the second VM to the same cluster.

The observed result is that the second VM also comes up, yet checking the peer status on both machines shows that each has 0 peers. So ideally, the second server should not have come up.

Patch sent to upstream: http://gerrit.ovirt.org/#/c/21391

Performed the following steps:

1. Added a server to a cluster managed by RHSC and waited for it to come up.
2. Added another server to the same cluster; this server had the same gluster UUID as the first one (achieved by cloning the VM).

The second server was moved to Non-Operational after installation, and the following message was seen in the events log: "Gluster UUID of host server1-clone on Cluster test already exists." Marking as verified.

Please review the edited DocText and sign off.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.
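For reference, a minimal shell sketch of the UUID check, the symptom, and the workaround discussed in the comments above, run on the two cloned RHS nodes. The commands are the standard gluster CLI calls; the quoted output strings are illustrative, not taken from this bug's logs.

```
# On each node, print the glusterfs-generated node UUID;
# on a cloned VM this matches the value on the original.
gluster system:: uuid get

# Symptom from this bug: after both hosts were added to the cluster
# through the Console, each node still reported no peers.
gluster peer status          # e.g. "Number of Peers: 0" on both nodes

# Workaround suggested by the assignee: regenerate the UUID on the
# cloned node, then re-activate the host from the Console.
gluster system:: uuid reset
```

Once the clone's UUID has been reset, the peer probe issued by the Console no longer sees two nodes with the same UUID, and the host can come up normally.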