Bug 916982 - [RHSC] After adding a host to a cluster which already has a node, via the Console, the host gets installed and then, the one that was already present gets removed from the cluster.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: 2.1
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Shubhendu Tripathi
QA Contact: Shruti Sampat
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-03-01 10:55 UTC by Shruti Sampat
Modified: 2014-01-31 01:56 UTC (History)
8 users

Fixed In Version: qa10
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-01-31 01:56:50 UTC
Embargoed:


Attachments (Terms of Use)
engine logs (7.00 MB, text/x-log)
2013-03-01 10:55 UTC, Shruti Sampat
no flags Details
vdsm logs from server 1 (10.64 MB, text/x-log)
2013-03-01 10:57 UTC, Shruti Sampat
no flags Details
vdsm logs from server 2 (12.27 MB, text/x-log)
2013-03-01 10:59 UTC, Shruti Sampat
no flags Details
gluster logs from server 1 (18.79 MB, text/x-log)
2013-03-01 11:01 UTC, Shruti Sampat
no flags Details
gluster logs from server 2 (11.82 MB, text/x-log)
2013-03-01 11:07 UTC, Shruti Sampat
no flags Details


Links
System: oVirt gerrit | ID: 13980 | Private: 0 | Priority: None | Status: None | Summary: None | Last Updated: Never

Description Shruti Sampat 2013-03-01 10:55:40 UTC
Created attachment 704127 [details]
engine logs

Description of problem:
---------------------------------------
Consider a cluster created via the Console with a host (say, host1) added to it, where host1 already has a peer.
Output of 'gluster peer status' on host1 - 

Number of Peers: 1

Hostname: 10.70.35.68
Uuid: 6b2235c3-543f-4e08-90c4-634069751124
State: Peer in Cluster (Disconnected)

Now add another host (say host2), to this cluster, such that host2 also has a peer. Output of 'gluster peer status' on host2 - 

Number of Peers: 1

Hostname: 10.70.35.90
Uuid: 0131251d-7a9e-490f-a628-ce01a8216c1e
State: Peer in Cluster (Disconnected)

After going through installation and reboot, host2 comes up. Then host1 disappears from the UI, with the following message in the Events log - 

"Detected server host1 removed from Cluster TestCluster, and removed it from engine DB."

Version-Release number of selected component (if applicable):
Red Hat Storage Console Version: 2.1.0-0.qa6.el6rhs 

How reproducible:
Intermittent

Steps to Reproduce:
1. Add host1 to the cluster such that it has a peer in the Disconnected state.
2. Add host2 such that it also has a peer in the Disconnected state.
  
Actual results:
After host2 comes up, host1 disappears from the UI.

Expected results:
host2 coming up should not cause host1 to disappear from the UI. 

Additional info:

Comment 1 Shruti Sampat 2013-03-01 10:57:25 UTC
Created attachment 704128 [details]
vdsm logs from server 1

Comment 2 Shruti Sampat 2013-03-01 10:59:00 UTC
Created attachment 704129 [details]
vdsm logs from server 2

Comment 3 Shruti Sampat 2013-03-01 11:01:31 UTC
Created attachment 704132 [details]
gluster logs from server 1

Comment 4 Shruti Sampat 2013-03-01 11:07:16 UTC
Created attachment 704133 [details]
gluster logs from server 2

Comment 6 Shireesh 2013-04-17 11:47:44 UTC
Sent an upstream patch that validates that the new server being added is not part of another cluster. http://gerrit.ovirt.org/13980
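The validation described above can be sketched as follows. This is not the actual gerrit patch (the engine is Java); it is a hedged illustration of the rule, with hypothetical names: a server whose peer list contains UUIDs unknown to the target cluster is already part of another cluster and must be rejected.

```python
class ServerAlreadyInClusterError(Exception):
    """Raised when a candidate server is already peered into another cluster."""

def validate_new_server(new_server_peer_uuids, cluster_server_uuids):
    """
    Reject adding a server that already belongs to another cluster.

    new_server_peer_uuids: peer UUIDs reported by the candidate server.
    cluster_server_uuids: UUIDs of servers already in the target cluster.
    """
    unknown = set(new_server_peer_uuids) - set(cluster_server_uuids)
    if unknown:
        raise ServerAlreadyInClusterError(
            "Server is already part of another cluster (unknown peers: %s)"
            % ", ".join(sorted(unknown))
        )

# host2 from this report: its peer 0131251d-... is not a member of
# TestCluster, so adding host2 should be rejected.
try:
    validate_new_server(
        ["0131251d-7a9e-490f-a628-ce01a8216c1e"],
        ["6b2235c3-543f-4e08-90c4-634069751124"],
    )
except ServerAlreadyInClusterError as e:
    print(e)
```

With this check in place, host1 is never silently removed; the add operation fails up front, matching the error message verified in comment 7.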

Comment 7 Shruti Sampat 2013-04-25 10:37:15 UTC
Verified as fixed in Red Hat Storage Console Version: 2.1.0-0.qa10.el6rhs.
The Console now displays the following error message when trying to add a host that is part of another cluster - 

"Error while executing action: Server <IP> is already part of another cluster."

