Bug 829342 - Gluster - Backend: Some nodes aren't added to "peer probe"
Summary: Gluster - Backend: Some nodes aren't added to "peer probe"
Keywords:
Status: CLOSED DUPLICATE of bug 811493
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: unclassified
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Vijay Bellur
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-06-06 14:10 UTC by Daniel Paikov
Modified: 2013-07-04 07:57 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-06-07 19:56:57 UTC
Embargoed:


Attachments
engine.log (38.08 KB, application/x-compressed-tar)
2012-06-06 14:10 UTC, Daniel Paikov
vdsm.log (145.48 KB, application/x-compressed-tar)
2012-06-06 14:20 UTC, Daniel Paikov

Description Daniel Paikov 2012-06-06 14:10:06 UTC
Created attachment 589912 [details]
engine.log

* 3 nodes were added to the gluster cluster using GUI, all nodes are Up.
* "gluster peer status" only shows 2 nodes (local + one peer).
* When trying to create a volume, the creation fails because one of the hosts isn't recognized as a peer.

The problematic host is named "node3" with IP 10.35.97.47.

Comment 1 Daniel Paikov 2012-06-06 14:20:20 UTC
Created attachment 589914 [details]
vdsm.log

Comment 3 Daniel Paikov 2012-06-06 14:28:24 UTC
My mistake, there is no "problematic host".

The host 10.35.64.205 shows itself and 10.35.97.162
The host 10.35.97.47 shows itself and 10.35.97.162
The host 10.35.97.162 shows itself and 10.35.97.47

Comment 4 Shireesh 2012-06-07 08:50:29 UTC
Assigning to Vijay as this looks related to GlusterFS.

Comment 5 Daniel Paikov 2012-06-07 09:33:10 UTC
10.35.97.162:
[root@localhost ~]# gluster peer status
Number of Peers: 1

Hostname: 10.35.97.47
Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
State: Peer in Cluster (Connected)

10.35.97.47:
[root@dbotzer-reporting ~]# gluster peer status
Number of Peers: 1

Hostname: 10.35.97.162
Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
State: Peer in Cluster (Connected)

10.35.64.205:
Number of Peers: 2

Hostname: 10.35.97.162
Uuid: 00000000-0000-0000-0000-000000000000
State: Connected to Peer (Connected)

Hostname: 10.35.97.47
Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
State: Accepted peer request (Connected)

Comment 6 Niels de Vos 2012-06-07 11:02:28 UTC
(In reply to comment #5)
> 10.35.97.162:
> [root@localhost ~]# gluster peer status
> Number of Peers: 1
> 
> Hostname: 10.35.97.47
> Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b

> 10.35.97.47:
> [root@dbotzer-reporting ~]# gluster peer status
> Number of Peers: 1
> 
> Hostname: 10.35.97.162
> Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
> State: Peer in Cluster (Connected)

Both servers seem to have the same UUID. How did you install these servers?
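The suspicion can be confirmed directly: each node's identity lives in /var/lib/glusterd/glusterd.info (per comment 8 below, this is the default state path), and collecting those values and filtering for repeats flags the clones. A sketch; the duplicated UUID below is simulated from the value reported in comment 5:

```shell
# On each node, read the daemon's identity, e.g.:
#   ssh root@10.35.97.162 cat /var/lib/glusterd/glusterd.info
# Piping the collected "UUID=..." lines through `sort | uniq -d` prints
# any UUID claimed by more than one node. Simulated here with two copies
# of the value seen in comment 5:
printf 'UUID=%s\n' \
    2ae16b2e-77c2-4220-9fd5-1db417f4475b \
    2ae16b2e-77c2-4220-9fd5-1db417f4475b | sort | uniq -d
```

An empty result would mean every node has a distinct identity; any line printed identifies a UUID shared by two or more nodes.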

Comment 7 Daniel Paikov 2012-06-07 11:15:25 UTC
(In reply to comment #6)
> (In reply to comment #5)
> > 10.35.97.162:
> > [root@localhost ~]# gluster peer status
> > Number of Peers: 1
> > 
> > Hostname: 10.35.97.47
> > Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
> 
> > 10.35.97.47:
> > [root@dbotzer-reporting ~]# gluster peer status
> > Number of Peers: 1
> > 
> > Hostname: 10.35.97.162
> > Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
> > State: Peer in Cluster (Connected)
> 
> Both servers seem to have the same UUID. How did you install these servers?

They are VMs created from a template, which probably explains the duplicate UUIDs. How do I reset a server's UUID?

Comment 8 Niels de Vos 2012-06-07 11:18:51 UTC
The UUID for a server is located in /var/lib/glusterd/glusterd.info.

I tend to stop the glusterd service and remove the whole /var/lib/glusterd directory before shutting the VMs down and creating the template. When the glusterd service starts for the first time, it repopulates the directory with a fresh UUID.
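The reset described above can be sketched as follows; the service commands assume a SysV-style glusterd init script and the default state directory, and are not verified against this reporter's setup:

```shell
# Sketch of the reset from comment 8 -- run on each cloned node before
# it joins the cluster (assumes /var/lib/glusterd and `service glusterd`):
#
#   service glusterd stop
#   rm -rf /var/lib/glusterd
#   service glusterd start   # repopulates the directory on first start
#
# The regenerated glusterd.info holds a single "UUID=<uuid>" line; the
# snippet below reproduces that format in a temp dir so it can be
# sanity-checked without touching a live node.
state=$(mktemp -d)
printf 'UUID=%s\n' "$(uuidgen)" > "$state/glusterd.info"
grep '^UUID=' "$state/glusterd.info"   # one line, a fresh random UUID
rm -rf "$state"
```

After restarting glusterd on the cloned nodes, re-running `gluster peer probe` from one node should establish the full cluster, since each node now presents a distinct UUID.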

Comment 9 Anand Avati 2012-06-07 19:56:57 UTC

*** This bug has been marked as a duplicate of bug 811493 ***

