Bug 829342

Summary: Gluster - Backend: Some nodes aren't added to "peer probe"
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Daniel Paikov <dpaikov>
Component: unclassified
Assignee: Vijay Bellur <vbellur>
Status: CLOSED DUPLICATE
QA Contact: Sudhir D <sdharane>
Severity: high
Docs Contact:
Priority: high
Version: 2.0
CC: aavati, gluster-bugs, hateya, ndevos
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-06-07 19:56:57 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: engine.log (flags: none), vdsm.log (flags: none)

Description Daniel Paikov 2012-06-06 14:10:06 UTC
Created attachment 589912 [details]
engine.log

* 3 nodes were added to the gluster cluster using GUI, all nodes are Up.
* "gluster peer status" only shows 2 nodes (local + one peer).
* When trying to create a volume, the creation fails because one of the hosts isn't recognized as a peer.

The problematic host is named "node3" with IP 10.35.97.47.

Comment 1 Daniel Paikov 2012-06-06 14:20:20 UTC
Created attachment 589914 [details]
vdsm.log

Comment 3 Daniel Paikov 2012-06-06 14:28:24 UTC
My mistake, there is no "problematic host".

The host 10.35.64.205 shows itself and 10.35.97.162
The host 10.35.97.47 shows itself and 10.35.97.162
The host 10.35.97.162 shows itself and 10.35.97.47
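The asymmetry above can be checked mechanically. A hypothetical shell sketch (host list and per-host views hard-coded from this comment; in a real cluster the views would come from `gluster peer status` on each host) that reports which peers each host fails to see:

```shell
# Peer views from comment 3: reporting host, then the peer(s) it sees.
# In a healthy 3-node cluster each host should see the other two.
hosts="10.35.64.205 10.35.97.47 10.35.97.162"
views='
10.35.64.205 10.35.97.162
10.35.97.47 10.35.97.162
10.35.97.162 10.35.97.47
'
missing=$(echo "$views" | while read -r self peers; do
    [ -n "$self" ] || continue
    for h in $hosts; do
        [ "$h" = "$self" ] && continue          # a host need not list itself
        case " $peers " in
            *" $h "*) ;;                        # peer is visible, OK
            *) echo "$self cannot see $h" ;;    # hole in the mesh
        esac
    done
done)
echo "$missing"
```

With the views from this comment, every host is blind to exactly one other host, so no pair of hosts agrees on the full three-node membership.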

Comment 4 Shireesh 2012-06-07 08:50:29 UTC
Assigning to Vijay as this looks related to GlusterFS.

Comment 5 Daniel Paikov 2012-06-07 09:33:10 UTC
10.35.97.162:
[root@localhost ~]# gluster peer status
Number of Peers: 1

Hostname: 10.35.97.47
Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
State: Peer in Cluster (Connected)

10.35.97.47:
[root@dbotzer-reporting ~]# gluster peer status
Number of Peers: 1

Hostname: 10.35.97.162
Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
State: Peer in Cluster (Connected)

10.35.64.205:
Number of Peers: 2

Hostname: 10.35.97.162
Uuid: 00000000-0000-0000-0000-000000000000
State: Connected to Peer (Connected)

Hostname: 10.35.97.47
Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
State: Accepted peer request (Connected)

Comment 6 Niels de Vos 2012-06-07 11:02:28 UTC
(In reply to comment #5)
> 10.35.97.162:
> [root@localhost ~]# gluster peer status
> Number of Peers: 1
> 
> Hostname: 10.35.97.47
> Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b

> 10.35.97.47:
> [root@dbotzer-reporting ~]# gluster peer status
> Number of Peers: 1
> 
> Hostname: 10.35.97.162
> Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
> State: Peer in Cluster (Connected)

Both servers seem to have the same UUID. How did you install these servers?
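The duplication is easy to confirm mechanically. A small awk sketch that scans `gluster peer status` output (the relevant Hostname/Uuid lines from comment 5, inlined here via a heredoc) and flags any UUID reported under more than one hostname:

```shell
# Flag any peer UUID that appears under more than one hostname in
# 'gluster peer status' output (lines taken from comment 5).
dup=$(awk -F': ' '
    /^Hostname:/ { host = $2 }
    /^Uuid:/     { if ($2 in owner && owner[$2] != host)
                       print "UUID " $2 " shared by " owner[$2] " and " host
                   else
                       owner[$2] = host }' <<'EOF'
Hostname: 10.35.97.47
Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
Hostname: 10.35.97.162
Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
EOF
)
echo "$dup"
```

Since glusterd identifies peers by UUID rather than hostname, two servers sharing one UUID cannot both be tracked as distinct cluster members, which matches the broken peer views reported above.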

Comment 7 Daniel Paikov 2012-06-07 11:15:25 UTC
(In reply to comment #6)
> (In reply to comment #5)
> > 10.35.97.162:
> > [root@localhost ~]# gluster peer status
> > Number of Peers: 1
> > 
> > Hostname: 10.35.97.47
> > Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
> 
> > 10.35.97.47:
> > [root@dbotzer-reporting ~]# gluster peer status
> > Number of Peers: 1
> > 
> > Hostname: 10.35.97.162
> > Uuid: 2ae16b2e-77c2-4220-9fd5-1db417f4475b
> > State: Peer in Cluster (Connected)
> 
> Both servers seem to have the same UUID. How did you install these servers?

They are VMs created from a template, which probably explains the duplicate UUID. How do I reset a server's UUID?

Comment 8 Niels de Vos 2012-06-07 11:18:51 UTC
The UUID for a server is located in /var/lib/glusterd/glusterd.info.

I tend to stop the glusterd service and remove the whole /var/lib/glusterd directory before shutting the VMs down and creating the template. When the glusterd service starts for the first time, it repopulates the directory.

Comment 9 Anand Avati 2012-06-07 19:56:57 UTC

*** This bug has been marked as a duplicate of bug 811493 ***