Bug 1258434

Summary: gdeploy: peer probe issues during an add-brick operation with fresh hosts
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Anush Shetty <ashetty>
Component: core
Assignee: Nandaja Varma <nvarma>
Status: CLOSED ERRATA
QA Contact: Anush Shetty <ashetty>
Severity: unspecified
Priority: unspecified
Version: rhgs-3.1
CC: rhs-bugs, smohan, storage-qa-internal, surs, vagarwal
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.1.1
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-10-05 07:25:18 UTC
Bug Blocks: 1251815

Description Anush Shetty 2015-08-31 11:34:54 UTC
Description of problem: If the bricks being added to a volume belong to fresh hosts, the add-brick operation fails. Instead of probing the new hosts from the existing cluster, gdeploy ends up creating a separate new cluster between the two hosts passed for the add-brick operation.


Version-Release number of selected component (if applicable): 
gdeploy-1.0-6.el7rhgs.noarch

How reproducible: Always


Steps to Reproduce:
1. Create a RHGS volume with two hosts.
2. Pass two new hosts for add-brick operation. 
3. Run gdeploy.

Actual results:

Add-brick operation fails.

Expected results:

The two new hosts should be probed into the existing cluster so that the add-brick operation succeeds.
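For reference, the manual equivalent of the expected behavior is to probe the fresh hosts from a node that is already in the cluster, then expand the volume. A sketch using the standard gluster CLI, with hostnames and brick paths taken from the config in Additional info (the add_b1/add_b2 paths are the ones named in the [volume] section):

# Run from a host already in the cluster, e.g. rhshdp03:
# gluster peer probe rhshdp05.lab.eng.blr.redhat.com
# gluster peer probe rhshdp06.lab.eng.blr.redhat.com
#
# Then add the new brick pair (replica count stays 2):
# gluster volume add-brick gluster_vol1 \
#     rhshdp04.lab.eng.blr.redhat.com:/gluster/brick1/add_b1 \
#     rhshdp06.lab.eng.blr.redhat.com:/gluster/brick1/add_b2

The key point is that the probes must originate from inside the existing cluster; probing between the two fresh hosts (which is what gdeploy did here) forms a separate cluster instead.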

Additional info:

volume information:

# gluster volume info
 
Volume Name: gluster_vol1
Type: Replicate
Volume ID: 6b5d2244-848d-46f7-a3ff-762c63da16a5
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhshdp03.lab.eng.blr.redhat.com:/b/g1/g1
Brick2: rhshdp04.lab.eng.blr.redhat.com:/gluster/brick1/brick1
Options Reconfigured:
performance.readdir-ahead: on

# gdeploy config file:

[hosts]
rhshdp03.lab.eng.blr.redhat.com
rhshdp04.lab.eng.blr.redhat.com
rhshdp05.lab.eng.blr.redhat.com
rhshdp06.lab.eng.blr.redhat.com

[mountpoints]
/gluster/brick1/
                                     

[rhshdp03.lab.eng.blr.redhat.com]
#devices=/dev/vdb
mountpoints=/b/g1

[rhshdp04.lab.eng.blr.redhat.com]
mountpoints=/gluster/brick1

[rhshdp05.lab.eng.blr.redhat.com]
mountpoints=/gluster/brick1

[rhshdp06.lab.eng.blr.redhat.com]
mountpoints=/gluster/brick1

[peer]
manage=probe

[volume]
action=add-brick
volname=gluster_vol1         
bricks=rhshdp04.lab.eng.blr.redhat.com:/gluster/brick1/add_b1,rhshdp06.lab.eng.blr.redhat.com:/gluster/brick1/add_b2


On host 3 after running gdeploy (showing the wrongly formed two-node cluster):

# gluster peer status
Number of Peers: 1

Hostname: rhshdp06.lab.eng.blr.redhat.com
Uuid: 0b743012-bf50-4f26-b748-5d3a55204495
State: Peer in Cluster (Connected)

Comment 2 Nandaja Varma 2015-08-31 11:40:42 UTC
Fixed in the commit: https://github.com/gluster/gdeploy/commit/0f8846fe7353f7225d4730fbb535985b293371ec

Will be available in the next build.

Comment 3 Anush Shetty 2015-09-10 09:41:30 UTC
Verified with gdeploy-1.0-10.el6rhs.noarch, with volname passed in the format <hostname>:<volname>.
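For anyone verifying on this build, a sketch of the [volume] section in the fixed format. The choice of rhshdp03 as the <hostname> part is an assumption for illustration; any host already in the existing cluster should work:

[volume]
action=add-brick
volname=rhshdp03.lab.eng.blr.redhat.com:gluster_vol1
bricks=rhshdp04.lab.eng.blr.redhat.com:/gluster/brick1/add_b1,rhshdp06.lab.eng.blr.redhat.com:/gluster/brick1/add_b2

Prefixing the volume name with a cluster member lets gdeploy locate the existing cluster and probe the fresh hosts from it, rather than forming a new cluster.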

Comment 6 errata-xmlrpc 2015-10-05 07:25:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html