Description of problem:
If the bricks being added to a volume are on fresh hosts, the add-brick operation fails. gdeploy ends up creating a new cluster between the two hosts passed for the add-brick operation.

Version-Release number of selected component (if applicable):
gdeploy-1.0-6.el7rhgs.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create a RHGS volume with two hosts.
2. Pass two new hosts for the add-brick operation.
3. Run gdeploy.

Actual results:
The add-brick operation fails.

Expected results:
The two new hosts should be added to the existing cluster so that the add-brick operation succeeds.

Additional info:

Volume information:
# gluster volume info

Volume Name: gluster_vol1
Type: Replicate
Volume ID: 6b5d2244-848d-46f7-a3ff-762c63da16a5
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: rhshdp03.lab.eng.blr.redhat.com:/b/g1/g1
Brick2: rhshdp04.lab.eng.blr.redhat.com:/gluster/brick1/brick1
Options Reconfigured:
performance.readdir-ahead: on

gdeploy config file:
[hosts]
rhshdp03.lab.eng.blr.redhat.com
rhshdp04.lab.eng.blr.redhat.com
rhshdp05.lab.eng.blr.redhat.com
rhshdp06.lab.eng.blr.redhat.com

[mountpoints]
/gluster/brick1/

[rhshdp03.lab.eng.blr.redhat.com]
#devices=/dev/vdb
mountpoints=/b/g1

[rhshdp04.lab.eng.blr.redhat.com]
mountpoints=/gluster/brick1

[rhshdp05.lab.eng.blr.redhat.com]
mountpoints=/gluster/brick1

[rhshdp06.lab.eng.blr.redhat.com]
mountpoints=/gluster/brick1

[peer]
manage=probe

[volume]
action=add-brick
volname=gluster_vol1
bricks=rhshdp04.lab.eng.blr.redhat.com:/gluster/brick1/add_b1,rhshdp06.lab.eng.blr.redhat.com:/gluster/brick1/add_b2

On host3 after running gdeploy:
# gluster peer status
Number of Peers: 1

Hostname: rhshdp06.lab.eng.blr.redhat.com
Uuid: 0b743012-bf50-4f26-b748-5d3a55204495
State: Peer in Cluster (Connected)
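For reference, a sketch of the manual gluster CLI sequence the expected behavior corresponds to: probe the new hosts from a node already in the existing cluster, then expand the volume. Hostnames and brick paths are taken from the config above; running these from rhshdp03 is an assumption for illustration.

```
# From a host already in the existing cluster (e.g. rhshdp03):
gluster peer probe rhshdp05.lab.eng.blr.redhat.com
gluster peer probe rhshdp06.lab.eng.blr.redhat.com

# Then expand the replica-2 volume with a new brick pair
# (1 x 2 becomes 2 x 2 distribute-replicate):
gluster volume add-brick gluster_vol1 \
    rhshdp04.lab.eng.blr.redhat.com:/gluster/brick1/add_b1 \
    rhshdp06.lab.eng.blr.redhat.com:/gluster/brick1/add_b2
```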
Fixed in commit: https://github.com/gluster/gdeploy/commit/0f8846fe7353f7225d4730fbb535985b293371ec
This will be available in the next build.
Verified with gdeploy-1.0-10.el6rhs.noarch, with volname passed in the format <hostname>:<volname>.
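A sketch of what the verified [volume] section would look like with the <hostname>:<volname> format, assuming the hostname prefix is a node already in the existing cluster (rhshdp03 here is illustrative):

```ini
[volume]
action=add-brick
volname=rhshdp03.lab.eng.blr.redhat.com:gluster_vol1
bricks=rhshdp04.lab.eng.blr.redhat.com:/gluster/brick1/add_b1,rhshdp06.lab.eng.blr.redhat.com:/gluster/brick1/add_b2
```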
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1845.html