Bug 1263300 - RFE: gdeploy: add-brick should work on fresh hosts
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: gdeploy
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.3
Assigned To: Nandaja Varma
QA Contact: Jonathan Holloway
Keywords: FutureFeature, ZStream
Depends On:
Blocks: 1299184
Reported: 2015-09-15 09:58 EDT by Anush Shetty
Modified: 2016-06-23 01:28 EDT
CC List: 6 users

See Also:
Fixed In Version: gdeploy-2.0-6
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-06-23 01:28:36 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Anush Shetty 2015-09-15 09:58:26 EDT
Description of problem: When the bricks to be added to a volume through the add-brick operation also need a fresh backend setup, gdeploy fails.

The gdeploy add-brick operation works only when the brick mounts already exist on the hosts.

Version-Release number of selected component (if applicable): gdeploy-1.0-11.el7rhs.noarch


How reproducible: Always

Steps to Reproduce:
1. Create a config file for setting up bricks and doing an add-brick:

[hosts]
rhshdp03.lab.eng.blr.redhat.com
rhshdp04.lab.eng.blr.redhat.com
rhshdp05.lab.eng.blr.redhat.com
rhshdp06.lab.eng.blr.redhat.com

[devices]
/dev/vdc

[mountpoints]
/gluster1/brick1/

[brick_dirs]
/gluster1/brick1/s1

[peer]
manage=probe

[tune-profile]
none

[volume]
action=add-brick
volname=rhshdp03.lab.eng.blr.redhat.com:vol2
replica=yes
replica_count=2
bricks=rhshdp05.lab.eng.blr.redhat.com:/gluster1/brick1/s1,rhshdp06.lab.eng.blr.redhat.com:/gluster1/brick1/s1

2. Run gdeploy: gdeploy -c gluster.conf

Actual results:

Backend setup succeeds, but add-brick fails

TASK: [Creates a Trusted Storage Pool] ****************************************
FATAL: no hosts matched or all hosts have already failed -- aborting


PLAY RECAP ********************************************************************
           to retry, use: --limit @/root/ansible_playbooks.retry

rhshdp03.lab.eng.blr.redhat.com : ok=0    changed=0    unreachable=0    failed=1
rhshdp04.lab.eng.blr.redhat.com : ok=0    changed=0    unreachable=0    failed=1
rhshdp05.lab.eng.blr.redhat.com : ok=11   changed=11   unreachable=0    failed=0
rhshdp06.lab.eng.blr.redhat.com : ok=11   changed=11   unreachable=0    failed=0
Comment 2 Sachidananda Urs 2015-09-15 10:18:02 EDT
This is a configuration error! 

[hosts]
rhshdp03.lab.eng.blr.redhat.com
rhshdp04.lab.eng.blr.redhat.com
rhshdp05.lab.eng.blr.redhat.com
rhshdp06.lab.eng.blr.redhat.com

[devices]
/dev/vdc

[mountpoints]
/gluster1/brick1/

[brick_dirs]
/gluster1/brick1/s1


In the above case, gdeploy tries to create the backend on all the hosts, and it fails.
This is expected behavior.
Comment 3 Nandaja Varma 2015-09-15 11:22:52 EDT
The configuration in the discussion does not make much sense, and gdeploy gets confused by it.
As of now, we do not support combining back-end setup and add-brick in a single run; the two steps need to be done separately.
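
For illustration, doing the two steps separately would look roughly like this (hostnames, paths and volume name are taken from this report; backend.conf is a hypothetical config containing only the [hosts], [devices], [mountpoints] and [brick_dirs] sections for the new hosts):

# 1. Let gdeploy set up the backend on the new hosts only
gdeploy -c backend.conf

# 2. From a host that is already part of the trusted storage pool
gluster peer probe rhshdp05.lab.eng.blr.redhat.com
gluster peer probe rhshdp06.lab.eng.blr.redhat.com
gluster volume add-brick vol2 replica 2 \
    rhshdp05.lab.eng.blr.redhat.com:/gluster1/brick1/s1 \
    rhshdp06.lab.eng.blr.redhat.com:/gluster1/brick1/s1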
Comment 6 Nandaja Varma 2015-09-16 03:35:52 EDT
This is how I would write the configuration for this particular use case:

[hosts]
rhshdp05.lab.eng.blr.redhat.com
rhshdp06.lab.eng.blr.redhat.com

[devices]
/dev/vdc

[mountpoints]
/gluster1/brick1/

[brick_dirs]
/gluster1/brick1/s1

[peer]
manage=probe

[tune-profile]
none

[volume]
action=add-brick
volname=rhshdp03.lab.eng.blr.redhat.com:vol2
replica=yes
replica_count=2
bricks=rhshdp05.lab.eng.blr.redhat.com:/gluster1/brick1/s1,rhshdp06.lab.eng.blr.redhat.com:/gluster1/brick1/s1

Remove the two extra hosts (the ones that do not need a back-end setup) from the 'hosts' section.
Comment 7 Anush Shetty 2015-09-16 04:18:15 EDT
(In reply to Nandaja Varma from comment #6)
> How I would write configuration for this particular use case is:
> 
> [hosts]
> rhshdp05.lab.eng.blr.redhat.com
> rhshdp06.lab.eng.blr.redhat.com
> 
> [devices]
> /dev/vdc
> 
> [mountpoints]
> /gluster1/brick1/
> 
> [brick_dirs]
> /gluster1/brick1/s1
> 
> [peer]
> manage=probe
> 
> [tune-profile]
> none
> 
> [volume]
> action=add-brick
> volname=rhshdp03.lab.eng.blr.redhat.com:vol2
> replica=yes
> replica_count=2
> bricks=rhshdp05.lab.eng.blr.redhat.com:/gluster1/brick1/s1,rhshdp06.lab.eng.
> blr.redhat.com:/gluster1/brick1/s1
> 
> Remove the two extra hosts(the ones that do not need a back-end setup) from
> the 'hosts' section.

This fails here:

TASK: [Creates a Trusted Storage Pool] ****************************************
FATAL: no hosts matched or all hosts have already failed -- aborting


PLAY RECAP ********************************************************************
           to retry, use: --limit @/root/ansible_playbooks.retry

rhshdp03.lab.eng.blr.redhat.com : ok=0    changed=0    unreachable=0    failed=1
rhshdp05.lab.eng.blr.redhat.com : ok=11   changed=11   unreachable=0    failed=0
rhshdp06.lab.eng.blr.redhat.com : ok=11   changed=11   unreachable=0    failed=0
Comment 8 Nandaja Varma 2015-09-23 06:14:35 EDT
This is fixed in the released build.
Comment 9 Sachidananda Urs 2015-11-06 11:36:12 EST
Fixed in branch: https://github.com/gluster/gdeploy/tree/1.1
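
With the fix, a single config that combines the backend sections with a [volume] add-brick section (along the lines of comment 6; exact section names may differ between gdeploy 1.x and 2.x) should run end-to-end on fresh hosts. A rough verification sketch, where add_brick.conf is a hypothetical name for such a combined config:

gdeploy -c add_brick.conf
gluster volume info vol2    # the newly added bricks should now be listed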
Comment 14 errata-xmlrpc 2016-06-23 01:28:36 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1250
