Bug 1263300 - RFE: gdeploy: add-brick should work on fresh hosts
Summary: RFE: gdeploy: add-brick should work on fresh hosts
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Nandaja Varma
QA Contact: Jonathan Holloway
URL:
Whiteboard:
Depends On:
Blocks: 1299184
 
Reported: 2015-09-15 13:58 UTC by Anush Shetty
Modified: 2016-06-23 05:28 UTC
CC: 6 users

Fixed In Version: gdeploy-2.0-6
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-23 05:28:36 UTC
Embargoed:




Links
Red Hat Product Errata RHEA-2016:1250 (SHIPPED_LIVE): gdeploy update for Red Hat Gluster Storage 3.1 update 3. Last updated: 2016-06-23 09:11:59 UTC

Description Anush Shetty 2015-09-15 13:58:26 UTC
Description of problem: When the bricks to be added to a volume through the add-brick operation also require a fresh backend setup, gdeploy fails.

The gdeploy add-brick operation works only when the brick mounts already exist on the hosts.

Version-Release number of selected component (if applicable): gdeploy-1.0-11.el7rhs.noarch


How reproducible: Always

Steps to Reproduce:
1. Create a config file for setting up bricks and doing an add-brick:

[hosts]
rhshdp03.lab.eng.blr.redhat.com
rhshdp04.lab.eng.blr.redhat.com
rhshdp05.lab.eng.blr.redhat.com
rhshdp06.lab.eng.blr.redhat.com

[devices]
/dev/vdc
[mountpoints]
/gluster1/brick1/

[brick_dirs]
/gluster1/brick1/s1

[peer]
manage=probe

[tune-profile]
none

[volume]
action=add-brick
volname=rhshdp03.lab.eng.blr.redhat.com:vol2
replica=yes
replica_count=2
bricks=rhshdp05.lab.eng.blr.redhat.com:/gluster1/brick1/s1,rhshdp06.lab.eng.blr.redhat.com:/gluster1/brick1/s1

2. Run gdeploy: gdeploy -c gluster.conf

Actual results:

Backend setup succeeds, but add-brick fails

TASK: [Creates a Trusted Storage Pool] ****************************************
FATAL: no hosts matched or all hosts have already failed -- aborting


PLAY RECAP ********************************************************************
           to retry, use: --limit @/root/ansible_playbooks.retry

rhshdp03.lab.eng.blr.redhat.com : ok=0    changed=0    unreachable=0    failed=1
rhshdp04.lab.eng.blr.redhat.com : ok=0    changed=0    unreachable=0    failed=1
rhshdp05.lab.eng.blr.redhat.com : ok=11   changed=11   unreachable=0    failed=0
rhshdp06.lab.eng.blr.redhat.com : ok=11   changed=11   unreachable=0    failed=0

Comment 2 Sachidananda Urs 2015-09-15 14:18:02 UTC
This is a configuration error! 

[hosts]
rhshdp03.lab.eng.blr.redhat.com
rhshdp04.lab.eng.blr.redhat.com
rhshdp05.lab.eng.blr.redhat.com
rhshdp06.lab.eng.blr.redhat.com

[devices]                                                         
/dev/vdc                                                                                                                                                        

[mountpoints]
/gluster1/brick1/

[brick_dirs]
/gluster1/brick1/s1


In the above case, gdeploy tries to create the backend on all the hosts, and it fails.
This is expected behavior.

Comment 3 Nandaja Varma 2015-09-15 15:22:52 UTC
The configuration in the discussion does not make much sense, and gdeploy gets confused by it.
As of now, we do not support combining back-end setup and add-brick in a single run. The two steps need to be done separately.
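Until that is supported, the workaround is to run gdeploy twice, with two separate configuration files. A sketch using the hosts and devices from this report (the file names backend.conf and addbrick.conf are illustrative, not anything gdeploy mandates):

First pass (backend.conf) provisions only the fresh hosts:

[hosts]
rhshdp05.lab.eng.blr.redhat.com
rhshdp06.lab.eng.blr.redhat.com

[devices]
/dev/vdc

[mountpoints]
/gluster1/brick1/

[brick_dirs]
/gluster1/brick1/s1

Second pass (addbrick.conf) probes the peers and expands the volume once the bricks exist:

[hosts]
rhshdp03.lab.eng.blr.redhat.com
rhshdp04.lab.eng.blr.redhat.com
rhshdp05.lab.eng.blr.redhat.com
rhshdp06.lab.eng.blr.redhat.com

[peer]
manage=probe

[volume]
action=add-brick
volname=rhshdp03.lab.eng.blr.redhat.com:vol2
replica=yes
replica_count=2
bricks=rhshdp05.lab.eng.blr.redhat.com:/gluster1/brick1/s1,rhshdp06.lab.eng.blr.redhat.com:/gluster1/brick1/s1

Then run them in order: gdeploy -c backend.conf, followed by gdeploy -c addbrick.conf.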

Comment 6 Nandaja Varma 2015-09-16 07:35:52 UTC
This is how I would write the configuration for this particular use case:

[hosts]
rhshdp05.lab.eng.blr.redhat.com
rhshdp06.lab.eng.blr.redhat.com

[devices]
/dev/vdc
[mountpoints]
/gluster1/brick1/

[brick_dirs]
/gluster1/brick1/s1

[peer]
manage=probe

[tune-profile]
none

[volume]
action=add-brick
volname=rhshdp03.lab.eng.blr.redhat.com:vol2
replica=yes
replica_count=2
bricks=rhshdp05.lab.eng.blr.redhat.com:/gluster1/brick1/s1,rhshdp06.lab.eng.blr.redhat.com:/gluster1/brick1/s1

Remove the two extra hosts (the ones that do not need a back-end setup) from the 'hosts' section.
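For reference, the manual steps that the [peer] and [volume] sections above drive would look roughly like the following, run from a node already in the trusted pool and assuming vol2 is a replica 2 volume (standard gluster CLI, shown only as a sketch):

gluster peer probe rhshdp05.lab.eng.blr.redhat.com
gluster peer probe rhshdp06.lab.eng.blr.redhat.com
gluster volume add-brick vol2 replica 2 rhshdp05.lab.eng.blr.redhat.com:/gluster1/brick1/s1 rhshdp06.lab.eng.blr.redhat.com:/gluster1/brick1/s1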

Comment 7 Anush Shetty 2015-09-16 08:18:15 UTC
(In reply to Nandaja Varma from comment #6)
> How I would write configuration for this particular use case is:
> 
> [hosts]
> rhshdp05.lab.eng.blr.redhat.com
> rhshdp06.lab.eng.blr.redhat.com
> 
> [devices]
> /dev/vdc
> 
> [mountpoints]
> /gluster1/brick1/
> 
> [brick_dirs]
> /gluster1/brick1/s1
> 
> [peer]
> manage=probe
> 
> [tune-profile]
> none
> 
> [volume]
> action=add-brick
> volname=rhshdp03.lab.eng.blr.redhat.com:vol2
> replica=yes
> replica_count=2
> bricks=rhshdp05.lab.eng.blr.redhat.com:/gluster1/brick1/s1,rhshdp06.lab.eng.
> blr.redhat.com:/gluster1/brick1/s1
> 
> Remove the two extra hosts(the ones that do not need a back-end setup) from
> the 'hosts' section.

This fails here:

TASK: [Creates a Trusted Storage Pool] ****************************************
FATAL: no hosts matched or all hosts have already failed -- aborting


PLAY RECAP ********************************************************************
           to retry, use: --limit @/root/ansible_playbooks.retry

rhshdp03.lab.eng.blr.redhat.com : ok=0    changed=0    unreachable=0    failed=1
rhshdp05.lab.eng.blr.redhat.com : ok=11   changed=11   unreachable=0    failed=0
rhshdp06.lab.eng.blr.redhat.com : ok=11   changed=11   unreachable=0    failed=0

Comment 8 Nandaja Varma 2015-09-23 10:14:35 UTC
This is fixed in the released build.

Comment 9 Sachidananda Urs 2015-11-06 16:36:12 UTC
Fixed in branch: https://github.com/gluster/gdeploy/tree/1.1

Comment 14 errata-xmlrpc 2016-06-23 05:28:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2016:1250

