Bug 1446092 - Ansible 2.3 specific modifications to fix the error in playbooks
Summary: Ansible 2.3 specific modifications to fix the error in playbooks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.3
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Sachidananda Urs
QA Contact: Manisha Saini
URL:
Whiteboard:
Depends On:
Blocks: 1417151
 
Reported: 2017-04-27 08:58 UTC by Sachidananda Urs
Modified: 2017-09-21 04:49 UTC (History)
5 users

Fixed In Version: gdeploy-2.0.2-5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-21 04:49:50 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2017:2777 (normal, SHIPPED_LIVE): gdeploy bug fix and enhancement update for RHEL7. Last updated 2017-09-21 08:23:08 UTC.

Description Sachidananda Urs 2017-04-27 08:58:42 UTC
Description of problem:

Upon upgrading to Ansible 2.3, a couple of playbooks throw errors if default values are not set. This bug tracks those fixes.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Update to Ansible 2.3, with gdeploy-2.0.1.
2. Create a config with a [backend-setup] section and run it (see the minimal example below).
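
For reference, a minimal config of the kind that triggers the error; the hostname and device path below are placeholders, not from this report:

[hosts]
host1.example.com

[backend-setup]
devices=/dev/sdb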

Actual results:

An error is thrown stating that the `wipefs` variable is not set.

Expected results:

Should run without any issues.

Additional info:

If any such further incompatibilities are noted, please update this bug.

Comment 4 Sachidananda Urs 2017-05-04 06:44:14 UTC
Commit: https://github.com/gluster/gdeploy/commit/8394634 should resolve the issue.
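
For illustration, a minimal sketch of the failure mode and the usual guard. This is not the actual gdeploy task from the commit; the task shown is hypothetical, and only the `wipefs` variable name comes from this report:

# Under Ansible 2.3, referencing an undefined variable in a conditional
# fails the task instead of it being skipped silently.
- name: Wipe filesystem signatures before creating PVs   # hypothetical task
  command: "wipefs -a {{ item }}"
  with_items: "{{ disks }}"                               # `disks` is a placeholder
  when: wipefs == "yes"                                   # errors if `wipefs` was never set

# Guarding with Jinja2's default() filter lets the play run even when
# the config never defines the variable:
- name: Wipe filesystem signatures before creating PVs
  command: "wipefs -a {{ item }}"
  with_items: "{{ disks }}"
  when: (wipefs | default('no')) == "yes"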

Comment 5 Manisha Saini 2017-07-04 10:19:21 UTC
Verified this with 


# rpm -qa | grep gdeploy
gdeploy-2.0.2-12.el7rhgs.noarch

# rpm -qa | grep ansible
ansible-2.3.0.0-3.el7.noarch


Ran the following config file for backend setup, followed by volume creation and Ganesha setup. It works fine with this build:

[hosts]
dhcp42-125.lab.eng.blr.redhat.com
dhcp42-127.lab.eng.blr.redhat.com
dhcp42-129.lab.eng.blr.redhat.com
dhcp42-119.lab.eng.blr.redhat.com
#dhcp42-117.lab.eng.blr.redhat.com
#dhcp42-114.lab.eng.blr.redhat.com
#dhcp42-107.lab.eng.blr.redhat.com
#dhcp42-88.lab.eng.blr.redhat.com



[backend-setup]
devices=/dev/sdb,/dev/sdc,/dev/sdd
vgs=vg1,vg2,vg3
pools=pool1,pool2,pool3
lvs=lv1,lv2,lv3
mountpoints=/gluster/brick1,/gluster/brick2,/gluster/brick3
brick_dirs=/gluster/brick1/1,/gluster/brick2/1,/gluster/brick3/1

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp
services=glusterfs,nlm,nfs,rpc-bind,high-availability,mountd,rquota


[volume1]
action=create
volname=ganeshavol1
transport=tcp
replica_count=2
force=yes
brick_dirs=/gluster/brick1/1,/gluster/brick2/1,/gluster/brick3/1

[volume2]
action=create
volname=ganeshavol4
transport=tcp
replica_count=2
force=yes
brick_dirs=/gluster/brick1/4,/gluster/brick2/4,/gluster/brick3/4

[volume3]
action=create
volname=ganeshavol5
transport=tcp
replica_count=2
force=yes
brick_dirs=/gluster/brick1/5,/gluster/brick2/5,/gluster/brick3/5


[volume4]
action=create
volname=ganeshavol3
transport=tcp
replica_count=2
force=yes
brick_dirs=/gluster/brick1/3,/gluster/brick2/3,/gluster/brick3/3

[nfs-ganesha]
action=create-cluster
ha-name=ganesha-ha-360
cluster-nodes=dhcp42-125.lab.eng.blr.redhat.com,dhcp42-127.lab.eng.blr.redhat.com,dhcp42-129.lab.eng.blr.redhat.com,dhcp42-119.lab.eng.blr.redhat.com
vip=10.70.42.40,10.70.42.41,10.70.42.42,10.70.42.43
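
The config above is run with the standard gdeploy invocation (the filename here is illustrative):

# gdeploy -c backend_ganesha.conf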

Comment 7 errata-xmlrpc 2017-09-21 04:49:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2777

