Bug 1631806
| Field | Value |
|---|---|
| Summary | deployment through colonizer fails due to wrong playbook path in colonizer.py |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | gluster-colonizer |
| Status | CLOSED ERRATA |
| Severity | urgent |
| Priority | unspecified |
| Version | rhgs-3.4 |
| Target Milestone | --- |
| Target Release | RHGS 3.4.z Async Update |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | gluster-colonizer-1.2-3 |
| Doc Type | No Doc Update |
| Reporter | Nag Pavan Chilakam <nchilaka> |
| Assignee | Ramakrishna Reddy Yekulla <rreddy> |
| QA Contact | Nag Pavan Chilakam <nchilaka> |
| CC | dblack, nchilaka, rcyriac, rhs-bugs, rreddy, sanandpa, sankarshan, ssaha |
| Keywords | Regression, ZStream |
| Type | Bug |
| Last Closed | 2018-11-05 11:06:45 UTC |
Description — Nag Pavan Chilakam, 2018-09-21 15:23:47 UTC

Updated the title, since this issue is not isolated to the media config. The problem is in the gluster-colonizer.py script, where a downstream change is required to support a path change for the Ansible playbooks.

Created attachment 1486247 [details]: playbook path downstream fix.patch

This is definitely a build regression: the patch did not apply in the build. I have fixed the issue.

Checked the latest colonizer build, and the playbook path has been corrected:
```
[root@dhcp46-80 ~]# rpm -qa | grep colonizer
gluster-colonizer-1.2-3.el7rhgs.noarch
```
The playbook path is now `playbook_path = g1_path + "playbook/ansible/"`:
```
[root@dhcp46-80 ~]# cat /usr/bin/gluster-colonizer.py | grep path
    g1_path = oem_id['flavor']['g1_path']
    branding_file = "%sbranding.yml" % g1_path
    playbook_path = g1_path + "playbook/ansible/"
    run_ansible_playbook(playbook_path + '/g1-key-dist.yml', False, True, True, True)
    run_ansible_playbook(playbook_path + '/g1-bootstrap.yml')
    flavor_path = g1_path + 'oemid/' + oem_id['flavor']['node']['flavor_path']
    # Add custom module path and import flavor module
    sys.path.insert(0, flavor_path)
    run_ansible_playbook(flavor_path +
    run_ansible_playbook(g1_path + 'oemid/' +
    playbook_args = playbook_path + '/g1-reset.yml --user ansible --extra-vars="{cache_devices: ' + str(cache_devices) + ',arbiter: ' + str('yes' if str(oem_id['flavor']['arbiter_size_factor']) != "None" else 'no') + ',backend_configuration: ' + str( backend_configuration ) + '}"'
    customizationFile = flavor_path + oem_id['flavor']['node']['customization_file_name']
    if not os.path.isfile(customizationFile):
    playbook_args = playbook_path + '/g1-deploy.yml --extra-vars="{cache_devices: ' + str(
    run_ansible_playbook(playbook_path + "/g1-key-dist.yml")
    playbook_args = playbook_path + '/g1-smb-ad.yml --extra-vars="{'
    run_ansible_playbook(playbook_path + '/g1-smb-ad-restart-services.yml', continue_on_fail=True)
    run_ansible_playbook(playbook_path + "/g1-root-pw.yml" + " --extra-vars=\"{root_password_hashed: " + re.sub('\$', '\\\$', root_password_hashed) + "}\"", continue_on_fail=True)
    playbook_args = playbook_path + '/g1-post-install.yml --extra-vars="{'
    playbook_args = playbook_path + '/g1-perf-test.yml --extra-vars="{'
```
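As an aside on the grep output above: `playbook_path` ends with a trailing slash, and the call sites prepend another slash to the playbook file name (e.g. `playbook_path + '/g1-key-dist.yml'`), so the resulting path contains a double slash. A minimal sketch (not taken from the shipped script; the base path below is a hypothetical value chosen for illustration) shows why this is harmless on POSIX filesystems, where consecutive separators collapse to one:

```python
import os

# Hypothetical base path for illustration; the real value comes from
# oem_id['flavor']['g1_path'] in gluster-colonizer.py.
g1_path = "/usr/share/gluster-colonizer/"
playbook_path = g1_path + "playbook/ansible/"

# Plain concatenation, as in the script, yields a doubled separator:
concatenated = playbook_path + '/g1-key-dist.yml'
print(concatenated)
# /usr/share/gluster-colonizer/playbook/ansible//g1-key-dist.yml

# normpath collapses the duplicate slash, so both spellings name the
# same file; os.path.join avoids the doubling in the first place.
print(os.path.normpath(concatenated))
print(os.path.join(playbook_path, 'g1-key-dist.yml'))
# both: /usr/share/gluster-colonizer/playbook/ansible/g1-key-dist.yml
```

The deployment therefore works once `playbook_path` itself points at the right directory, which is exactly what the gluster-colonizer-1.2-3 build fixes.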
Hence, moving the bug to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3465