Bug 1618678
Summary: Installation fails with error: 'dict object' has no attribute 'rgw_hostname'

| Field | Value | Field | Value |
|---|---|---|---|
| Product | [Red Hat Storage] Red Hat Ceph Storage | Reporter | shilpa <smanjara> |
| Component | Ceph-Ansible | Assignee | Sébastien Han <shan> |
| Status | CLOSED ERRATA | QA Contact | shilpa <smanjara> |
| Severity | urgent | Docs Contact | Aron Gunn <agunn> |
| Priority | high | Version | 3.1 |
| Target Milestone | rc | Target Release | 3.1 |
| Keywords | Automation, AutomationBlocker, Regression | Hardware | Unspecified |
| OS | Unspecified | Doc Type | Bug Fix |
| Fixed In Version | RHEL: ceph-ansible-3.1.0-0.1.rc21.el7cp; Ubuntu: ceph-ansible_3.1.0~rc21-2redhat1 | Last Closed | 2018-09-26 18:23:45 UTC |
| Type | Bug | Bug Blocks | 1578730, 1584264 |
| CC | agunn, aschoen, ceph-eng-bugs, fhubik, gael_rehault, gfidente, gmeno, hgurav, hnallurv, jbrier, johfulto, kdreyer, nthomas, sankarshan, shan, tserlin, vashastr | | |

Doc Text:

> .Ceph installation no longer fails when trying to deploy the Object Gateway
> When deploying the Ceph Object Gateway using Ansible, the `rgw_hostname` variable was not being set on the Object Gateway node, but was incorrectly set on the Ceph Monitor node. In this release, the `rgw_hostname` variable is set properly and applied to the Ceph Object Gateway node.
Description
shilpa
2018-08-17 09:57:47 UTC
RGW installation is blocked in a scenario where `ansible_hostname != ansible_fqdn`. The task "ceph-defaults : get current cluster status (if already running)" fails with:

```
"msg": "The conditional check 'ceph_release_num[ceph_release] >= ceph_release_num.luminous' failed. The error was: error while evaluating conditional (ceph_release_num[ceph_release] >= ceph_release_num.luminous): 'dict object' has no attribute u'dummy'

The error appears to have been in '/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml': line 219, column 7, but may
be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- block:
  - name: get current cluster status (if already running)
    ^ here
"
```

Regards,
Vasishta Shastry
QE, Ceph

---

I cannot confirm this issue being fixed as part of the OSP14 deployment, where we have ceph-ansible-3.1.0.0-0.rc19.1.el7.noarch available on the undercloud. I am attaching the whole ansible.log from Mistral, where it is apparent that the last failure can be related.

Created attachment 1477591 [details]
/var/lib/mistral/xyz/ansible.log
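The error in the traceback above can be illustrated with a minimal Python analogue (an illustration only, not ceph-ansible code; the release-number values are the usual Ceph release ordinals, assumed here): when the release fact fails to resolve because of the hostname/FQDN mismatch, `ceph_release` keeps its `dummy` placeholder, and the dict lookup in the conditional has no such key.

```python
# Python analogue of the failing Ansible conditional -- a sketch, not
# ceph-ansible internals. When ansible_hostname != ansible_fqdn, the release
# fact is never resolved, ceph_release stays at its "dummy" placeholder, and
# ceph_release_num[ceph_release] is a lookup on a missing key.
ceph_release_num = {"jewel": 10, "kraken": 11, "luminous": 12, "mimic": 13}

def release_at_least_luminous(ceph_release):
    """Mimic: ceph_release_num[ceph_release] >= ceph_release_num.luminous"""
    if ceph_release not in ceph_release_num:
        # Jinja2 reports this as: 'dict object' has no attribute u'dummy'
        raise KeyError("'dict object' has no attribute %r" % ceph_release)
    return ceph_release_num[ceph_release] >= ceph_release_num["luminous"]
```

The comparison itself is fine; the failure happens one step earlier, when the placeholder release name is used as a key into the release map.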
(For clarity, Filip did some early testing before ceph-ansible-3.1.0rc19 was available in Brew.) Since that build does not fix this for Filip, we'll need more investigation here.

*** Bug 1619736 has been marked as a duplicate of this bug. ***

Is this better now?

It looks like we're still seeing the issue with the 3.1.3 build; attaching ansible.log and inventory.yml.

Created attachment 1483383 [details]
ansible.log
Created attachment 1483384 [details]
inventory.yml
I can confirm this issue is not fixed in ceph-ansible-3.1.0-0.1.rc21.el7cp as part of an OpenStack director 14 deployment. openstack-mistral-executor contains this package, but ceph-ansible fails during Mistral post-deployment with this error:

```
$ tail -n 300 /var/lib/mistral/config-download-latest/ansible.log
...
"Conditional result was False"}

TASK [ceph-config : ensure /etc/ceph exists] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76
Friday 14 September 2018  10:43:50 -0400 (0:00:00.050)       0:01:04.055 ******
changed: [controller-2] => {"changed": true, "gid": 167, "group": "167", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}

TASK [ceph-config : generate ceph.conf configuration file] *********************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84
Friday 14 September 2018  10:43:51 -0400 (0:00:00.457)       0:01:04.513 ******
fatal: [controller-2]: FAILED! => {"msg": "'dict object' has no attribute 'rgw_hostname'"}

PLAY RECAP *********************************************************************
ceph-0        : ok=2    changed=0    unreachable=0    failed=0
ceph-1        : ok=2    changed=0    unreachable=0    failed=0
ceph-2        : ok=2    changed=0    unreachable=0    failed=0
compute-0     : ok=2    changed=0    unreachable=0    failed=0
controller-0  : ok=2    changed=0    unreachable=0    failed=0
controller-1  : ok=2    changed=0    unreachable=0    failed=0
controller-2  : ok=43   changed=4    unreachable=0    failed=1
```

This is an HA deployment (3 controllers) with 3 Ceph nodes, with the Ceph RGW, lowmem, and MDS features enabled. Attaching both /var/lib/mistral/config-download-latest/ansible.log and /var/lib/mistral/config-download-latest/ceph-ansible/inventory.yml.

Created attachment 1483944 [details]
/var/lib/mistral/config-download-latest/ansible.log
Created attachment 1483945 [details]
/var/lib/mistral/config-download-latest/ceph-ansible/inventory.yml
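The `rgw_hostname` failure in the log above has the same shape as the earlier conditional error. A minimal Python analogue (the hostnames and dict layout are hypothetical, not taken from the attached inventory, and this is not the real ceph-ansible template logic): the ceph.conf template reads `rgw_hostname` from each Object Gateway host's facts, but the bug set the fact on the Monitor node instead, so the per-RGW lookup misses.

```python
# Python analogue of the "generate ceph.conf configuration file" failure --
# a sketch under assumed hostnames. The rgw_hostname fact landed in the
# monitor's hostvars instead of the RGW node's, so rendering the RGW section
# for the RGW host raises the familiar error.
hostvars = {
    "controller-2": {"ansible_hostname": "controller-2"},   # RGW node: fact missing
    "controller-0": {"ansible_hostname": "controller-0",    # monitor node: fact was
                     "rgw_hostname": "controller-0"},       # incorrectly set here
}

def rgw_section(host):
    """Build the [client.rgw.<name>] header the template would emit."""
    facts = hostvars[host]
    if "rgw_hostname" not in facts:
        # Ansible surfaces this as: 'dict object' has no attribute 'rgw_hostname'
        raise KeyError("'dict object' has no attribute 'rgw_hostname'")
    return "[client.rgw.%s]" % facts["rgw_hostname"]
```

The fix described in the Doc Text amounts to computing `rgw_hostname` on (and for) the Object Gateway node itself, so that this per-host lookup succeeds on the node that actually runs the gateway.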
Also, if our automation/CI is not wrong, this is still not fixed in the newest ceph-ansible-3.1.3-1.el7cp.noarch.rpm build.

*** Bug 1622505 has been marked as a duplicate of this bug. ***

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819