Bug 1618678 - Installation fails with error: 'dict object' has no attribute 'rgw_hostname'
Summary: Installation fails with error: 'dict object' has no attribute 'rgw_hostname'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: rc
Target Release: 3.1
Assignee: Sébastien Han
QA Contact: shilpa
Docs Contact: Aron Gunn
URL:
Whiteboard:
Duplicates: 1619736
Depends On:
Blocks: 1578730 1584264
 
Reported: 2018-08-17 09:57 UTC by shilpa
Modified: 2018-09-26 18:24 UTC
CC: 17 users

Fixed In Version: RHEL: ceph-ansible-3.1.0-0.1.rc21.el7cp; Ubuntu: ceph-ansible_3.1.0~rc21-2redhat1
Doc Type: Bug Fix
Doc Text:
.Ceph installation no longer fails when trying to deploy the Object Gateway
When deploying the Ceph Object Gateway using Ansible, the `rgw_hostname` variable was not being set on the Object Gateway node, but was incorrectly set on the Ceph Monitor node. In this release, the `rgw_hostname` variable is set properly and applied to the Ceph Object Gateway node.
Clone Of:
Environment:
Last Closed: 2018-09-26 18:23:45 UTC
Embargoed:
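
For illustration of the Doc Text above, a minimal sketch of the kind of per-host fact handling the fix implies. The task below is hypothetical, not the actual ceph-ansible change; the group name 'rgws' and the derivation from ansible_hostname are assumptions:

    # Hypothetical sketch, not the real ceph-ansible task: derive the
    # rgw_hostname fact on each Object Gateway node itself, instead of
    # on the Monitor node that queried the cluster.
    - name: set rgw_hostname on the rgw nodes (illustrative)
      set_fact:
        rgw_hostname: "{{ ansible_hostname }}"
      when: inventory_hostname in groups.get('rgws', [])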


Attachments
/var/lib/mistral/xyz/ansible.log (6.40 MB, text/plain), 2018-08-21 15:03 UTC, Filip Hubík
ansible.log (165.36 KB, text/plain), 2018-09-14 17:55 UTC, Giulio Fidente
inventory.yml (1.86 KB, text/plain), 2018-09-14 17:56 UTC, Giulio Fidente
/var/lib/mistral/config-download-latest/ansible.log (6.42 MB, text/plain), 2018-09-17 09:28 UTC, Filip Hubík
/var/lib/mistral/config-download-latest/ceph-ansible/inventory.yml (1.86 KB, text/plain), 2018-09-17 09:28 UTC, Filip Hubík


Links
GitHub ceph/ceph-ansible pull 3015 (closed): roles: ceph-defaults: Handle missing 'ceph' binary failures (last updated 2020-12-11 12:44:39 UTC)
GitHub ceph/ceph-ansible pull 3054 (closed): defaults: fix rgw_hostname (last updated 2020-12-11 12:44:39 UTC)
Red Hat Product Errata RHBA-2018:2819 (last updated 2018-09-26 18:24:47 UTC)

Description shilpa 2018-08-17 09:57:47 UTC
Description of problem:
The playbook fails at the following task:

TASK [ceph-config : generate ceph configuration file]

This could have been introduced by the following commit:

https://github.com/ceph/ceph-ansible/commit/97cf08e89729db35264d0fb2c55ac01941761b9d
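
For illustration, this error message is characteristic of a Jinja2 dict lookup on a key that was never set: subscripting a dict with a missing key renders as "'dict object' has no attribute '<key>'". A minimal, self-contained reproduction (a hypothetical task, not taken from ceph-ansible):

    # Hypothetical reproduction: node_facts stands in for whatever per-host
    # dict the template reads; rgw_hostname was never set on it, so the
    # lookup fails with the exact message seen in the log.
    - name: reproduce the template failure (illustrative)
      vars:
        node_facts: {}
      debug:
        msg: "{{ node_facts['rgw_hostname'] }}"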

Version-Release number of selected component (if applicable):

3.1.0-0.1.rc18.el7cp 

How reproducible:
Always


Actual results:

INFO:teuthology.orchestra.run.clara012.stdout:TASK [ceph-config : generate ceph configuration file: c1.conf] *****************
INFO:teuthology.orchestra.run.clara012.stdout:task path: /home/ubuntu/ceph-ansible/roles/ceph-config/tasks/main.yml:12
INFO:teuthology.orchestra.run.clara012.stdout:Friday 17 August 2018  06:36:51 +0000 (0:00:00.351)       0:05:00.432 *********

INFO:teuthology.orchestra.run.clara012.stdout:fatal: [clara012.ceph.redhat.com]: FAILED! => {}
 MSG:
'dict object' has no attribute 'rgw_hostname'


INFO:teuthology.orchestra.run.clara012.stdout:PLAY RECAP *********************************************************************
INFO:teuthology.orchestra.run.clara012.stdout:clara012.ceph.redhat.com   : ok=44   changed=12   unreachable=0    failed=1
 INFO:teuthology.orchestra.run.clara012.stdout:pluto004.ceph.redhat.com   : ok=1    changed=0    unreachable=0    failed=0


Expected results:
The installation should succeed; the same cluster configuration works with the rc17 build.

Additional info:

Config parameters:

    ceph_ansible:
      rhbuild: '3.1'
      vars:
        ceph_conf_overrides:
          global:
            mon_max_pg_per_osd: 1024
            osd pool default size: 2
            osd pool default pg num: 64
            osd pool default pgp num: 64
        ceph_origin: distro
        ceph_repository: rhcs
        ceph_stable: true
        ceph_stable_release: luminous
        ceph_stable_rh_storage: true
        ceph_test: true
        journal_size: 1024
        osd_auto_discovery: true
        osd_scenario: collocated
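
A possible interim workaround, assuming the failure is only the unset fact (an untested assumption, not a confirmed fix), would be to pin the variable explicitly for the gateway hosts:

    # Hypothetical, untested workaround: define rgw_hostname in group_vars
    # (e.g. group_vars/rgws.yml) so the ceph.conf template never hits an
    # unset fact. Whether ceph-ansible honors an override here is assumed.
    rgw_hostname: "{{ ansible_hostname }}"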

Comment 4 Vasishta 2018-08-17 14:01:05 UTC
RGW installation is blocked in a scenario where ansible_hostname != ansible_fqdn; the task "ceph-defaults : get current cluster status (if already running)" fails with this message:

"msg": "The conditional check 'ceph_release_num[ceph_release] >= ceph_release_num.luminous' failed. The error was: error while evaluating conditional (ceph_release_num[ceph_release] >= ceph_release_num.luminous): 'dict object' has no attribute u'dummy'

The error appears to have been in '/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml': line 219, column 7, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- block:
    - name: get current cluster status (if already running)
      ^ here
"
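
The u'dummy' attribute error suggests the release fact was still at a placeholder value when the conditional ran; the linked pull request 3015 ("roles: ceph-defaults: Handle missing 'ceph' binary failures") touches the same area. A defensive form of such a conditional could look like the sketch below; this is illustrative only, not the actual facts.yml content:

    # Illustrative sketch: guard the release comparison so an unresolved
    # release name (e.g. a 'dummy' placeholder) skips the task rather
    # than raising an undefined-attribute error.
    - name: get current cluster status (if already running)
      command: ceph status
      changed_when: false
      when: ceph_release_num.get(ceph_release, 0) >= ceph_release_num.get('luminous', 0)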


Regards,
Vasishta Shastry
QE, Ceph

Comment 6 Filip Hubík 2018-08-21 15:02:41 UTC
I cannot confirm this issue is fixed as part of an OSP14 deployment, where we have ceph-ansible-3.1.0.0-0.rc19.1.el7.noarch available on the undercloud (UC).

I am attaching the whole ansible.log from Mistral; it is apparent there that the last failure is likely related.

Comment 7 Filip Hubík 2018-08-21 15:03:32 UTC
Created attachment 1477591 [details]
/var/lib/mistral/xyz/ansible.log

Comment 8 Ken Dreyer (Red Hat) 2018-08-21 18:53:21 UTC
(For clarity, Filip did some early testing before ceph-ansible-3.1.0rc19 was available in Brew.)

Since that build does not fix this for Filip, we'll need more investigation here.

Comment 14 Giulio Fidente 2018-09-03 16:44:30 UTC
*** Bug 1619736 has been marked as a duplicate of this bug. ***

Comment 17 Sébastien Han 2018-09-05 09:00:28 UTC
Is this better now?

Comment 20 Giulio Fidente 2018-09-14 17:54:25 UTC
It looks like we're still seeing the issue with the 3.1.3 build; attaching ansible.log and inventory.yml.

Comment 21 Giulio Fidente 2018-09-14 17:55:15 UTC
Created attachment 1483383 [details]
ansible.log

Comment 22 Giulio Fidente 2018-09-14 17:56:24 UTC
Created attachment 1483384 [details]
inventory.yml

Comment 23 Filip Hubík 2018-09-17 09:26:44 UTC
I can confirm this issue is not fixed in ceph-ansible-3.1.0-0.1.rc21.el7cp as part of an OpenStack director 14 deployment.

openstack-mistral-executor contains this package, but ceph-ansible fails during the Mistral post-deployment step with this error:

$ tail -n 300 /var/lib/mistral/config-download-latest/ansible.log
...
"Conditional result was False\"}\n\nTASK [ceph-config : ensure /etc/ceph exists] ***********************************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76\nFriday 14 September 2018  10:43:50 -0400 (0:00:00.050)       0:01:04.055 ****** \nchanged: [controller-2] => {\"changed\": true, \"gid\": 167, \"group\": \"167\", \"mode\": \"0755\", \"owner\": \"167\", \"path\": \"/etc/ceph\", \"secontext\": \"unconfined_u:object_r:etc_t:s0\", \"size\": 6, \"state\": \"directory\", \"uid\": 167}\n\nTASK [ceph-config : generate ceph.conf configuration file] *********************\ntask path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84\nFriday 14 September 2018  10:43:51 -0400 (0:00:00.457)       0:01:04.513 ****** \nfatal: [controller-2]: FAILED! => {\"msg\": \"'dict object' has no attribute 'rgw_hostname'\"}\n\nPLAY RECAP *********************************************************************\nceph-0                     : ok=2    changed=0    unreachable=0    failed=0   \nceph-1                     : ok=2    changed=0    unreachable=0    failed=0   \nceph-2                     : ok=2    changed=0    unreachable=0    failed=0   \ncompute-0                  : ok=2    changed=0    unreachable=0    failed=0   \ncontroller-0               : ok=2    changed=0    unreachable=0    failed=0   \ncontroller-1               : ok=2    changed=0    unreachable=0    failed=0   \ncontroller-2               : ok=43   changed=4    unreachable=0    failed=1

This is an HA deployment (3 controllers) with 3 Ceph nodes, and the Ceph RGW, lowmem, and MDS features enabled.

Attaching both /var/lib/mistral/config-download-latest/ansible.log and /var/lib/mistral/config-download-latest/ceph-ansible/inventory.yml.
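
As a debugging aid for runs like this, it can help to dump which inventory hosts actually carry the fact before the config template renders. A hypothetical stand-alone playbook (not part of the deployment):

    # Hypothetical debug playbook: print rgw_hostname (or UNSET) for every
    # host, to see on which nodes the fact ended up being defined.
    - hosts: all
      gather_facts: false
      tasks:
        - name: report rgw_hostname per host
          debug:
            msg: "{{ hostvars[inventory_hostname].get('rgw_hostname', 'UNSET') }}"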

Comment 24 Filip Hubík 2018-09-17 09:28:01 UTC
Created attachment 1483944 [details]
/var/lib/mistral/config-download-latest/ansible.log

Comment 25 Filip Hubík 2018-09-17 09:28:36 UTC
Created attachment 1483945 [details]
/var/lib/mistral/config-download-latest/ceph-ansible/inventory.yml

Comment 26 Filip Hubík 2018-09-17 09:31:24 UTC
Also, unless our automation/CI is wrong, this is still not fixed in the newest ceph-ansible-3.1.3-1.el7cp.noarch.rpm build.

Comment 27 Giulio Fidente 2018-09-17 10:14:47 UTC
*** Bug 1622505 has been marked as a duplicate of this bug. ***

Comment 32 errata-xmlrpc 2018-09-26 18:23:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819

