Bug 1618678 - Installation fails with error: 'dict object' has no attribute 'rgw_hostname'
Status: CLOSED ERRATA
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Ansible (Show other bugs)
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: rc
Target Release: 3.1
Assigned To: leseb
QA Contact: shilpa
Docs Contact: Aron Gunn
Keywords: Automation, AutomationBlocker, Regression
Duplicates: 1619736
Depends On:
Blocks: 1578730 1584264
Reported: 2018-08-17 05:57 EDT by shilpa
Modified: 2018-09-26 14:24 EDT
CC: 17 users

See Also:
Fixed In Version: RHEL: ceph-ansible-3.1.0-0.1.rc21.el7cp Ubuntu: ceph-ansible_3.1.0~rc21-2redhat1
Doc Type: Bug Fix
Doc Text:
.Ceph installation no longer fails when trying to deploy the Object Gateway
When deploying the Ceph Object Gateway using Ansible, the `rgw_hostname` variable was not being set on the Object Gateway node, but was incorrectly set on the Ceph Monitor node. In this release, the `rgw_hostname` variable is set properly and applied to the Ceph Object Gateway node.
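A minimal sketch of what the corrected behavior could look like in an Ansible playbook (hypothetical snippet, not the actual ceph-ansible code; the inventory group name `rgws` is an assumption):

```yaml
# Hypothetical sketch only: set rgw_hostname as a fact on the Object
# Gateway nodes themselves, instead of on the Ceph Monitor nodes.
- hosts: rgws          # assumed inventory group name
  gather_facts: true
  tasks:
    - name: set rgw_hostname on each Object Gateway node
      set_fact:
        rgw_hostname: "{{ ansible_hostname }}"
```

Because the fact is registered per Object Gateway host, templating ceph.conf on those hosts can then resolve `rgw_hostname` from their own hostvars.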
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-26 14:23:45 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
/var/lib/mistral/xyz/ansible.log (6.40 MB, text/plain) - 2018-08-21 11:03 EDT, Filip Hubík
ansible.log (165.36 KB, text/plain) - 2018-09-14 13:55 EDT, Giulio Fidente
inventory.yml (1.86 KB, text/plain) - 2018-09-14 13:56 EDT, Giulio Fidente
/var/lib/mistral/config-download-latest/ansible.log (6.42 MB, text/plain) - 2018-09-17 05:28 EDT, Filip Hubík
/var/lib/mistral/config-download-latest/ceph-ansible/inventory.yml (1.86 KB, text/plain) - 2018-09-17 05:28 EDT, Filip Hubík


External Trackers
Tracker ID Priority Status Summary Last Updated
Github ceph/ceph-ansible/pull/3015 None None None 2018-08-20 04:53 EDT
Github ceph/ceph-ansible/pull/3054 None None None 2018-08-22 11:46 EDT
Red Hat Product Errata RHBA-2018:2819 None None None 2018-09-26 14:24 EDT

Description shilpa 2018-08-17 05:57:47 EDT
Description of problem:
Playbook fails at the following stage:

TASK [ceph-config : generate ceph configuration file

This could have been introduced by:

https://github.com/ceph/ceph-ansible/commit/97cf08e89729db35264d0fb2c55ac01941761b9d

Version-Release number of selected component (if applicable):

3.1.0-0.1.rc18.el7cp 

How reproducible:
Always


Actual results:

INFO:teuthology.orchestra.run.clara012.stdout:TASK [ceph-config : generate ceph configuration file: c1.conf] *****************
INFO:teuthology.orchestra.run.clara012.stdout:task path: /home/ubuntu/ceph-ansible/roles/ceph-config/tasks/main.yml:12
INFO:teuthology.orchestra.run.clara012.stdout:Friday 17 August 2018  06:36:51 +0000 (0:00:00.351)       0:05:00.432 *********

INFO:teuthology.orchestra.run.clara012.stdout:fatal: [clara012.ceph.redhat.com]: FAILED! => {}
 MSG:
'dict object' has no attribute 'rgw_hostname'


INFO:teuthology.orchestra.run.clara012.stdout:PLAY RECAP *********************************************************************
INFO:teuthology.orchestra.run.clara012.stdout:clara012.ceph.redhat.com   : ok=44   changed=12   unreachable=0    failed=1
 INFO:teuthology.orchestra.run.clara012.stdout:pluto004.ceph.redhat.com   : ok=1    changed=0    unreachable=0    failed=0


Expected results:
The same cluster configuration works with the rc17 version.

Additional info:

Config parameters:

    ceph_ansible:
      rhbuild: '3.1'
      vars:
        ceph_conf_overrides:
          global:
            mon_max_pg_per_osd: 1024
            osd default pool size: 2
            osd pool default pg num: 64
            osd pool default pgp num: 64
        ceph_origin: distro
        ceph_repository: rhcs
        ceph_stable: true
        ceph_stable_release: luminous
        ceph_stable_rh_storage: true
        ceph_test: true
        journal_size: 1024
        osd_auto_discovery: true
        osd_scenario: collocated
Comment 4 Vasishta 2018-08-17 10:01:05 EDT
RGW installation is blocked in a scenario where ansible_hostname != ansible_fqdn, in the task "ceph-defaults : get current cluster status (if already running)", with the message -

"msg": "The conditional check 'ceph_release_num[ceph_release] >= ceph_release_num.luminous' failed. The error was: error while evaluating conditional (ceph_release_num[ceph_release] >= ceph_release_num.luminous): 'dict object' has no attribute u'dummy'

The error appears to have been in '/usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml': line 219, column 7, but may be elsewhere in the file depending on the exact syntax problem.

The offending line appears to be:

- block:
    - name: get current cluster status (if already running)
      ^ here
"
}
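The failure mode can be illustrated with a small Python sketch (an illustration only, not ceph-ansible code): the template looks up `rgw_hostname` in the facts of the Object Gateway host, but the fact was only registered for the monitor host, so the lookup fails with the familiar message.

```python
# Illustration only: simulate Ansible hostvars where rgw_hostname was
# registered on the monitor node instead of the Object Gateway node.
hostvars = {
    "mon0": {"ansible_hostname": "mon0", "rgw_hostname": "mon0"},  # wrong place
    "rgw0": {"ansible_hostname": "rgw0"},                          # fact missing
}

def lookup_rgw_hostname(host):
    """Mimic Jinja2's error wording when a dict key is absent."""
    facts = hostvars[host]
    if "rgw_hostname" not in facts:
        raise AttributeError("'dict object' has no attribute 'rgw_hostname'")
    return facts["rgw_hostname"]
```

With the fix, the fact is set on the Object Gateway host itself, so the lookup succeeds there.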


Regards,
Vasishta Shastry
QE, Ceph
Comment 6 Filip Hubík 2018-08-21 11:02:41 EDT
I cannot confirm that this issue is fixed as part of the OSP14 deployment, where we have ceph-ansible-3.1.0.0-0.rc19.1.el7.noarch available on the undercloud.

I am attaching the whole ansible.log from Mistral, where it is apparent that the last failure is related.
Comment 7 Filip Hubík 2018-08-21 11:03 EDT
Created attachment 1477591 [details]
/var/lib/mistral/xyz/ansible.log
Comment 8 Ken Dreyer (Red Hat) 2018-08-21 14:53:21 EDT
(For clarity, Filip did some early testing before ceph-ansible-3.1.0rc19 was available in Brew.)

Since that build does not fix this for Filip, we'll need more investigation here.
Comment 14 Giulio Fidente 2018-09-03 12:44:30 EDT
*** Bug 1619736 has been marked as a duplicate of this bug. ***
Comment 17 leseb 2018-09-05 05:00:28 EDT
Is this better now?
Comment 20 Giulio Fidente 2018-09-14 13:54:25 EDT
It looks like we're still seeing the issue with the 3.1.3 build; attaching ansible.log and inventory.yml.
Comment 21 Giulio Fidente 2018-09-14 13:55 EDT
Created attachment 1483383 [details]
ansible.log
Comment 22 Giulio Fidente 2018-09-14 13:56 EDT
Created attachment 1483384 [details]
inventory.yml
Comment 23 Filip Hubík 2018-09-17 05:26:44 EDT
I can confirm that this issue is not fixed in ceph-ansible-3.1.0-0.1.rc21.el7cp as part of an OpenStack director 14 deployment.

openstack-mistral-executor contains this package, but ceph-ansible still fails during the Mistral post-deployment step with this error:

$ tail -n 300 /var/lib/mistral/config-download-latest/ansible.log
...
"Conditional result was False"}

TASK [ceph-config : ensure /etc/ceph exists] ***********************************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:76
Friday 14 September 2018  10:43:50 -0400 (0:00:00.050)       0:01:04.055 ******
changed: [controller-2] => {"changed": true, "gid": 167, "group": "167", "mode": "0755", "owner": "167", "path": "/etc/ceph", "secontext": "unconfined_u:object_r:etc_t:s0", "size": 6, "state": "directory", "uid": 167}

TASK [ceph-config : generate ceph.conf configuration file] *********************
task path: /usr/share/ceph-ansible/roles/ceph-config/tasks/main.yml:84
Friday 14 September 2018  10:43:51 -0400 (0:00:00.457)       0:01:04.513 ******
fatal: [controller-2]: FAILED! => {"msg": "'dict object' has no attribute 'rgw_hostname'"}

PLAY RECAP *********************************************************************
ceph-0                     : ok=2    changed=0    unreachable=0    failed=0
ceph-1                     : ok=2    changed=0    unreachable=0    failed=0
ceph-2                     : ok=2    changed=0    unreachable=0    failed=0
compute-0                  : ok=2    changed=0    unreachable=0    failed=0
controller-0               : ok=2    changed=0    unreachable=0    failed=0
controller-1               : ok=2    changed=0    unreachable=0    failed=0
controller-2               : ok=43   changed=4    unreachable=0    failed=1

This is an HA deployment (3 controllers) with 3 Ceph nodes, and with the Ceph RGW, lowmem, and MDS features enabled.

Attaching both /var/lib/mistral/config-download-latest/ansible.log and /var/lib/mistral/config-download-latest/ceph-ansible/inventory.yml.
Comment 24 Filip Hubík 2018-09-17 05:28 EDT
Created attachment 1483944 [details]
/var/lib/mistral/config-download-latest/ansible.log
Comment 25 Filip Hubík 2018-09-17 05:28 EDT
Created attachment 1483945 [details]
/var/lib/mistral/config-download-latest/ceph-ansible/inventory.yml
Comment 26 Filip Hubík 2018-09-17 05:31:24 EDT
Also, if our automation/CI is not wrong, this is still not fixed in the newest ceph-ansible-3.1.3-1.el7cp.noarch.rpm build.
Comment 27 Giulio Fidente 2018-09-17 06:14:47 EDT
*** Bug 1622505 has been marked as a duplicate of this bug. ***
Comment 32 errata-xmlrpc 2018-09-26 14:23:45 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819
