Bug 1416010 - [ceph-ansible] config file generation fails on ipv6 nodes with message 'dict object' has no attribute 'address'
Status: CLOSED ERRATA
Product: Red Hat Storage Console
Classification: Red Hat
Component: ceph-ansible
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 2
Assigned To: leseb
QA Contact: ceph-qe-bugs
Depends On:
Blocks:
Reported: 2017-01-24 06:01 EST by Vasishta
Modified: 2017-03-14 11:54 EDT
CC List: 10 users

See Also:
Fixed In Version: ceph-ansible-2.1.4-1.el7scon
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-14 11:54:02 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
config file generation fails - ceph-ansible log (25.82 KB, text/plain)
2017-01-24 06:01 EST, Vasishta
log file of ansible-playbook with -vv option (140.84 KB, text/plain)
2017-02-02 19:39 EST, Vasishta


External Trackers
Red Hat Product Errata RHSA-2017:0515 (priority: normal, status: SHIPPED_LIVE): Important: ansible and ceph-ansible security, bug fix, and enhancement update (last updated 2017-04-18 17:12:31 EDT)

Description Vasishta 2017-01-24 06:01:40 EST
Created attachment 1243876 [details]
config file generation fails - ceph-ansible log

Description of problem:
During cluster creation using an ISO, config file generation fails with the message 'dict object' has no attribute 'address' on nodes with only ipv6 enabled.


Version-Release number of selected component (if applicable):
ceph-ansible-2.1.3-1.el7scon.noarch

How reproducible:
always

Steps to Reproduce:
1. With all the configuration required to bring a cluster up, disable ipv4 on all nodes.
2. Set the monitor address in the /etc/ansible/hosts inventory file (see the example entry after these steps).
3. Run ansible-playbook site.yml
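
For illustration, a per-host monitor_address entry in the /etc/ansible/hosts inventory could look like the following (hostnames match the test nodes, but the ipv6 addresses are placeholders, not the real ones from this cluster):

    [mons]
    magna030 monitor_address=2620:52:0:880::30
    magna033 monitor_address=2620:52:0:880::33
    magna042 monitor_address=2620:52:0:880::42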

Actual results [ipv4 disabled for magna042 and magna033]:

TASK [ceph.ceph-common : generate ceph configuration file: 22.conf] ************
fatal: [magna033]: FAILED! => {"failed": true, "msg": "{{ ansible_default_ipv4.address }}: 'dict object' has no attribute 'address'"}
fatal: [magna042]: FAILED! => {"failed": true, "msg": "{{ ansible_default_ipv4.address }}: 'dict object' has no attribute 'address'"}
changed: [magna030]
 --------------------------------------------------------------

TASK [ceph-mon : wait for 22.client.admin.keyring exists] **********************
fatal: [magna030]: FAILED! => {"changed": false, "elapsed": 300, "failed": true, "msg": "Timeout when waiting for file /etc/ceph/22.client.admin.keyring"}


Expected results:
Cluster should get created

Additional info:
Comment 2 Andrew Schoen 2017-01-24 07:27:56 EST
Vasishta,

Did you try this with `monitor_interface` set or `monitor_address`? I do not believe we currently support ipv6 when using `monitor_interface`. If you were using `monitor_interface` could you please try again with `monitor_address`?

Sharing your hosts file and group vars would help with debugging this, would you please do that as well?
Comment 3 Vasishta 2017-01-24 07:52:14 EST
Hi Andrew,

I didn't try 'monitor_interface' as it's not supported. Initially I tried mentioning only one 'monitor_address'; later I tried mentioning the addresses of all three monitors separated by ','. After these two attempts I tried mentioning the address in /etc/ansible/hosts.
Comment 6 Andrew Schoen 2017-01-24 10:14:33 EST
Ok, I think I know what's going on here. Even if you use `monitor_address` and set it to an ipv6 address, the template is hardcoded to look for ipv4 addresses in the facts ansible gathers from the system, which causes the template to fail to render.

I've opened https://github.com/ceph/ceph-ansible/pull/1247 to address this upstream. My proposed solution would require you to set another config option when using ipv6, `ip_version: ipv6`.
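
For illustration, with that change in place the ipv6 case would be enabled by a line like this in group_vars/all.yml:

    # tell ceph-ansible to use ipv6 facts when rendering ceph.conf
    ip_version: ipv6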
Comment 7 Ken Dreyer (Red Hat) 2017-01-24 11:40:45 EST
To fix this issue, we're going to backport this to the upstream stable-2.1 branch and ship ceph-ansible v2.1.4.
Comment 11 Vasishta 2017-02-02 12:49:26 EST
Hi,

I'm able to see the same issue in ceph-ansible-2.1.6-1.el7scon.noarch

$ sudo rpm -qa |grep ceph-ansible
ceph-ansible-2.1.6-1.el7scon.noarch

Moving back to ASSIGNED state. 
Please let me know if there are any concerns.


Regards,
Vasishta
Comment 12 Andrew Schoen 2017-02-02 12:58:26 EST
Vasishta,

Could you please confirm that you've set "ip_version: ipv6" in your group_vars/all.yml?

Thanks,
Andrew
Comment 13 Vasishta 2017-02-02 13:44:54 EST
Hi Andrew,

Sorry, I didn't set "ip_version: ipv6".

However setting "ip_version: ipv6" in group_var/all.yml did not make any difference apart from config file generated in single ipv4 node having an extra line "ms bind ipv6" in global section.

Please let me know if there is anything that I should try.

Regards,
Vasishta
Comment 14 Andrew Schoen 2017-02-02 13:51:53 EST
Vasishta,

Did setting that avoid the error in the 'generate ceph configuration file' task? Could you share a ceph.conf from one of the nodes and the location and contents of the inventory and group_vars directory you used?

Thanks,
Andrew
Comment 16 Andrew Schoen 2017-02-02 15:28:12 EST
Vasishta,

You have monitor_address set in /etc/ansible/group_vars/all.yml. This isn't necessary because you have it set per host in /etc/ansible/hosts. Can you comment that out and try again?
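
For illustration, after that change group_vars/all.yml would look roughly like this (the address is a placeholder):

    # monitor_address is already set per host in /etc/ansible/hosts,
    # so the group-wide value stays commented out
    #monitor_address: "2620:52:0:880::30"
    ip_version: ipv6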

This time please use the -vv flag and share your log here.

Thanks,
Andrew
Comment 17 Vasishta 2017-02-02 19:39 EST
Created attachment 1247321 [details]
log file of ansible-playbook with -vv option

Andrew,

Sorry for that duplication.
I tried again after commenting it out and the result was the same.

Please find the ansible log in the latest attachment. I've used the -vv option as requested.


Regards,
Vasishta
Comment 18 Andrew Schoen 2017-02-02 20:39:03 EST
Vasishta,

The error in the log is similar to the original one, but not identical. Could you set 'radosgw_civetweb_bind_ip' to something besides the default of "{{ ansible_default_ipv4.address }}" in /etc/ansible/group_vars/all.yml and try again? Because you have disabled ipv4 on these nodes, that default value is no longer valid and throws the error you see.

Thanks,
Andrew
Comment 19 Vasishta 2017-02-08 01:55:23 EST
Hi Andrew,

Setting 'radosgw_civetweb_bind_ip' to a non-default value (ansible_default_ipv6.address) worked for me. But I'm not sure whether that value was appropriate, as I couldn't find any variable by the name ansible_default_ipv6.address anywhere.

Can you please suggest the appropriate value to set?

As setting 'ip_version' & 'radosgw_civetweb_bind_ip' needs to be documented, I'll change this bug to a doc bug once I know the appropriate value to set for radosgw_civetweb_bind_ip.


Thanks,
Vasishta
Comment 20 Andrew Schoen 2017-02-08 10:14:24 EST
(In reply to Vasishta from comment #19)
> Hi Andrew,
> 
> Setting 'radosgw_civetweb_bind_ip' a non default value
> (ansible_default_ipv6.address) worked for me. But I'm not sure whether that
> value was appropriate or not as I couldn't find any variable by name
> ansible_default_ipv6.address anywhere. 

ansible_default_ipv6.address was correct in this case. It gives you the default ipv6 address that ansible has discovered for the node. The other option would be to define 'radosgw_civetweb_bind_ip' per host in /etc/ansible/hosts.
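
For illustration (addresses and host assignment are placeholders), that is either in /etc/ansible/group_vars/all.yml:

    # bind the RGW civetweb frontend to the node's default ipv6 address
    radosgw_civetweb_bind_ip: "{{ ansible_default_ipv6.address }}"

or per host in /etc/ansible/hosts:

    [rgws]
    magna042 radosgw_civetweb_bind_ip=2620:52:0:880::42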

Are you still seeing the original issue here, the failed rendering of the ceph.conf template?

Thanks,
Andrew
Comment 21 Gregory Meno 2017-02-08 10:40:49 EST
Since we're still working on checking the config with QE, I'm going to move the state to ON_QA to reflect that.
Comment 23 Vasishta 2017-02-08 10:50:37 EST
As I mentioned in Comment 19, it's working fine; I'm not seeing the original issue.
So I'm moving this bug to the verified state and filing a separate doc bug.

Regards,
Vasishta
Comment 25 errata-xmlrpc 2017-03-14 11:54:02 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:0515
