Bug 1416010 - [ceph-ansible] config file generation fails on ipv6 nodes with message 'dict object' has no attribute 'address'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: ceph-ansible
Version: 2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 2
Assignee: Sébastien Han
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-01-24 11:01 UTC by Vasishta
Modified: 2017-03-14 15:54 UTC (History)
CC: 10 users

Fixed In Version: ceph-ansible-2.1.4-1.el7scon
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-14 15:54:02 UTC
Embargoed:


Attachments
config file generation fails - ceph-ansible log (25.82 KB, text/plain)
2017-01-24 11:01 UTC, Vasishta
log file of ansible-playbook with -vv option (140.84 KB, text/plain)
2017-02-03 00:39 UTC, Vasishta


Links:
Red Hat Product Errata RHSA-2017:0515 (normal, SHIPPED_LIVE): Important: ansible and ceph-ansible security, bug fix, and enhancement update. Last Updated: 2017-04-18 21:12:31 UTC

Description Vasishta 2017-01-24 11:01:40 UTC
Created attachment 1243876 [details]
config file generation fails - ceph-ansible log

Description of problem:
During cluster creation using an ISO, config file generation fails with the message "'dict object' has no attribute 'address'" on nodes with only IPv6 enabled.


Version-Release number of selected component (if applicable):
ceph-ansible-2.1.3-1.el7scon.noarch

How reproducible:
always

Steps to Reproduce:
1. With all the configuration required to get a cluster up, disable IPv4 on all nodes.
2. Specify the monitor address in the /etc/ansible/hosts inventory file (see the sample entry below).
3. Run ansible-playbook site.yml
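
A rough sketch of a per-host inventory entry for step 2 (the host name and IPv6 address are placeholders, not taken from this setup):

  # /etc/ansible/hosts
  [mons]
  mon-node-1 monitor_address=2001:db8::21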

Actual results (IPv4 disabled for magna042 and magna033):

TASK [ceph.ceph-common : generate ceph configuration file: 22.conf] ************
fatal: [magna033]: FAILED! => {"failed": true, "msg": "{{ ansible_default_ipv4.address }}: 'dict object' has no attribute 'address'"}
fatal: [magna042]: FAILED! => {"failed": true, "msg": "{{ ansible_default_ipv4.address }}: 'dict object' has no attribute 'address'"}
changed: [magna030]
 --------------------------------------------------------------

TASK [ceph-mon : wait for 22.client.admin.keyring exists] **********************
fatal: [magna030]: FAILED! => {"changed": false, "elapsed": 300, "failed": true, "msg": "Timeout when waiting for file /etc/ceph/22.client.admin.keyring"}


Expected results:
Cluster should get created

Additional info:

Comment 2 Andrew Schoen 2017-01-24 12:27:56 UTC
Vasishta,

Did you try this with `monitor_interface` set or `monitor_address`? I do not believe we currently support ipv6 when using `monitor_interface`. If you were using `monitor_interface` could you please try again with `monitor_address`?

Sharing your hosts file and group vars would help with debugging this, would you please do that as well?

Comment 3 Vasishta 2017-01-24 12:52:14 UTC
Hi Andrew,

I didn't try 'monitor_interface' as it's not supported. Initially I tried specifying only one 'monitor_address'; later I tried specifying the addresses of all three monitors separated by ','. After these two attempts I tried specifying the address in /etc/ansible/hosts.

Comment 6 Andrew Schoen 2017-01-24 15:14:33 UTC
Ok, I think I know what's going on here. Even if you use `monitor_address` and set it to an IPv6 address, the template is hardcoded to look for IPv4 addresses in the facts ansible gathers from the system, which causes the template to fail to render.

I've opened https://github.com/ceph/ceph-ansible/pull/1247 to address this upstream. My proposed solution would require you to set another config option when using ipv6, `ip_version: ipv6`.
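
A minimal sketch of that proposed option in group_vars/all.yml (based on the comment above and the referenced PR; exact placement in your all.yml may differ):

  # group_vars/all.yml
  ip_version: ipv6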

Comment 7 Ken Dreyer (Red Hat) 2017-01-24 16:40:45 UTC
To fix this issue, we're going to backport this to the upstream stable-2.1 branch and ship ceph-ansible v2.1.4.

Comment 11 Vasishta 2017-02-02 17:49:26 UTC
Hi,

I'm able to see the same issue in ceph-ansible-2.1.6-1.el7scon.noarch

$ sudo rpm -qa |grep ceph-ansible
ceph-ansible-2.1.6-1.el7scon.noarch

Moving back to ASSIGNED state. 
Please let me know if there are any concerns.


Regards,
Vasishta

Comment 12 Andrew Schoen 2017-02-02 17:58:26 UTC
Vasishta,

Could you please confirm that you've set "ip_version: ipv6" in your group_vars/all.yml?

Thanks,
Andrew

Comment 13 Vasishta 2017-02-02 18:44:54 UTC
Hi Andrew,

Sorry, I didn't set "ip_version: ipv6".

However, setting "ip_version: ipv6" in group_vars/all.yml did not make any difference, apart from the config file generated on the single IPv4 node having an extra line, "ms bind ipv6", in the global section.
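
For context, that extra line presumably corresponds to something like the following in the generated ceph.conf (the exact value is an assumption; the comment above only mentions the option name):

  [global]
  ms bind ipv6 = true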

Please let me know if there is anything that I should try.

Regards,
Vasishta

Comment 14 Andrew Schoen 2017-02-02 18:51:53 UTC
Vasishta,

Did setting that avoid the error in the 'generate ceph configuration file' task? Could you share a ceph.conf from one of the nodes and the location and contents of the inventory and group_vars directory you used?

Thanks,
Andrew

Comment 16 Andrew Schoen 2017-02-02 20:28:12 UTC
Vasishta,

You have monitor_address set in /etc/ansible/group_vars/all.yml. This isn't necessary because you have it set per host in /etc/ansible/hosts. Can you comment that out and try again?
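
A rough sketch of the suggested change (the address is a placeholder):

  # /etc/ansible/group_vars/all.yml
  # monitor_address: 2001:db8::21    (commented out)

while keeping monitor_address set per host in /etc/ansible/hosts as it already is.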

This time please use the -vv flag and share your log here.

Thanks,
Andrew

Comment 17 Vasishta 2017-02-03 00:39:23 UTC
Created attachment 1247321 [details]
log file of ansible-playbook with -vv option

Andrew,

Sorry for that duplication.
I tried after commenting it out and the result was the same.

Please find the ansible log in the latest attachment. I've used the -vv option as requested.


Regards,
Vasishta

Comment 18 Andrew Schoen 2017-02-03 01:39:03 UTC
Vasishta,

The error in the log is similar but different. Could you set 'radosgw_civetweb_bind_ip' to something besides the default of "{{ ansible_default_ipv4.address }}" in /etc/ansible/group_vars/all.yml and try again? Because you have disabled IPv4 on these nodes, that default is no longer valid and throws the error you see.
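
A minimal sketch of such an override in /etc/ansible/group_vars/all.yml, using the node's default IPv6 address fact (this particular value is the one later confirmed in comments 19 and 20):

  radosgw_civetweb_bind_ip: "{{ ansible_default_ipv6.address }}"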

Thanks,
Andrew

Comment 19 Vasishta 2017-02-08 06:55:23 UTC
Hi Andrew,

Setting 'radosgw_civetweb_bind_ip' to a non-default value (ansible_default_ipv6.address) worked for me. But I'm not sure whether that value was appropriate, as I couldn't find any variable named ansible_default_ipv6.address anywhere.

Can you please suggest the appropriate value to set?

As setting 'ip_version' and 'radosgw_civetweb_bind_ip' needs to be documented, I'll change this to a doc bug once I know the appropriate value to set for radosgw_civetweb_bind_ip.


Thanks,
Vasishta

Comment 20 Andrew Schoen 2017-02-08 15:14:24 UTC
(In reply to Vasishta from comment #19)
> Hi Andrew,
> 
> Setting 'radosgw_civetweb_bind_ip' to a non-default value
> (ansible_default_ipv6.address) worked for me. But I'm not sure whether that
> value was appropriate, as I couldn't find any variable named
> ansible_default_ipv6.address anywhere. 

ansible_default_ipv6.address was correct in this case. It gives you the default IPv6 address of the node as discovered by ansible. The other option would be to define 'radosgw_civetweb_bind_ip' per host in /etc/ansible/hosts.
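
For the per-host alternative, a hypothetical /etc/ansible/hosts entry could look like this (group name, host name, and address are placeholders):

  [rgws]
  rgw-node-1 radosgw_civetweb_bind_ip=2001:db8::30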

Are you still seeing the original issue here, the failed rendering of the ceph.conf template?

Thanks,
Andrew

Comment 21 Christina Meno 2017-02-08 15:40:49 UTC
Since we're still working on checking the config with QE, I'm going to move the state to ON_QA to reflect that.

Comment 23 Vasishta 2017-02-08 15:50:37 UTC
As I mentioned in Comment 19, it's working fine and I'm not seeing the original issue.
So I'm moving this bug to the VERIFIED state and filing a separate doc bug.

Regards,
Vasishta

Comment 25 errata-xmlrpc 2017-03-14 15:54:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:0515

