Created attachment 1243876 [details]
config file generation fails - ceph-ansible log

Description of problem:
During cluster creation using the ISO, config file generation fails with the message "'dict object' has no attribute 'address'" on nodes with only IPv6 enabled.

Version-Release number of selected component (if applicable):
ceph-ansible-2.1.3-1.el7scon.noarch

How reproducible:
Always

Steps to Reproduce:
1. With all required configuration in place to bring a cluster up, disable IPv4 on all nodes.
2. Set the monitor address in the /etc/ansible/hosts file.
3. Run ansible-playbook site.yml

Actual results [IPv4 disabled for magna042 and magna033]:

TASK [ceph.ceph-common : generate ceph configuration file: 22.conf] ************
fatal: [magna033]: FAILED! => {"failed": true, "msg": "{{ ansible_default_ipv4.address }}: 'dict object' has no attribute 'address'"}
fatal: [magna042]: FAILED! => {"failed": true, "msg": "{{ ansible_default_ipv4.address }}: 'dict object' has no attribute 'address'"}
changed: [magna030]
--------------------------------------------------------------
TASK [ceph-mon : wait for 22.client.admin.keyring exists] **********************
fatal: [magna030]: FAILED! => {"changed": false, "elapsed": 300, "failed": true, "msg": "Timeout when waiting for file /etc/ceph/22.client.admin.keyring"}

Expected results:
The cluster should get created.

Additional info:
Vasishta,

Did you try this with `monitor_interface` set, or with `monitor_address`? I do not believe we currently support ipv6 when using `monitor_interface`. If you were using `monitor_interface`, could you please try again with `monitor_address`? Sharing your hosts file and group vars would help with debugging this; could you please attach those as well?
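For reference, a per-host `monitor_address` setting in the ansible inventory would look roughly like the sketch below. The hostnames come from this report, but the 2001:db8:: addresses are documentation placeholders, not this cluster's real addresses:

```ini
# /etc/ansible/hosts -- illustrative sketch only; the 2001:db8:: values
# are placeholder addresses from the documentation prefix
[mons]
magna030 monitor_address=2001:db8::30
magna033 monitor_address=2001:db8::33
magna042 monitor_address=2001:db8::42
```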
Hi Andrew,

I didn't try 'monitor_interface' as it's not supported. Initially I tried specifying only one 'monitor_address'; later I tried specifying the addresses of all three monitors separated by ','. After these two attempts I tried specifying the address in /etc/ansible/hosts.
Ok, I think I know what's going on here. Even if you use `monitor_address` and set it to an ipv6 address, the template is hardcoded to look up ipv4 addresses from the facts ansible gathers from the system, which causes the template to fail to render. I've opened https://github.com/ceph/ceph-ansible/pull/1247 to address this upstream. My proposed solution would require you to set another config option when using ipv6: `ip_version: ipv6`.
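As a sketch of the option proposed above (subject to the final form the pull request takes), the group_vars change would be roughly:

```yaml
# /etc/ansible/group_vars/all.yml -- sketch of the proposed ipv6 switch;
# it tells the ceph.conf template to use ipv6 facts instead of ipv4
ip_version: ipv6
```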
To fix this issue, we're going to backport this to the upstream stable-2.1 branch and ship ceph-ansible v2.1.4.
Hi,

I'm able to see the same issue in ceph-ansible-2.1.6-1.el7scon.noarch.

$ sudo rpm -qa | grep ceph-ansible
ceph-ansible-2.1.6-1.el7scon.noarch

Moving back to the ASSIGNED state. Please let me know if there are any concerns.

Regards,
Vasishta
Vasishta,

Could you please confirm that you've set "ip_version: ipv6" in your group_vars/all.yml?

Thanks,
Andrew
Hi Andrew,

Sorry, I didn't set "ip_version: ipv6". However, setting "ip_version: ipv6" in group_vars/all.yml did not make any difference, apart from the config file generated on the single ipv4 node having an extra line "ms bind ipv6" in the global section. Please let me know if there is anything else that I should try.

Regards,
Vasishta
Vasishta,

Did setting that avoid the error in the 'generate ceph configuration file' task? Could you share a ceph.conf from one of the nodes, along with the location and contents of the inventory and group_vars directory you used?

Thanks,
Andrew
Vasishta,

You have monitor_address set in /etc/ansible/group_vars/all.yml. This isn't necessary because you have it set per host in /etc/ansible/hosts. Can you comment that out and try again? This time please use the -vv flag and share your log here.

Thanks,
Andrew
Created attachment 1247321 [details]
log file of ansible-playbook with -vv option

Andrew,

Sorry for that duplication. I tried after commenting it out, and the result was the same. Please find the ansible log in the latest attachment. I've used the -vv option as requested.

Regards,
Vasishta
Vasishta,

The error in the log is similar, but not identical. Could you set 'radosgw_civetweb_bind_ip' to something other than the default of "{{ ansible_default_ipv4.address }}" in /etc/ansible/group_vars/all.yml and try again? Because you have disabled ipv4 on these nodes, this value is no longer valid and throws the error you see.

Thanks,
Andrew
Hi Andrew,

Setting 'radosgw_civetweb_bind_ip' to a non-default value (ansible_default_ipv6.address) worked for me. But I'm not sure whether that value was appropriate, as I couldn't find any variable named ansible_default_ipv6.address anywhere. Can you please suggest what the appropriate value to set is?

As setting 'ip_version' and 'radosgw_civetweb_bind_ip' needs to be documented, I'll change this bug to a doc bug once I know the appropriate value to set for radosgw_civetweb_bind_ip.

Thanks,
Vasishta
(In reply to Vasishta from comment #19)
> Hi Andrew,
>
> Setting 'radosgw_civetweb_bind_ip' to a non-default value
> (ansible_default_ipv6.address) worked for me. But I'm not sure whether that
> value was appropriate or not, as I couldn't find any variable named
> ansible_default_ipv6.address anywhere.

ansible_default_ipv6.address was correct in this case. It gives you the default ipv6 address of the node, as discovered by ansible's fact gathering. The other option would be to define 'radosgw_civetweb_bind_ip' per host in /etc/ansible/hosts.

Are you still seeing the original issue here, the failed rendering of the ceph.conf template?

Thanks,
Andrew
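Summarizing the working configuration discussed above as a sketch (the group_vars placement shown here is one of the two options; the per-host address in the note below is a documentation placeholder):

```yaml
# /etc/ansible/group_vars/all.yml -- sketch; the value is resolved per
# node at runtime from ansible's gathered ipv6 facts, so one line covers
# every rgw host
radosgw_civetweb_bind_ip: "{{ ansible_default_ipv6.address }}"
```

Alternatively, a per-host line in /etc/ansible/hosts such as `magna030 radosgw_civetweb_bind_ip=2001:db8::30` (placeholder address) would pin the value explicitly for each host.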
Since we're still working on checking the config with QE, I'm going to move the state to ON_QA to reflect that.
As I mentioned in Comment 19, it's working fine and I'm not seeing the original issue. So I am moving this bug to the verified state and filing a separate doc bug.

Regards,
Vasishta
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2017:0515