Bug 1655131

Summary: ceph-ansible generating wrong ceph.conf for rgw node in an IPv6 environment
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Sidhant Agrawal <sagrawal>
Component: Ceph-Ansible
Assignee: Noah Watkins <nwatkins>
Status: CLOSED ERRATA
QA Contact: Sidhant Agrawal <sagrawal>
Severity: urgent
Priority: urgent
Version: 3.2
CC: aschoen, ceph-eng-bugs, gabrioux, gmeno, hnallurv, nthomas, nwatkins, sankarshan, tserlin
Target Milestone: rc
Keywords: Regression
Target Release: 3.2
Hardware: Unspecified
OS: Unspecified
Fixed In Version: RHEL: ceph-ansible-3.2.0-0.1.rc7.el7cp; Ubuntu: ceph-ansible_3.2.0~rc7-2redhat1
Last Closed: 2019-01-03 19:02:28 UTC
Type: Bug

Description Sidhant Agrawal 2018-11-30 17:21:02 UTC
Description of problem:
In an IPv6 environment on Ubuntu, if all.yml for the ansible playbook is configured with radosgw_interface: <interface>, then after the playbook completes successfully the ceph.conf generated for the rgw node is incorrect:
instead of the IPv6 address of the rgw node, it contains port=[0.0.0.0]:8080.
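
For illustration (the hostname, interface name, and IPv6 address below are invented; only the port=[0.0.0.0]:8080 value comes from this report), the relevant settings and the resulting rgw section look roughly like this:

    # group_vars/all.yml (relevant settings, sketch)
    ip_version: ipv6
    radosgw_interface: eth0

    # /etc/ceph/ceph.conf generated on the rgw node (observed, wrong)
    [client.rgw.rgw-node1]
    host = rgw-node1
    rgw frontends = civetweb port=[0.0.0.0]:8080

    # expected shape of the same line (IPv6 address of eth0 on the rgw node)
    rgw frontends = civetweb port=[2001:db8::10]:8080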

Version-Release number of selected component (if applicable):
ansible  2.6.5-2redhat1
ceph-ansible  3.2.0~rc5-2redhat1
ceph version 12.2.8-39redhat1xenial

How reproducible:
Always

Steps to Reproduce:
1. Install the rgw node using the ansible playbook, with all.yml containing radosgw_interface: <interface>
2. Check the ceph.conf on the rgw node (see the example command below)
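
A quick way to check step 2 (assuming the default cluster name and config path; the section name depends on the rgw hostname):

    # on the rgw node
    grep -A 3 'client.rgw' /etc/ceph/ceph.conf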

Actual results:
The ceph.conf for the rgw node contains [0.0.0.0]:8080 instead of the actual IPv6 address of the rgw node.

Expected results:
The ceph.conf for the rgw node should contain the IPv6 address of the rgw node.

Additional info:
The ceph.conf is generated correctly when the radosgw_address parameter is used in all.yml instead of the radosgw_interface parameter (see the sketch below).
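
A minimal sketch of that workaround in all.yml (the address below is a placeholder; use the rgw node's real IPv6 address):

    # group_vars/all.yml
    ip_version: ipv6
    radosgw_address: 2001:db8::10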

Comment 5 Noah Watkins 2018-11-30 23:56:42 UTC
Potential fix here, but I haven't been able to test it. Waiting on a review / CI run: https://github.com/ceph/ceph-ansible/pull/3404
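
For context only, and not taken from the PR above: ceph-ansible builds the rgw frontend address in its ceph.conf template from the ansible facts of the configured interface, and for IPv6 the lookup has to pick the interface's ipv6 fact instead of falling back to 0.0.0.0. A rough, illustrative Jinja2-style sketch (variable names as used in the report, fact structure as exposed by ansible):

    {# sketch only; not the actual ceph-ansible template or the PR diff #}
    {% set iface = 'ansible_' + (radosgw_interface | replace('-', '_')) %}
    {% if ip_version == 'ipv6' %}
    rgw frontends = civetweb port=[{{ hostvars[inventory_hostname][iface]['ipv6'][0]['address'] }}]:8080
    {% else %}
    rgw frontends = civetweb port={{ hostvars[inventory_hostname][iface]['ipv4']['address'] }}:8080
    {% endif %}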

Comment 8 Noah Watkins 2018-12-03 19:00:09 UTC
@Thomas thanks, I was thinking it just needed to be merged. In the future, should the ticket not go into POST until after the fix is in a tag?

Comment 9 tserlin 2018-12-03 19:23:09 UTC
(In reply to Noah Watkins from comment #8)
> @Thomas thanks, I was thinking it just needed to be merged. In the future,
> should the ticket not go into POST until after the fix is in a tag?

No, as long as there's an upstream fix somewhere, leaving the BZ in POST is fine.

Thomas

Comment 16 errata-xmlrpc 2019-01-03 19:02:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020