Bug 1635370 - TLS everywhere is not compatible with routed spine/leaf
Summary: TLS everywhere is not compatible with routed spine/leaf
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-novajoin
Version: 14.0 (Rocky)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ga
Target Release: 14.0 (Rocky)
Assignee: Ade Lee
QA Contact: Pavan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-10-02 18:27 UTC by Dan Sneddon
Modified: 2019-12-16 16:04 UTC (History)
12 users

Fixed In Version: python-novajoin-1.0.22-1.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-01-11 11:53:35 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 607492 0 'None' MERGED Use config-drive to get metadata if available 2020-07-10 09:57:50 UTC
OpenStack gerrit 608378 0 'None' MERGED Fix - Invalid ipaotp returned if host in cache 2020-07-10 09:57:50 UTC
Red Hat Product Errata RHEA-2019:0045 0 None None None 2019-01-11 11:53:44 UTC

Internal Links: 1740401

Description Dan Sneddon 2018-10-02 18:27:10 UTC
Description of problem:
We are receiving reports that routed spine/leaf deployments with TLS everywhere are failing because the Nova Metadata link-local address (169.254.169.254) is not reachable in routed spine/leaf networks. TLS everywhere appears to be the only component that still uses the Nova Metadata link-local address; deployments without TLS everywhere succeed with no special handling for this address.
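The merged fix linked above (gerrit 607492) reads metadata from the config drive when one is available, removing any dependence on the link-local address. A minimal sketch of that fallback logic, assuming the conventional config-drive layout (openstack/latest/meta_data.json) and a hypothetical mount point; this illustrates the approach, not the actual novajoin code:

```python
import json
import os
import tempfile

# Conventional config-drive metadata path (OpenStack convention); the
# mount point itself is deployment-specific.
META_DATA_RELPATH = "openstack/latest/meta_data.json"

def read_metadata(mount_point):
    """Return instance metadata from a mounted config drive, or None.

    A config drive avoids any dependency on the 169.254.169.254
    link-local metadata address, which cannot cross router boundaries
    in a routed spine/leaf network.
    """
    path = os.path.join(mount_point, META_DATA_RELPATH)
    if not os.path.exists(path):
        return None  # caller may fall back to the metadata URL
    with open(path) as f:
        return json.load(f)

# Self-check against a throwaway directory standing in for a mounted drive.
drive = tempfile.mkdtemp()
os.makedirs(os.path.join(drive, "openstack/latest"))
with open(os.path.join(drive, META_DATA_RELPATH), "w") as f:
    json.dump({"uuid": "demo-0000"}, f)
demo = read_metadata(drive)
```

Returning None when no drive is present lets the caller decide whether to fall back to a network metadata endpoint.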

Version-Release number of selected component (if applicable):
Rocky/OSP 14

How reproducible:
100%

Steps to Reproduce:
1. Configure deployment to use routed spine/leaf
2. Configure deployment to use TLS everywhere
3. Deploy

Actual results:
Deployment hangs because the overcloud nodes cannot obtain the password for the CA

Expected results:
It was expected that the deployment would succeed.

The tripleo-heat-templates for OSP Director were modified for Rocky with the understanding that the Nova Metadata link-local IP (169.254.169.254) was no longer required. Services can instead reach the Nova Metadata API directly at http://<undercloud_IP>:8775.
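For illustration, the direct endpoint described above can be expressed as a simple URL builder. Port 8775 comes from the bug description; the JSON path shown is the conventional Nova metadata path, not something specified in this report:

```python
def metadata_url(undercloud_ip, port=8775):
    """Build a direct Nova Metadata URL, bypassing 169.254.169.254.

    Port 8775 is the Nova Metadata API port from the bug description;
    the JSON path is the conventional OpenStack metadata path.
    """
    return f"http://{undercloud_ip}:{port}/openstack/latest/meta_data.json"

# Example with a documentation address standing in for the undercloud IP.
url = metadata_url("192.0.2.10")
```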

Additional info:
The Nova Metadata link-local address duplicates functionality in AWS EC2 that lets a node obtain configuration information at first boot without knowing anything about the local network environment. In previous versions of OSP, routed spine/leaf deployments required creating a route on each overcloud node to forward traffic destined for 169.254.169.254 to the local router, and the routing infrastructure in turn needed a route pointing 169.254.169.254 at the IP of the undercloud.

Unfortunately, many operators do not have the option of adding this route to the infrastructure network due to policy or separation of duties. In other cases there are several different undercloud hosts, making it difficult or impossible to route the address to each of them as needed. Routing this traffic also violates RFC 3927, which requires that the 169.254.0.0/16 link-local address space never cross a router boundary.

The link-local address space is intended for communication between nodes on the same broadcast domain (VLAN) before any addresses have been assigned. The same address space is reused on every broadcast domain, so it cannot be used for routed communications.
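The link-local property described above can be checked directly with Python's standard ipaddress module; this is a small illustration, not part of the fix:

```python
import ipaddress

# RFC 3927 reserves 169.254.0.0/16 for link-local use; packets to or
# from this range are not supposed to be forwarded by routers.
metadata_addr = ipaddress.ip_address("169.254.169.254")
routed_addr = ipaddress.ip_address("192.0.2.10")

is_ll = metadata_addr.is_link_local    # True: confined to one broadcast domain
routed_ll = routed_addr.is_link_local  # False: ordinary routable address
```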

Comment 22 errata-xmlrpc 2019-01-11 11:53:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:0045

Comment 23 Bob Fournier 2019-11-11 17:35:38 UTC
Should we backport this to OSP-13?

Comment 24 Ade Lee 2019-12-16 16:04:44 UTC
This should definitely be backported to OSP-13, but I think it already has been.

