Description of problem:
We need to create a TripleO service that deploys a container running the Free Range Routing (FRRouting, or FRR) software in order to enable BGP routing for OpenStack overcloud nodes.

Version-Release number of selected component (if applicable): 17

Expected results:
This BZ will track the implementation of a new TripleO service that creates a container for running the FRR daemons. Specifically, the daemons we need are: watchfrr, the management daemon that starts and monitors the other daemons; vtysh, the shell that presents the configuration UI; zebra, the daemon that reads and writes routes to and from the kernel; bgpd, the BGP daemon; and optionally bfdd, the BFD (Bidirectional Forwarding Detection) daemon that detects when routes are no longer viable. There will be some basic configuration parameters supplied by the installer, and the configuration will be placed on the server using Ansible. A route filter will be configured so that any host addresses on the node with a /32 (IPv4) or /128 (IPv6) subnet mask are redistributed into BGP and advertised to the BGP peers (routers) running on the network infrastructure.

Additional info:
There is an upstream spec that describes the requirement for the TripleO service here: https://review.opendev.org/758249
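As a rough illustration only, the host-route redistribution filter described above might look something like the following frr.conf fragment. The AS number, peer address, and prefix-list/route-map names here are placeholder assumptions, not values from this BZ or the upstream spec:

```
! Hypothetical frr.conf sketch: redistribute only local host routes
! (/32 IPv4, /128 IPv6) into BGP. AS 64999 and peer 192.0.2.1 are
! placeholder values for illustration.
router bgp 64999
 neighbor 192.0.2.1 remote-as 64999
 !
 address-family ipv4 unicast
  redistribute connected route-map rm-v4-host-routes
 exit-address-family
 !
 address-family ipv6 unicast
  redistribute connected route-map rm-v6-host-routes
 exit-address-family
!
! Match any prefix whose mask length is exactly 32 (IPv4) or 128 (IPv6)
ip prefix-list pl-v4-host-routes permit 0.0.0.0/0 ge 32
ipv6 prefix-list pl-v6-host-routes permit ::/0 ge 128
!
route-map rm-v4-host-routes permit 10
 match ip address prefix-list pl-v4-host-routes
!
route-map rm-v6-host-routes permit 10
 match ipv6 address prefix-list pl-v6-host-routes
```

With a filter like this, only the node's own host addresses are advertised to the BGP peers; connected subnet routes with shorter masks are not redistributed.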
There are several use cases for BGP, but all of them depend on having a TripleO service that installs and configures the FRR container. We may enable these use cases separately, and it is possible that we will not implement all of them. The use cases include:

* High availability for virtual IP addresses managed by Pacemaker, using BGP instead of ARP to route traffic to API endpoints.
* Advertising Neutron floating IP addresses using BGP so that floating IP addresses hosted by Neutron routers can be portable across routed networks. Either network controllers or compute nodes would advertise the floating IP, depending on whether DVR is used in the Neutron configuration.
* Advertising Neutron provider network IP addresses for VM instances running on compute nodes using BGP. In this model it is not necessary to trunk a VLAN directly to the compute node hosting the instance; instead, BGP is used to route traffic destined for the VM instance IP to the compute node.
* BGP VPN, which allows traffic to be routed to a particular datacenter gateway node for a given network. In this model either VXLAN VNIs or MPLS labels can be advertised via BGP routes.
* Advertising reachability to tenant networks via network gateway nodes using BGP. These nodes may be compute nodes hosting Neutron routers if DVR is used, or network controllers where Neutron routers are hosted.
* Advertising reachability to tenant networks running in per-tenant IP namespaces and separate virtual routing and forwarding (VRF) instances.

Implementation details for individual use cases will be tracked in separate BZs.