Bug 1826364

Summary: [RFE] [P1] OVN - SRIOV routing on VLAN Tenant networks
Product: Red Hat OpenStack
Reporter: Daniel Alvarez Sanchez <dalvarez>
Component: openstack-neutron
Assignee: Jakub Libosvar <jlibosva>
Status: CLOSED CURRENTRELEASE
QA Contact: Fiorella Yanac <fyanac>
Severity: high
Priority: high
Version: 17.1 (Wallaby)
CC: aaustin, bmv, broose, chrisw, ekuris, gurpsing, hakhande, jlibosva, kholtz, mariel, njohnston, pgrist, ralonsoh, rsafrono, scohen, skaplons, supadhya
Target Milestone: z4
Target Release: 17.1
Keywords: FutureFeature, TestOnly, Triaged
Hardware: Unspecified
OS: Unspecified
Doc Type: No Doc Update
Last Closed: 2024-12-03 15:27:49 UTC
Type: Bug
Bug Depends On: 1766930, 1888567, 2009222, 2101937, 2102017    

Description Daniel Alvarez Sanchez 2020-04-21 14:09:53 UTC
Right now, SRIOV support with ML2/OVN is limited to:


1) SRIOV ports on provider networks with external DHCP
2) SRIOV ports on provider networks with OVN DHCP and OVN Metadata service
3) SRIOV ports on VLAN tenant networks and E/W Neutron routing


This BZ is to track the implementation of a 4th scenario that covers:

4) SRIOV ports on VLAN tenant networks and N/S Neutron routing with and without FIPs


There are (at least) two ways of achieving this, but first let me explain why it doesn't work right now.


SRIOV ports are mapped to OVN 'external' ports, which are all scheduled onto a single controller (or networker node). Example:


CH1: compute node where SRIOV VM1 (192.168.1.10 - FIP: 10.0.0.10) is running
CH2: chassis where the OVN external port is bound
CH3: chassis where the gateway port is bound
CH4: external chassis on the provider network
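
For reference, here is a rough sketch of what networking-ovn programs into the OVN NB database for a topology like this. All names and priorities below are made up for illustration; only the command shapes come from the ovn-nbctl man page:

  # The SRIOV Neutron port becomes an 'external' Logical_Switch_Port,
  # pinned to CH2 through an HA_Chassis_Group:
  $ ovn-nbctl ha-chassis-group-add sriov-ha-group
  $ ovn-nbctl ha-chassis-group-add-chassis sriov-ha-group CH2 30
  $ ovn-nbctl lsp-set-type sriov-vm1-port external
  $ ovn-nbctl set Logical_Switch_Port sriov-vm1-port \
      ha_chassis_group=$(ovn-nbctl --bare --columns _uuid \
                         find ha_chassis_group name=sriov-ha-group)

  # The FIP is a dnat_and_snat entry on the router, applied at the
  # gateway chassis (CH3):
  $ ovn-nbctl lr-nat-add router1 dnat_and_snat 10.0.0.10 192.168.1.10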

PING from CH4 to VM1:
CH4 -> CH3 -> CH2 -> CH1
When an external node (CH4) pings the FIP of the VM, the traffic goes to CH3, which performs the NAT and routes the traffic towards CH1, where it is delivered to the VM through the SRIOV NIC.


As the ICMP request is delivered to the VM, the VM will try to resolve the router interface IP (e.g. 192.168.1.1) in order to reply, and will send an ARP broadcast request on the VLAN tenant network.

Right now, this ARP packet will be unanswered because:

* There are flows that drop ARP requests coming from the external port (the SRIOV VM) for the router IP on every chassis except the one claiming the external port, so ideally CH2 would reply. However,
* Router ports have the 'reside-on-redirect-chassis' option set, which centralizes the VLAN traffic [0], meaning that only the chassis hosting the gateway port (CH3 in our example) will reply to it (see the snippet below).
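
For illustration, this is roughly the option that ovn_client.py [0] sets on the router port of a VLAN network (the port name here is hypothetical):

  $ ovn-nbctl set Logical_Router_Port lrp-tenant-vlan \
      options:reside-on-redirect-chassis=true

With that option set, the router port's ARP replies and routing are handled only on the chassis hosting the gateway port, which is why CH2 stays silent in the example above.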

In this context we have two possibilities to get this working:

1) Co-locating external and gateway ports. This is non-trivial, as it may require moving things around in a way that causes dataplane disruption.

For example: when the external port is first created, it'll be scheduled on CH1 (no gateways involved yet). However, if the network it belongs to is later attached to a router with a gateway, the external port may need to be moved to achieve co-location with the gateway port. Moving the external port can cause disruption, as DHCP/metadata will be unavailable for some window of time until everything settles.
How long that window lasts is unknown and clearly depends on factors such as how many ports need to be moved.
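
Under the HA_Chassis_Group model, "moving" the external port boils down to re-prioritizing group members, something like the following sketch (group name and priority are hypothetical):

  # Make CH3 (the gateway chassis) the highest-priority member so the
  # external port is re-claimed there, co-located with the gateway port:
  $ ovn-nbctl ha-chassis-group-add-chassis sriov-ha-group CH3 40

The rebind itself is cheap; the cost is the DHCP/metadata outage window mentioned above while the port settles on its new chassis.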

In this scenario, the packet flow in the example above would go this way:

Echo request: CH4 -> CH3 (gateway & external port) -> CH1
Echo reply: CH1 -> CH3 (gateway & external port) -> CH4


2) Supporting distributed traffic on VLAN tenant networks: tracked at [1].
In this case, there is no need to co-locate anything, as routing can happen wherever the external port is bound. This eliminates the burden described in 1).
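
Assuming [1] lands in the shape proposed by the related upstream OVN work (the option names below come from that work and may differ in the final implementation), enabling distributed VLAN routing would look roughly like:

  # On the router's distributed gateway port: bridge N/S VLAN traffic out
  # of the local chassis instead of redirecting it to the gateway chassis:
  $ ovn-nbctl set Logical_Router_Port lrp-router1-external \
      options:redirect-type=bridged

  # On every chassis: a unique per-physnet MAC to use when routing on the
  # VLAN network, so the router port MAC is not duplicated on the fabric:
  $ ovs-vsctl set Open_vSwitch . \
      external-ids:ovn-chassis-mac-mappings="physnet1:aa:bb:cc:dd:ee:01"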


Option 2) seems like the more reasonable and efficient way of achieving N/S routing for SRIOV ports on ML2/OVN. Hence, I'm marking this bug as dependent on [1] and as TestOnly for validation.


[0] https://opendev.org/openstack/networking-ovn/src/tag/7.1.0/networking_ovn/common/ovn_client.py#L1406
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1766930

Comment 2 Daniel Alvarez Sanchez 2020-04-22 12:24:29 UTC
After further testing, this BZ should address routing in general, as E/W traffic suffers from the same limitation: the router IP is not going to be resolved on the chassis that hosts the external port.
So, to sum up: neither E/W nor N/S routing works with the current implementation.

Comment 3 Daniel Alvarez Sanchez 2020-04-22 12:52:20 UTC
(In reply to Daniel Alvarez Sanchez from comment #2)
> After further testing, this BZ should address routing in general as E/W
> traffic would suffer from the same limitation as the router IP is not gonna
> be resolved in the chassis that hosts the external port.
> So to sum-up, E/W and N/S routing are not working with the current
> implementation.

Yet another correction (sigh):

* E/W will *not* work if the router has a gateway, due to the 'reside-on-redirect-chassis' limitation it imposes.
* E/W *will* work if the router does not have a gateway.

Comment 4 Lucas Alvares Gomes 2020-09-09 10:12:08 UTC
*** Bug 1508449 has been marked as a duplicate of this bug. ***

Comment 10 Ihar Hrachyshka 2022-04-19 14:23:03 UTC
*** Bug 1996633 has been marked as a duplicate of this bug. ***