Bug 1916092

Summary: [OVN][RFE] Add support for VXLAN tunnels for interchassis communication
Product: Red Hat OpenStack
Reporter: Kamil Sambor <ksambor>
Component: python-networking-ovn
Assignee: OSP Team <rhos-maint>
Status: CLOSED CURRENTRELEASE
QA Contact: Eduardo Olivares <eolivare>
Severity: high
Priority: high
Version: 16.2 (Train)
CC: amuller, apevec, avishnoi, bcafarel, chrisw, ctrautma, ekuris, gregraka, ihrachys, jishi, lhh, majopela, ralongi, scohen, skaplons
Target Milestone: z3
Keywords: FutureFeature, Reopened, TestOnly, Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Fixed In Version: python-networking-ovn-7.4.2-2.20210524095236.47ebdf7.el8ost
Doc Type: Enhancement
Doc Text:
Starting in Red Hat OpenStack Platform (RHOSP) 16.2.3, the Modular Layer 2 mechanism driver with Open Virtual Networking (OVN) supports the VXLAN tunneling protocol. You can now migrate from ML2/OVS to ML2/OVN and continue using VXLAN tunneling. For more information, see the link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.2/html-single/migrating_the_networking_service_to_the_ml2ovn_mechanism_driver/index[Migrating the Networking Service to the ML2/OVN Mechanism Driver] guide.
Story Points: ---
Clone Of: 1881697
Last Closed: 2022-05-13 10:37:48 UTC
Bug Depends On: 1841526, 1881697, 2128735    
Bug Blocks: 1881704, 1916091    

Description Kamil Sambor 2021-01-14 07:55:10 UTC
+++ This bug was initially created as a clone of Bug #1881697 +++

+++ This bug was initially created as a clone of Bug #1841526 +++

Description of problem: VXLAN is currently not supported by OVN for regular inter-chassis communication. VXLAN support exists only for external switches that comply with the "vtep" OVSDB schema (a.k.a. ramp switches). This RFE is to extend core OVN to support VXLAN for regular in-cluster traffic.

Initial upstream discussion: https://www.mail-archive.com/ovs-discuss@openvswitch.org/msg06771.html
Demo implementation: https://patchwork.ozlabs.org/project/openvswitch/patch/20200320050711.247351-1-ihrachys@redhat.com/

Related links:
https://blog.russellbryant.net/2017/05/30/ovn-geneve-vs-vxlan-does-it-matter/

Core OVN work is tracked in bug 1841526. This bug is to adjust Neutron to support the new network type for OVN driver. There are two elements to it:

1) allow VXLAN type (as simple as https://review.opendev.org/734889);
2) read max_tunid from the NB_Global object and use it to adjust the vni_ranges used by the VXLAN type driver.
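For step 1, the operator-visible outcome is that vxlan becomes a valid tenant network type for the OVN driver. A minimal sketch of the corresponding ml2_conf.ini fragment follows; the exact option values (driver list, VNI range) are illustrative, not taken from this bug:

```ini
[ml2]
# geneve remains the default for ML2/OVN; vxlan is added as an allowed type
type_drivers = geneve,vxlan,vlan,flat
tenant_network_types = vxlan

[ml2_type_vxlan]
# illustrative range; with OVN's inter-chassis VXLAN support the usable
# VNI space is bounded by the max_tunid advertised in NB_Global
vni_ranges = 1:4095
```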

Since max_tunid is written only by recent OVN versions, its presence can be used as a signal that VXLAN support is available.
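The logic described above can be sketched in Python. This is a hypothetical illustration, not the actual networking-ovn code: the function name clamp_vni_ranges is invented, and max_tunid stands for the value read from NB_Global (None when the option is absent, i.e. an older OVN without inter-chassis VXLAN support):

```python
def clamp_vni_ranges(vni_ranges, max_tunid):
    """Clamp configured VXLAN VNI ranges to what OVN can allocate.

    vni_ranges is a list of (low, high) tuples from the type driver
    config; max_tunid is the limit read from NB_Global, or None when
    the option is absent (no VXLAN support in this OVN build).
    """
    if max_tunid is None:
        # Absence of max_tunid signals an OVN without VXLAN support.
        raise RuntimeError(
            "OVN does not advertise max_tunid; "
            "VXLAN tenant networks are unsupported")
    clamped = []
    for low, high in vni_ranges:
        if low > max_tunid:
            continue  # entire range is out of reach, drop it
        clamped.append((low, min(high, max_tunid)))
    return clamped
```

The presence check doubles as the feature flag mentioned above: the driver can refuse (or warn about) VXLAN configuration when max_tunid is missing, and otherwise silently narrow oversized ranges.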

Comment 1 Kamil Sambor 2021-02-23 10:46:25 UTC

*** This bug has been marked as a duplicate of bug 1881697 ***

Comment 2 OSP Team 2022-04-25 10:36:20 UTC
According to our records, this should be resolved by python-networking-ovn-7.4.2-2.20220113214853.el8ost.  This build is available now.