Description of problem:
When users have only a single NIC available on their nodes, they can use this NIC for both the default network and a secondary L2 network. To allow that, we configure a Linux bridge on top of the NIC and move the NIC's original IP onto the bridge. That way, the network can still be used by the default SDN while also serving secondary L2 connections.
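As a rough sketch, that single-NIC bridging setup looks something like the following. The interface name, bridge name, and addresses here are placeholders, not taken from the source:

```shell
# Placeholder names/addresses: eth0 = the node's only NIC, br0 = the new Linux bridge.
ip link add name br0 type bridge
ip link set br0 up
ip link set eth0 master br0
# Move the NIC's IP configuration onto the bridge so the default SDN keeps working.
ip addr flush dev eth0
ip addr add 192.0.2.10/24 dev br0
ip route add default via 192.0.2.1
```

In practice a declarative tool such as nmstate/NNCP would express the same topology, but the plain iproute2 commands show what actually changes on the host.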
With a recent change of the gateway mode in OVN Kubernetes, the default NIC of a host is now attached to an OVS bridge, "br-ex". Due to this, users are unable to use their primary network for bridging and providing L2 connectivity for their VMs/Pods. This topology was possible in 4.5, so this bug will break upgrades and existing setups for some of our users.
Find more info and suggested solutions in the bug that was opened on OpenShift Virtualization to track this regression: https://bugzilla.redhat.com/show_bug.cgi?id=1885605. Opening this new BZ to track the resolution on the OVN Kubernetes side.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Deploy cluster with OVN Kubernetes
2. Try to configure Linux bridge on top of the default NIC
Actual results:
This fails because the NIC is now attached to OVN Kubernetes' br-ex.
Expected results:
Users should be able to reconfigure the default interface, e.g. attach it to a Linux bridge, to provide L2 connectivity to the default network for Pods and VMs.
The shared gateway fix is merged into 4.7. Testing the local gateway fix that will be required for the 4.6 backport:
Note, the solution to this bug is to allow other applications on the host to attach a port to br-ex (the shared bridge), so that traffic flows over it like a regular L2 bridge. OVN-K8S will still take the NIC and move it onto the br-ex bridge at install time. Then CNV or any other application can simply attach to br-ex with an OVS patch port (if connecting to another OVS bridge), or create a veth pair to attach a Linux bridge, or something else.
@rbrattai Can you help look into this?
Verified on 4.7.0-0.nightly-2020-12-03-083300 on OpenStack
Created veth pair, attached to Linux bridge, tcpdumped and saw MDNS traffic from all the other nodes in the cluster.
# Create a veth pair, enslave one end to a new Linux bridge,
# and attach the other end to OVS br-ex.
ip link add v1 type veth peer name v2
ip link set v1 up
ip link set v2 up
ip link add name br-0 type bridge
ip link set br-0 up
ip link set v2 master br-0
ovs-vsctl add-port br-ex v1
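The mDNS check described in the verification above can be reproduced with a capture along these lines (assuming tcpdump is installed; 5353/udp is the mDNS port):

```shell
# Capture mDNS traffic arriving on the Linux bridge; with working L2
# connectivity, packets from the other cluster nodes should appear.
tcpdump -ni br-0 udp port 5353
```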
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.