Bug 1846018 - [RFE][OVN] Support of vlan transparency
Summary: [RFE][OVN] Support of vlan transparency
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: OVN
Version: FDP 20.D
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Ihar Hrachyshka
QA Contact: ying xu
Blocks: 1843935 1846019 1898082
 
Reported: 2020-06-10 15:19 UTC by Slawek Kaplonski
Modified: 2020-12-28 06:51 UTC
CC List: 10 users

Fixed In Version: ovn2.13-20.09.0-12.el7 ovn2.13-20.09.0-12.el8
Clones: 1898082
Last Closed: 2020-12-01 15:07:02 UTC




Links:
System ID | Private | Priority | Status | Summary | Last Updated
Red Hat Product Errata RHBA-2020:5308 | 0 | None | None | None | 2020-12-01 15:07:50 UTC

Description Slawek Kaplonski 2020-06-10 15:19:24 UTC
Currently OVN doesn't support any way to configure a port as VLAN transparent. This is available in OpenStack Neutron [1], but it can't be used with the OVN driver for now.

I did some testing locally, and such VLAN-tagged traffic was dropped by the following OpenFlow rule:

    cookie=0x0, duration=17.580s, table=8, n_packets=6, n_bytes=444, idle_age=2, priority=100,metadata=0x2,vlan_tci=0x1000/0x1000 actions=drop

So we need some way to tell northd not to match on vlan_tci at all when the Neutron network has vlan_transparency set to True.

There is also an upstream mail thread started for this; see https://mail.openvswitch.org/pipermail/ovs-discuss/2020-June/050174.html

[1] https://specs.openstack.org/openstack/neutron-specs/specs/kilo/nfv-vlan-trunks.html
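
The flow above can be seen with a plain dump of the integration bridge (assuming the default br-int):

    # ovs-ofctl dump-flows br-int table=8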

Comment 1 Daniel Alvarez Sanchez 2020-06-11 15:34:47 UTC
I'm not a core OVN expert and there are surely better ways of doing this, but I thought of having the CMS set some option on the Logical Switch and skipping this flow [0] depending on that knob.
This has the disadvantage of not supporting 'VLAN transparency' for trunks/subports, as those come tagged and we cannot ignore the tag.

For that, we could mimic the 'vlan-limit' approach in OVS [1] for OVN logical switches and gain QinQ support that we could leverage to achieve VLAN transparency.

Just some ideas for when this gets implemented.

[0] https://github.com/ovn-org/ovn/blob/74d90c2223d0a8c123823fb849b4c2de58c296e4/controller/physical.c#L563
[1] https://developers.redhat.com/blog/2017/06/06/open-vswitch-overview-of-802-1ad-qinq-support/
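
For reference, the OVS-level knobs described in [1] look roughly like this (a sketch; the port name is a placeholder):

    ovs-vsctl set Open_vSwitch . other_config:vlan-limit=2
    ovs-vsctl set port vm1 vlan_mode=dot1q-tunnel tag=100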

Comment 5 Ihar Hrachyshka 2020-11-11 14:27:33 UTC
The OVN patch: https://patchwork.ozlabs.org/project/ovn/patch/20201110023449.194642-1-ihrachys@redhat.com/ (already in upstream master).
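
The patch adds a per-Logical_Switch knob; with a build that includes it, VLAN transparency is enabled with:

    ovn-nbctl set logical_switch ls other_config:vlan-passthru=true

(see the verification steps in comment 8 below)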

Comment 8 ying xu 2020-11-19 02:00:00 UTC
Verified on version:
# rpm -qa|grep ovn
ovn2.13-central-20.09.0-12.el8fdp.x86_64
ovn2.13-host-20.09.0-12.el8fdp.x86_64
ovn2.13-20.09.0-12.el8fdp.x86_64


server:
                ovn-nbctl ls-add ls
                ovn-nbctl lsp-add ls vm1
                ovn-nbctl lsp-set-addresses vm1 "00:00:00:00:00:01"
                ovn-nbctl lsp-add ls vm2
                ovn-nbctl lsp-set-addresses vm2 "00:00:00:00:00:02"
                ovn-nbctl lsp-add ls vm3
                ovn-nbctl lsp-set-addresses vm3 "00:00:00:00:00:03"
                ovn-nbctl lsp-add ls vm4
                ovn-nbctl lsp-set-addresses vm4 "00:00:00:00:00:04"
                ovn-nbctl set logical_switch ls other_config:vlan-passthru=true


                ip netns add vm1
                ovs-vsctl add-port br-int vm1 -- set interface vm1 type=internal
                ip link set vm1 netns vm1
                ip netns exec vm1 ip link set vm1 address 00:00:00:00:00:01
                ip netns exec vm1 ip link set vm1 up
                ip netns exec vm1 ip link set lo up
                ovs-vsctl set Interface vm1 external_ids:iface-id=vm1
                ip netns exec vm1 ip link add link vm1 name vm1.5 type vlan id 5
                ip netns exec vm1 ip link set vm1.5 up
                ip netns exec vm1 ip addr add 42.42.42.15/24 dev vm1.5
                ip netns exec vm1 ip addr add 2000::15/64 dev vm1.5

                ip netns add vm3
                ovs-vsctl add-port br-int vm3 -- set interface vm3 type=internal
                ip link set vm3 netns vm3
                ip netns exec vm3 ip link set vm3 address 00:00:00:00:00:03
                ip netns exec vm3 ip link set vm3 up
                ip netns exec vm3 ip link set lo up
                ovs-vsctl set Interface vm3 external_ids:iface-id=vm3
                ip netns exec vm3 ip link add link vm3 name vm3.5 type vlan id 10
                ip netns exec vm3 ip link set vm3.5 up
                ip netns exec vm3 ip addr add 42.42.42.35/24 dev vm3.5

client:
                ip netns add vm2
                ovs-vsctl add-port br-int vm2 -- set interface vm2 type=internal
                ip link set vm2 netns vm2
                ip netns exec vm2 ip link set vm2 address 00:00:00:00:00:02
                ip netns exec vm2 ip link set vm2 up
                ip netns exec vm2 ip link set lo up
                ovs-vsctl set Interface vm2 external_ids:iface-id=vm2
                ip netns exec vm2 ip link add link vm2 name vm2.5 type vlan id 5
                ip netns exec vm2 ip link set vm2.5 up
                ip netns exec vm2 ip addr add 42.42.42.25/24 dev vm2.5
                ip netns exec vm2 ip addr add 2000::25/64 dev vm2.5

                ip netns add vm4
                ovs-vsctl add-port br-int vm4 -- set interface vm4 type=internal
                ip link set vm4 netns vm4
                ip netns exec vm4 ip link set vm4 address 00:00:00:00:00:04
                ip netns exec vm4 ip link set vm4 up
                ip netns exec vm4 ip link set lo up
                ovs-vsctl set Interface vm4 external_ids:iface-id=vm4
                ip netns exec vm4 ip link add link vm4 name vm4.5 type vlan id 10
                ip netns exec vm4 ip link set vm4.5 up
                ip netns exec vm4 ip addr add 42.42.42.45/24 dev vm4.5

vm1: # ip netns exec vm1 ping6 2000::25
PING 2000::25(2000::25) 56 data bytes
64 bytes from 2000::25: icmp_seq=1 ttl=64 time=2.66 ms
64 bytes from 2000::25: icmp_seq=2 ttl=64 time=0.352 ms
^C
--- 2000::25 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 3ms
rtt min/avg/max/mdev = 0.352/1.506/2.660/1.154 ms

vm4: # ip netns exec vm4 ping 42.42.42.35
PING 42.42.42.35 (42.42.42.35) 56(84) bytes of data.
64 bytes from 42.42.42.35: icmp_seq=1 ttl=64 time=2.85 ms
64 bytes from 42.42.42.35: icmp_seq=2 ttl=64 time=0.325 ms
64 bytes from 42.42.42.35: icmp_seq=3 ttl=64 time=0.333 ms
64 bytes from 42.42.42.35: icmp_seq=4 ttl=64 time=0.337 ms
^C
--- 42.42.42.35 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 60ms
rtt min/avg/max/mdev = 0.325/0.961/2.851/1.091 ms


if set "ovn-nbctl set logical_switch ls other_config:vlan-passthru=false",
ping fail.
# ip netns exec vm1 ping 42.42.42.25
PING 42.42.42.25 (42.42.42.25) 56(84) bytes of data.
^C
--- 42.42.42.25 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 20ms
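
In the failing case, table 8 should show the drop flow from the original description; a quick check (again assuming br-int):

    # ovs-ofctl dump-flows br-int table=8 | grep 'vlan_tci=0x1000/0x1000'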


set "ovn-nbctl set logical_switch ls other_config:vlan-passthru=true", and send a packet with 2 tags(use scapy)
#!/usr/bin/python3

from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

# Craft a frame with two 802.1Q tags: outer VLAN 5, inner VLAN 10.
a = Ether(dst="00:00:00:00:00:02") / Dot1Q(vlan=5) / Dot1Q(vlan=10) / IP(dst="225.0.0.1") / ICMP()
a.show()
# Send it from the vm1.5 subinterface inside the vm1 namespace.
sendp(a, iface="vm1.5")

# ip netns exec vm1 /usr/bin/python3 ./vlan.py 
###[ Ethernet ]### 
  dst       = 00:00:00:00:00:02
  src       = 00:00:00:00:00:00
  type      = n_802_1Q
###[ 802.1Q ]### 
     prio      = 0
     id        = 0
     vlan      = 5
     type      = n_802_1Q
###[ 802.1Q ]### 
        prio      = 0
        id        = 0
        vlan      = 10
        type      = IPv4
###[ IP ]### 
           version   = 4
           ihl       = None
           tos       = 0x0
           len       = None
           id        = 1
           flags     = 
           frag      = 0
           ttl       = 64
           proto     = icmp
           chksum    = None
           src       = 0.0.0.0
           dst       = 225.0.0.1
           \options   \
###[ ICMP ]### 
              type      = echo-request
              code      = 0
              chksum    = None
              id        = 0x0
              seq       = 0x0

.
Sent 1 packets.


capture on vm2:
# ip netns exec vm2 tcpdump -i vm2.5 -nnle &
[1] 2069875
[root@dell-per740-54 dhcp]# dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vm2.5, link-type EN10MB (Ethernet), capture size 262144 bytes

[root@dell-per740-54 dhcp]# 20:58:21.324194 00:00:00:00:00:00 > 00:00:00:00:00:02, ethertype 802.1Q (0x8100), length 50: vlan 5, p 0, ethertype 802.1Q, vlan 10, p 0, ethertype IPv4, 0.0.0.0 > 225.0.0.1: ICMP echo request, id 0, seq 0, length 8
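
For completeness, the option in effect can be confirmed from the northbound DB (a quick sketch):

    # ovn-nbctl --bare --columns=other_config list Logical_Switch ls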

Comment 10 errata-xmlrpc 2020-12-01 15:07:02 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (ovn2.13 bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:5308

