Bug 1723365 - OVS not handling multicast traffic properly
Summary: OVS not handling multicast traffic properly
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openvswitch
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Open vSwitch development team
QA Contact: Roee Agiman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-24 11:28 UTC by Miguel Angel Nieto
Modified: 2019-06-25 13:21 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-25 13:21:58 UTC
Target Upstream Version:
Embargoed:



Description Miguel Angel Nieto 2019-06-24 11:28:08 UTC
Description of problem:
I am trying to configure a setup like the following one, in which an IGMP querier is configured on the physical switch and IGMP snooping on the OVS br-int:
https://gist.github.com/djoreilly/a22ca4f38396e8867215fca0ad67fa28

The scenario includes:
* VLAN as the tenant network
* 3 VMs on the same compute node, using iperf to send/receive multicast traffic
* IGMP and UDP traffic allowed in the security group

IGMP queries arrive at br-link0 and hit the NORMAL action. Since IGMP snooping is not configured on br-link0, the queries should be flooded to the other ports and reach br-int, but they are not arriving at br-int.
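
The snooping state of each bridge can be confirmed like this (a minimal check; mcast_snooping_enable is the standard Bridge column):

ovs-vsctl --columns=name,mcast_snooping_enable list Bridge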

Version-Release number of selected component (if applicable):
2019-06-20.1 (overcloud)

How reproducible:
Deploy the ospd-13-vxlan-dpdk-sriov-ctlplane-dataplane-bonding-hybrid scenario with the following modifications (see the sketch after this list):
* disable SR-IOV
* configure OVS bonding with nic9 and nic10, so that the compute node is connected to the performance switch, where we can modify the configuration
* configure an IGMP querier on the physical switch for the tenant network
* configure IGMP snooping on OVS br-int
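
A minimal sketch of the bond and snooping part of that configuration, assuming plain ovs-vsctl rather than the TripleO templates actually used (the PCI addresses are taken from the ovs-vsctl show output below):

ovs-vsctl add-bond br-link0 dpdkbond0 dpdk0 dpdk1 \
    -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:05:00.2 \
    -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:05:00.3
ovs-vsctl set Bridge br-int mcast_snooping_enable=true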


Actual results:
With ovs-tcpdump I can see that the IGMP queries are arriving at the port on br-link0:

[root@overcloud-computeovsdpdksriov-0 heat-admin]# ovs-tcpdump -i dpdkbond0 -nne | grep "igmp query v3"
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on midpdkbond0, link-type EN10MB (Ethernet), capture size 262144 bytes
10:33:41.795397 d0:07:ca:34:e9:13 > 01:00:5e:00:00:01, ethertype 802.1Q (0x8100), length 64: vlan 655, p 0, ethertype IPv4, 10.20.155.5 > 224.0.0.1: igmp query v3 [max resp time 1.0s]
10:33:43.805355 d0:07:ca:34:e9:13 > 01:00:5e:00:00:01, ethertype 802.1Q (0x8100), length 64: vlan 655, p 0, ethertype IPv4, 10.20.155.5 > 224.0.0.1: igmp query v3 [max resp time 1.0s]
10:33:45.806349 d0:07:ca:34:e9:13 > 01:00:5e:00:00:01, ethertype 802.1Q (0x8100), length 64: vlan 655, p 0, ethertype IPv4, 10.20.155.5 > 224.0.0.1: igmp query v3 [max resp time 1.0s]
10:33:47.807323 d0:07:ca:34:e9:13 > 01:00:5e:00:00:01, ethertype 802.1Q (0x8100), length 64: vlan 655, p 0, ethertype IPv4, 10.20.155.5 > 224.0.0.1: igmp query v3 [max resp time 1.0s]
10:33:49.808312 d0:07:ca:34:e9:13 > 01:00:5e:00:00:01, ethertype 802.1Q (0x8100), length 64: vlan 655, p 0, ethertype IPv4, 10.20.155.5 > 224.0.0.1: igmp query v3 [max resp time 1.0s]

Looking at the OpenFlow rules, we see that the packets go to NORMAL.
Every 10.0s: ./of_traffic_rules br-link0; ./of_traffic_rules br-int                                                                                                                        Mon Jun 24 10:46:34 2019

********************* br-link0
n_packets=1280   n_bytes=87932    table=0    idle_age=0  priority=0 actions=NORMAL
********************* br-int
n_packets=52   n_bytes=4452    table=0    idle_age=0  priority=2 in_port=2 actions=drop

Packets coming from br-link0 should match the following rule. Some packets have matched it, since the packet counter is not 0, but most of the IGMP queries are lost (one query is sent every 2 seconds).
 cookie=0x786f879e6678976f, duration=8543.945s, table=0, n_packets=1438, n_bytes=149972, idle_age=2, priority=3,in_port=2,dl_vlan=655 actions=mod_vlan_vid:1,resubmit(,60)
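
The of_traffic_rules helper used above is not included here; assuming it only filters ovs-ofctl output, a rough equivalent is:

watch -n 10 'ovs-ofctl dump-flows br-link0; ovs-ofctl dump-flows br-int'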


In the VMs, I am using these commands to send and receive multicast (the -B option makes the iperf server join the multicast group, which is what triggers the IGMP membership reports):
/tmp/iperf -c 226.1.1.1 -u -t 1000
/tmp/iperf -s -u -B 226.1.1.1

As with the IGMP queries, I can see the IGMP replies (to 224.0.0.22) from the VMs, but they are not arriving at the physical switch.
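
The replies can be watched on the uplink with the same ovs-tcpdump approach, e.g. (the trailing filter expression is passed through to tcpdump):

ovs-tcpdump -i dpdkbond0 -nne igmp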

OVS does not detect any messages:
ovs-appctl mdb/show br-int
 port  VLAN  GROUP                Age

MAC tables:
[root@overcloud-computeovsdpdksriov-0 heat-admin]# ovs-appctl fdb/show br-int
 port  VLAN  MAC                Age
   12     1  fa:16:3e:bd:3b:bd  207
   11     1  fa:16:3e:c6:c3:12  195
   13     1  fa:16:3e:17:90:9e  165
[root@overcloud-computeovsdpdksriov-0 heat-admin]# ovs-appctl fdb/show br-link0 | grep 655
    3   655  fa:16:3e:bd:3b:bd  212
    3   655  fa:16:3e:c6:c3:12  200
    3   655  fa:16:3e:17:90:9e  170
    1   655  fa:16:3e:40:54:8b  170




This is my OVS config on the compute node:
74949488-1100-457f-9be4-62000d8d5cd1
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "vhu37e5f0b9-6a"
            tag: 1
            Interface "vhu37e5f0b9-6a"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/lib/vhost_sockets/vhu37e5f0b9-6a"}
        Port "vhub95671f5-42"
            tag: 1
            Interface "vhub95671f5-42"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/lib/vhost_sockets/vhub95671f5-42"}
        Port "int-br-link0"
            Interface "int-br-link0"
                type: patch
                options: {peer="phy-br-link0"}
        Port "vhu6d2191a0-3c"
            tag: 1
            Interface "vhu6d2191a0-3c"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/var/lib/vhost_sockets/vhu6d2191a0-3c"}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a0a716f"
            Interface "vxlan-0a0a716f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.10.113.104", out_key=flow, remote_ip="10.10.113.111"}
    Bridge "br-link0"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "br-link0"
            tag: 513
            Interface "br-link0"
                type: internal
        Port "phy-br-link0"
            Interface "phy-br-link0"
                type: patch
                options: {peer="int-br-link0"}
        Port "dpdkbond0"
            Interface "dpdk1"
                type: dpdk
                options: {dpdk-devargs="0000:05:00.3", n_rxq="2"}
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:05:00.2", n_rxq="2"}
    ovs_version: "2.9.0"

Comment 1 Miguel Angel Nieto 2019-06-24 15:07:59 UTC
I have changed the OVS configuration to remove the bonding, and it works properly, so the problem is the DPDK bond. With this configuration it is working:
    Bridge "br-link0"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:05:00.2"}
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
                options: {dpdk-devargs="0000:05:00.3"}
        Port "phy-br-link0"
            Interface "phy-br-link0"
                type: patch
                options: {peer="int-br-link0"}
        Port "br-link0"
            tag: 513
            Interface "br-link0"
                type: internal


ovs-appctl mdb/show br-int
 port  VLAN  GROUP                Age
    6     1  226.1.1.1           1
    1     1  querier               0

On the physical switch:
root> show igmp snooping membership    
Instance: default-switch

Vlan: vlan655

Learning-Domain: default
Interface: xe-0/0/16.0, Groups: 1
    Group: 226.1.1.1
        Group mode: Exclude
        Source: 0.0.0.0
        Last reported by: 10.20.155.107
        Group timeout:       7 Type: Dynamic
Interface: xe-0/0/17.0, Groups: 0

Comment 2 Miguel Angel Nieto 2019-06-25 13:21:58 UTC
There is no problem with the bond. The IGMP queries were being dropped because they were arriving on the inactive port of the bond, which is the expected behaviour. If I change the active port with the following command, it works properly:
ovs-appctl bond/set-active-slave dpdkbond0 dpdk0
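
To check which port is currently active:

ovs-appctl bond/show dpdkbond0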

