Bug 1568133 - [Netvirt] DPDK VxLAN - Multicast traffic test has failed
Summary: [Netvirt] DPDK VxLAN - Multicast traffic test has failed
Keywords:
Status: CLOSED DUPLICATE of bug 1550663
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: opendaylight
Version: 13.0 (Queens)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: beta
Target Release: 13.0 (Queens)
Assignee: Victor Pickard
QA Contact: Itzik Brown
URL:
Whiteboard: odl_netvirt
Depends On:
Blocks:
 
Reported: 2018-04-16 19:33 UTC by Ziv Greenberg
Modified: 2018-10-24 12:37 UTC
CC List: 15 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
N/A
Last Closed: 2018-04-26 07:39:15 UTC
Target Upstream Version:
Embargoed:
zgreenbe: needinfo-



Description Ziv Greenberg 2018-04-16 19:33:49 UTC
Description of problem:
As part of our NFV regression testing with ODL, I have encountered an issue with multicast testing.

My setup is configured as follows:
Two instances, one acting as a traffic generator (traffic-runner) and the other as a listener (listener1). Both instances use security groups:

(overcloud) [stack@undercloud-0 ~]$ openstack security group rule list tempest-TestDpdkScenarios-459677 --long
+--------------------------------------+-------------+-----------+------------+-----------+-----------+-----------------------+
| ID                                   | IP Protocol | IP Range  | Port Range | Direction | Ethertype | Remote Security Group |
+--------------------------------------+-------------+-----------+------------+-----------+-----------+-----------------------+
| 4013bd79-5851-4093-9abd-972d9aabe3d2 | icmp        | 0.0.0.0/0 |            | ingress   | IPv4      | None                  |
| 69199122-2e01-4543-8f64-e3b23dfec524 | udp         | 0.0.0.0/0 | 1:65535    | ingress   | IPv4      | None                  |
| 73824165-dcfb-4dd4-b0e0-225cc2fbdda8 | udp         | 0.0.0.0/0 | 1:65535    | egress    | IPv4      | None                  |
| 799955db-8da4-4af1-99d8-a8e8d99a6b61 | None        | None      |            | egress    | IPv4      | None                  |
| 954409c9-3b45-4510-84dd-ec3a8766dcc2 | None        | None      |            | egress    | IPv6      | None                  |
| fce7ed19-aabb-49ad-8786-ea853f92eecb | tcp         | 0.0.0.0/0 | 22:22      | ingress   | IPv4      | None                  |
+--------------------------------------+-------------+-----------+------------+-----------+-----------+-----------------------+
(overcloud) [stack@undercloud-0 ~]$ openstack security group rule list tempest-TestDpdkScenarios-435140210 --long
+--------------------------------------+-------------+-----------+------------+-----------+-----------+-----------------------+
| ID                                   | IP Protocol | IP Range  | Port Range | Direction | Ethertype | Remote Security Group |
+--------------------------------------+-------------+-----------+------------+-----------+-----------+-----------------------+
| 2c888467-776a-4d79-9bbd-6c1e0aaa931d | udp         | 0.0.0.0/0 | 1:65535    | egress    | IPv4      | None                  |
| 40fd2186-3634-43ec-9d06-2b816888ae59 | None        | None      |            | egress    | IPv6      | None                  |
| 485d3dc7-0fd9-4cc1-b485-608a1571a4b2 | tcp         | 0.0.0.0/0 | 22:22      | ingress   | IPv4      | None                  |
| 802637e9-e2dd-4cc1-87c7-cac3ab32d783 | None        | None      |            | egress    | IPv4      | None                  |
| ce5a2a64-2fe4-4fe9-b9cb-d9a509b19ac0 | udp         | 0.0.0.0/0 | 1:65535    | ingress   | IPv4      | None                  |
| f2064134-1b67-4010-a64f-aa28b461046b | icmp        | 0.0.0.0/0 |            | ingress   | IPv4      | None                  |
+--------------------------------------+-------------+-----------+------------+-----------+-----------+-----------------------+

To generate the multicast traffic, I used the iperf tool:

[root@traffic-runner ~]# iperf -c 226.94.1.1 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 226.94.1.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
Setting multicast TTL to 32
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 10.35.185.36 port 43341 connected with 226.94.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   131 KBytes  1.07 Mbits/sec
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec
[  3] Sent 269 datagrams


[root@listener1 ~]# iperf -s -u -B 226.94.1.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 226.94.1.1
Joining multicast group  226.94.1.1
Receiving 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------

As shown above, no multicast traffic was received by the listener1 instance.
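
A hedged guest-side sanity check (eth0 is a placeholder for the instance's interface name): while iperf -s is running, listener1's group membership should be visible with

[root@listener1 ~]# ip maddr show dev eth0

If no "inet 226.94.1.1" entry is listed there, the problem is inside the guest rather than in OVS/ODL.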

I have also observed OVS flow drops on the compute node:
ovs-ofctl dump-flows br-int --protocols=OpenFlow13 | grep drop

 cookie=0x8000004, duration=382021.722s, table=22, n_packets=0, n_bytes=0, priority=42,ip,metadata=0x30d42/0xfffffe,nw_dst=10.35.185.31 actions=drop
 cookie=0x8030000, duration=382918.197s, table=51, n_packets=0, n_bytes=0, priority=15,dl_dst=01:80:c2:00:00:00/ff:ff:ff:ff:ff:f0 actions=drop
 cookie=0x8800004, duration=382310.234s, table=55, n_packets=0, n_bytes=0, priority=10,tun_id=0x4,metadata=0x40000000000/0xfffff0000000000 actions=drop
 cookie=0x88000d6, duration=5686.579s, table=55, n_packets=312, n_bytes=411180, priority=10,tun_id=0xd6,metadata=0xd60000000000/0xfffff0000000000 actions=drop
 cookie=0x88000d7, duration=5683.896s, table=55, n_packets=11, n_bytes=1578, priority=10,tun_id=0xd7,metadata=0xd70000000000/0xfffff0000000000 actions=drop
 cookie=0x88000d8, duration=5632.232s, table=55, n_packets=70, n_bytes=4140, priority=10,tun_id=0xd8,metadata=0xd80000000000/0xfffff0000000000 actions=drop
 cookie=0x88000d9, duration=5629.594s, table=55, n_packets=10, n_bytes=1500, priority=10,tun_id=0xd9,metadata=0xd90000000000/0xfffff0000000000 actions=drop
 cookie=0x8220000, duration=382918.197s, table=81, n_packets=1646, n_bytes=99416, priority=0 actions=drop
 cookie=0x6900000, duration=5687.429s, table=210, n_packets=0, n_bytes=0, priority=63010,udp,metadata=0xd60000000000/0xfffff0000000000,tp_src=67,tp_dst=68 actions=drop
 cookie=0x6900000, duration=5687.428s, table=210, n_packets=0, n_bytes=0, priority=63010,udp6,metadata=0xd60000000000/0xfffff0000000000,tp_src=547,tp_dst=546 actions=drop
 cookie=0x6900000, duration=5687.427s, table=210, n_packets=0, n_bytes=0, priority=63020,icmp6,metadata=0xd60000000000/0xfffff0000000000,icmp_type=134,icmp_code=0 actions=drop
 cookie=0x6900000, duration=5684.776s, table=210, n_packets=0, n_bytes=0, priority=63010,udp,metadata=0xd70000000000/0xfffff0000000000,tp_src=67,tp_dst=68 actions=drop
 cookie=0x6900000, duration=5684.776s, table=210, n_packets=0, n_bytes=0, priority=63010,udp6,metadata=0xd70000000000/0xfffff0000000000,tp_src=547,tp_dst=546 actions=drop
 cookie=0x6900000, duration=5684.776s, table=210, n_packets=0, n_bytes=0, priority=63020,icmp6,metadata=0xd70000000000/0xfffff0000000000,icmp_type=134,icmp_code=0 actions=drop
 cookie=0x6900000, duration=5632.096s, table=210, n_packets=0, n_bytes=0, priority=63010,udp,metadata=0xd80000000000/0xfffff0000000000,tp_src=67,tp_dst=68 actions=drop
 cookie=0x6900000, duration=5632.084s, table=210, n_packets=0, n_bytes=0, priority=63010,udp6,metadata=0xd80000000000/0xfffff0000000000,tp_src=547,tp_dst=546 actions=drop
 cookie=0x6900000, duration=5632.084s, table=210, n_packets=0, n_bytes=0, priority=63020,icmp6,metadata=0xd80000000000/0xfffff0000000000,icmp_type=134,icmp_code=0 actions=drop
 cookie=0x6900000, duration=5629.552s, table=210, n_packets=0, n_bytes=0, priority=63010,udp,metadata=0xd90000000000/0xfffff0000000000,tp_src=67,tp_dst=68 actions=drop
 cookie=0x6900000, duration=5629.552s, table=210, n_packets=0, n_bytes=0, priority=63010,udp6,metadata=0xd90000000000/0xfffff0000000000,tp_src=547,tp_dst=546 actions=drop
 cookie=0x6900000, duration=5629.552s, table=210, n_packets=0, n_bytes=0, priority=63020,icmp6,metadata=0xd90000000000/0xfffff0000000000,icmp_type=134,icmp_code=0 actions=drop
 cookie=0x6900000, duration=382918.120s, table=210, n_packets=16, n_bytes=736, priority=63009,arp actions=drop
 cookie=0x6900000, duration=382918.120s, table=210, n_packets=246, n_bytes=22204, priority=61009,ipv6 actions=drop
 cookie=0x6900000, duration=382918.120s, table=210, n_packets=0, n_bytes=0, priority=61009,ip actions=drop
 cookie=0x6900000, duration=382918.120s, table=210, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x6900000, duration=382918.120s, table=212, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x6900001, duration=5687.403s, table=214, n_packets=4, n_bytes=360, priority=62020,ct_state=+inv+trk,metadata=0xd60000000000/0xfffff0000000000 actions=drop
 cookie=0x6900001, duration=5684.750s, table=214, n_packets=2, n_bytes=180, priority=62020,ct_state=+inv+trk,metadata=0xd70000000000/0xfffff0000000000 actions=drop
 cookie=0x6900001, duration=5632.053s, table=214, n_packets=4, n_bytes=360, priority=62020,ct_state=+inv+trk,metadata=0xd80000000000/0xfffff0000000000 actions=drop
 cookie=0x6900001, duration=5629.521s, table=214, n_packets=2, n_bytes=180, priority=62020,ct_state=+inv+trk,metadata=0xd90000000000/0xfffff0000000000 actions=drop
 cookie=0x6900001, duration=5687.402s, table=214, n_packets=0, n_bytes=0, priority=50,metadata=0xd60000000000/0xfffff0000000000 actions=drop
 cookie=0x6900001, duration=5684.745s, table=214, n_packets=0, n_bytes=0, priority=50,metadata=0xd70000000000/0xfffff0000000000 actions=drop
 cookie=0x6900001, duration=5632.049s, table=214, n_packets=0, n_bytes=0, priority=50,metadata=0xd80000000000/0xfffff0000000000 actions=drop
 cookie=0x6900001, duration=5629.519s, table=214, n_packets=0, n_bytes=0, priority=50,metadata=0xd90000000000/0xfffff0000000000 actions=drop
 cookie=0x6900000, duration=382918.120s, table=214, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x6900000, duration=382918.120s, table=217, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x8000007, duration=382310.981s, table=220, n_packets=0, n_bytes=0, priority=10,reg6=0x400,metadata=0x1/0x1 actions=drop
 cookie=0x8000007, duration=382310.946s, table=220, n_packets=0, n_bytes=0, priority=10,reg6=0x300,metadata=0x1/0x1 actions=drop
 cookie=0x6900000, duration=382918.120s, table=240, n_packets=2759, n_bytes=2909556, priority=0 actions=drop
 cookie=0x6900000, duration=382918.120s, table=242, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x6900001, duration=5687.448s, table=244, n_packets=18, n_bytes=1692, priority=62020,ct_state=+inv+trk,reg6=0xd600/0xfffff00 actions=drop
 cookie=0x6900001, duration=5684.776s, table=244, n_packets=0, n_bytes=0, priority=62020,ct_state=+inv+trk,reg6=0xd700/0xfffff00 actions=drop
 cookie=0x6900001, duration=5632.139s, table=244, n_packets=9, n_bytes=846, priority=62020,ct_state=+inv+trk,reg6=0xd800/0xfffff00 actions=drop
 cookie=0x6900001, duration=5629.552s, table=244, n_packets=0, n_bytes=0, priority=62020,ct_state=+inv+trk,reg6=0xd900/0xfffff00 actions=drop
 cookie=0x6900001, duration=5687.445s, table=244, n_packets=3, n_bytes=1026, priority=50,reg6=0xd600/0xfffff00 actions=drop
 cookie=0x6900001, duration=5684.776s, table=244, n_packets=3, n_bytes=1026, priority=50,reg6=0xd700/0xfffff00 actions=drop
 cookie=0x6900001, duration=5632.136s, table=244, n_packets=1, n_bytes=342, priority=50,reg6=0xd800/0xfffff00 actions=drop
 cookie=0x6900001, duration=5629.552s, table=244, n_packets=3, n_bytes=1026, priority=50,reg6=0xd900/0xfffff00 actions=drop
 cookie=0x6900000, duration=382918.120s, table=244, n_packets=0, n_bytes=0, priority=0 actions=drop
 cookie=0x6900000, duration=382918.120s, table=247, n_packets=0, n_bytes=0, priority=0 actions=drop
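
To pinpoint which of these drop flows is consuming the multicast packets, one option (a sketch; table 244 is taken from the egress ACL entries in the dump above) is to watch the drop counters while iperf is transmitting:

watch -n1 'ovs-ofctl dump-flows br-int --protocols=OpenFlow13 table=244 | grep drop'

Whichever n_packets counter increments in step with the iperf send rate identifies the flow responsible.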


It is also important to mention that running the same multicast test without the security groups was successful:

[root@traffic-runner ~]# iperf -c 226.94.1.1 -u -T 32 -t 3 -i 1
------------------------------------------------------------
Client connecting to 226.94.1.1, UDP port 5001
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
Setting multicast TTL to 32
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 10.35.185.43 port 44441 connected with 226.94.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   131 KBytes  1.07 Mbits/sec
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec
[  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec
[  3] Sent 269 datagrams
[root@traffic-runner ~]#

[root@listener1 ~]# iperf -s -u -B 226.94.1.1 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 226.94.1.1
Joining multicast group  226.94.1.1
Receiving 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 226.94.1.1 port 5001 connected with 10.35.185.43 port 44441
[ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[  3]  0.0- 1.0 sec   129 KBytes  1.06 Mbits/sec   0.003 ms    0/   90 (0%)
[  3]  1.0- 2.0 sec   128 KBytes  1.05 Mbits/sec   0.003 ms    0/   89 (0%)
[  3]  2.0- 3.0 sec   128 KBytes  1.05 Mbits/sec   0.003 ms    0/   89 (0%)
[  3]  0.0- 3.0 sec   386 KBytes  1.05 Mbits/sec   0.003 ms    0/  269 (0%)

^C[root@listener1 ~]#
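
(For reference, one hedged way to rerun without security groups — the port ID is a placeholder — is to detach the group and disable port security on the instance port:

openstack port set --no-security-group --disable-port-security <port-id>)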


Version-Release number of selected component (if applicable):
2018-03-29.1


How reproducible:
always

Comment 1 Tim Rozet 2018-04-16 19:43:18 UTC
Most of the drops here are in table 244 (Egress ACL) and table 81 (???).

Comment 2 Tim Rozet 2018-04-16 19:51:16 UTC
It would be helpful if you could provide an ofproto/trace on the node that is dropping the packets.  You can do:
1) ovs-ofctl -O openflow13 show br-int (to get the ports)

2) ovs-appctl ofproto/trace br-int in_port=7,udp,dl_src=fa:16:3e:f4:bd:de,dl_dst=ff:ff:ff:ff:ff:ff,nw_dst=255.255.255.255,udp_dst=67,udp_src=68

(Replace the above with your packet/port info.)

That way we can see which flows in the pipeline are being hit and which table is dropping the multicast.
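
A hedged adaptation of that trace for the multicast case here (the in_port and the source MAC/IP are placeholders; the destination MAC is the standard L2 mapping of 226.94.1.1):

ovs-appctl ofproto/trace br-int in_port=7,udp,dl_src=fa:16:3e:f4:bd:de,dl_dst=01:00:5e:5e:01:01,nw_src=10.35.185.36,nw_dst=226.94.1.1,tp_dst=5001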

Comment 3 Victor Pickard 2018-04-16 20:24:59 UTC
When using security groups with multicast traffic, you have to configure the port to allow the IPv4 multicast address, as shown below:

openstack port set --allowed-address ip-address=226.94.1.1,mac-address=01:00:5e:5e:01:01 74ab3b8e-1b95-4fef-a60d-295856b714b6


Replace the port ID in the above command with the port under test, and please confirm whether the multicast packets reach the receiver (listener1).
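
For reference, the MAC address in that command is the standard IPv4-multicast L2 mapping: 01:00:5e followed by the low 23 bits of the group IP (RFC 1112). A quick shell sketch of the derivation:

# derive the multicast MAC for 226.94.1.1
IFS=. read -r o1 o2 o3 o4 <<< "226.94.1.1"
printf '01:00:5e:%02x:%02x:%02x\n' $((o2 & 0x7f)) "$o3" "$o4"
# prints 01:00:5e:5e:01:01

The port UUID for the command can be found with, e.g., openstack port list --server <server-name>.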

Comment 4 Ziv Greenberg 2018-04-17 10:26:42 UTC
(In reply to Victor Pickard from comment #3)
> When using security groups with multicast traffic, you have to configure the
> port to allow ipv4 multicast address as shown below:
> 
> openstack port set --allowed-address
> ip-address=226.94.1.1,mac-address=01:00:5e:5e:01:01
> 74ab3b8e-1b95-4fef-a60d-295856b714b6
> 
> 
> Replace the port in the above command with the port under test, and please
> confirm if the multicast packets reach the receiver (listener1).

Hi Victor,

It did the trick, the multicast packets have been received in the listener1 instance as expected.

My question is: when executing the same test with a Neutron (OVS) deployment instead, why didn't I need to configure the OpenStack port at all?

Thanks,
Ziv

Comment 6 Victor Pickard 2018-04-23 20:17:20 UTC
Hi Ziv,

With ODL as the backend driver, the multicast packets are dropped by the ACL checks. Configuring the port as above adds rules to the pipeline that allow packets matching that IP to egress the switch.

It is my understanding that OVS (a Neutron deployment) floods multicast packets to all ports (unless IGMP snooping is enabled, in which case they are sent only to registered listeners/receivers) and doesn't require explicit rules to allow packets to egress (based on your earlier comment).

Do you think this is an issue, or just something that we need to make sure is documented properly when using ODL as the backend driver?
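
A hedged way to verify the fix took effect is to look for the new allow rules that the allowed-address pair installs in the ACL tables, for example:

ovs-ofctl dump-flows br-int --protocols=OpenFlow13 | grep 226.94.1.1

And on the plain-OVS flooding behavior mentioned above, IGMP snooping is toggled per bridge (bridge name as appropriate to the deployment):

ovs-vsctl set Bridge br-int mcast_snooping_enable=true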

Comment 7 Ziv Greenberg 2018-04-25 08:28:30 UTC
Hi Victor,

Thank you for the detailed explanation!
I think this question should be addressed at the product level.

Franck, Nir, please let us know your thoughts.

Comment 8 Ziv Greenberg 2018-04-26 07:39:15 UTC

*** This bug has been marked as a duplicate of bug 1550663 ***

