Bug 1886314 - IP multicast doesn't work across localnet ports
Summary: IP multicast doesn't work across localnet ports
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: ovn2.13
Version: FDP 20.E
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Dumitru Ceara
QA Contact: ying xu
URL:
Whiteboard:
Depends On:
Blocks: 1575512 1886103
 
Reported: 2020-10-08 07:45 UTC by Dumitru Ceara
Modified: 2020-12-01 07:53 UTC
CC: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1886103
Environment:
Last Closed: 2020-10-27 09:49:14 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2020:4356 (last updated 2020-10-27 09:49:35 UTC)

Description Dumitru Ceara 2020-10-08 07:45:03 UTC
+++ This bug was initially created as a clone of Bug #1886103 +++

Description of problem:
While verifying IP multicast with the receiver and sender VMs on two different compute nodes, the receiver does not get the multicast traffic.

Version-Release number of selected component (if applicable):
OSP puddle is:
tag: 16.1_20200930.1
OVN rpm used is:
 ovn2.13-20.06.2-11.el8fdp.x86_64

How reproducible:


Steps to Reproduce:
1. Set NeutronEnableIgmpSnooping: true in the THT template.
2. Create a VLAN provider network for the multicast verification (a CLI sketch is included after the multicast commands below):
| provider:network_type     | vlan  |
| provider:physical_network | data2 |
| provider:segmentation_id  | 409   |

3. Spawn two VMs attached to the above network.
4. Run the multicast scenario; the script is based on the Red Hat solution at
https://access.redhat.com/solutions/5165391

multicast receiver:
./multicast.py -I 50.0.3.52 -M 239.0.0.1 -s -p 5405
multicast sender:
./multicast.py -I 50.0.3.26 -M 239.0.0.1 hello -p 5405
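For reference, step 2 above could be done with the OpenStack CLI roughly as follows. The network/subnet names and the 50.0.3.0/24 range are assumptions based on the VM addresses shown above; only the provider attributes come from this report.

# hypothetical names; provider attributes taken from the bug description
openstack network create mcast-vlan409 \
    --provider-network-type vlan \
    --provider-physical-network data2 \
    --provider-segment 409
# subnet range assumed from the 50.0.3.x addresses used by the VMs
openstack subnet create mcast-vlan409-subnet \
    --network mcast-vlan409 \
    --subnet-range 50.0.3.0/24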


Actual results:
When the sender and receiver VMs are on the same compute node, the receiver is able to get multicast packets.
But when the sender and receiver VMs are on different compute nodes, the receiver is not able to get multicast packets.

Expected results:
When the sender and receiver VMs are on different compute nodes, the receiver should still be able to get multicast packets.

Additional info:

Comment 5 ying xu 2020-10-13 09:14:19 UTC
I reproduced it on version:
# rpm -qa|grep ovn
ovn2.13-20.09.0-1.el8fdp.x86_64
ovn2.13-central-20.09.0-1.el8fdp.x86_64
ovn2.13-host-20.09.0-1.el8fdp.x86_64

script as below:
server:

ovn-nbctl ls-add ls
ovn-nbctl lsp-add ls vm1
ovn-nbctl lsp-set-addresses vm1 00:00:00:00:00:01
ovn-nbctl lsp-add ls vm3
ovn-nbctl lsp-set-addresses vm3 00:00:00:00:00:03

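# localnet port ln_p1 attaches logical switch 'ls' to physical network 'nattest' (mapped to OVS bridge nat_test below)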
ovn-nbctl lsp-add ls ln_p1
ovn-nbctl lsp-set-addresses ln_p1 unknown
ovn-nbctl lsp-set-type ln_p1 localnet
ovn-nbctl lsp-set-options ln_p1 network_name=nattest
ovs-vsctl add-br nat_test
ip link set nat_test up
ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge-mappings=nattest:nat_test
ovs-vsctl add-port nat_test $nic_test2
ip link set $nic_test2 up

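# vm1: namespace simulating the VM bound to this (server) chassis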
ip netns add vm1
ovs-vsctl add-port br-int vm1 -- set interface vm1 type=internal
ip link set vm1 netns vm1
ip netns exec vm1 ip link set vm1 address 00:00:00:00:00:01
ip netns exec vm1 ip addr add 42.42.42.2/24 dev vm1
ip netns exec vm1 ip -6 addr add 2000::2/64 dev vm1
ip netns exec vm1 ip link set vm1 up
ip netns exec vm1 ip route add default via 42.42.42.1
ip netns exec vm1 ip -6 route add default via 2000::1
ovs-vsctl set Interface vm1 external_ids:iface-id=vm1


# enable IGMP/MLD snooping and querier on the logical switch
ovn-nbctl set logical_switch ls other_config:mcast_querier=true other_config:mcast_snoop=true other_config:mcast_query_interval=30 other_config:mcast_eth_src=00:00:00:00:00:05 other_config:mcast_ip4_src=42.42.42.5 other_config:mcast_ip6_src=fe80::1
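A quick way to confirm the multicast options were applied (a verification step, not part of the original reproducer):

# show the multicast-related other_config on the logical switch
ovn-nbctl --bare --columns=other_config list logical_switch ls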


client:
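# vm3: namespace simulating the VM bound to this (client) chassis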
ip netns add vm3
ovs-vsctl add-port br-int vm3 -- set interface vm3 type=internal
ip link set vm3 netns vm3
ip netns exec vm3 ip link set vm3 address 00:00:00:00:00:03
ip netns exec vm3 ip addr add 42.42.42.3/24 dev vm3
ip netns exec vm3 ip -6 addr add 2000::3/64 dev vm3
ip netns exec vm3 ip link set vm3 up
ip netns exec vm3 ip link set lo up
ip netns exec vm3 ip route add default via 42.42.42.1
ip netns exec vm3 ip -6 route add default via 2000::1
ovs-vsctl set Interface vm3 external_ids:iface-id=vm3

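# same bridge mapping on the client chassis so localnet traffic can reach the physical network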
ovs-vsctl add-br nat_test
ip link set nat_test up
ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge-mappings=nattest:nat_test
ovs-vsctl add-port nat_test $nic_test2
ip link set $nic_test2 up
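To double-check the client side before testing (assumed verification commands, not in the original script):

# confirm vm3 is bound on this chassis and the physical NIC sits in nat_test
ovn-sbctl --columns=logical_port,chassis find port_binding logical_port=vm3
ovs-vsctl list-ports nat_test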

then on the client, join the groups:
ip netns exec vm3 join_group -f 4 -g 224.42.1.1 -i vm3 &
ip netns exec vm3 join_group -f 6 -g ff0e::1234 -i vm3 &
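If the join_group helper is not available, roughly the same IPv4 join can be done from the namespace with an inline Python snippet (a sketch only; the helper's exact behaviour is assumed):

# join 224.42.1.1 on vm3's address and stay joined for 5 minutes
ip netns exec vm3 python3 -c '
import socket, struct, time
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mreq = struct.pack("4s4s", socket.inet_aton("224.42.1.1"), socket.inet_aton("42.42.42.3"))
s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
time.sleep(300)
' &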

and check the groups on the server:
ovn-sbctl list IGMP_group
_uuid               : 74fb9fe0-5560-4bbd-8132-8b2521c8003f
address             : "ff02::1:ff00:2"
chassis             : 061065ac-bb0b-42ec-89cc-e8ceee42eea1
datapath            : 2dfb0efa-a8f8-49a7-974c-21e621f6da07
ports               : [7fe7eadc-4338-424a-92b8-a7b58567ff99]

_uuid               : afefc289-345c-494f-8d32-1f0a1139afc2
address             : "ff02::1:ff00:3"
chassis             : a9c0fe0b-bab1-479d-be2a-af19981afec9
datapath            : 2dfb0efa-a8f8-49a7-974c-21e621f6da07
ports               : [2fe1bdc9-407a-4a5e-8519-365e9cbb505a]

_uuid               : 86b2d8a9-4f76-4541-9e58-6558477145cf
address             : "ff0e::1234"                 <-- IPv6 group
chassis             : a9c0fe0b-bab1-479d-be2a-af19981afec9
datapath            : 2dfb0efa-a8f8-49a7-974c-21e621f6da07
ports               : [2fe1bdc9-407a-4a5e-8519-365e9cbb505a]

_uuid               : a4d52311-56b1-492c-a726-edac1366a542
address             : "ff02::1:ff00:1"
chassis             : 061065ac-bb0b-42ec-89cc-e8ceee42eea1
datapath            : 2dfb0efa-a8f8-49a7-974c-21e621f6da07
ports               : [7fe7eadc-4338-424a-92b8-a7b58567ff99]

_uuid               : e7d9117a-c8f7-4dae-a51b-d1c3b46475e6
address             : "224.42.1.1"                 <-- IPv4 group
chassis             : a9c0fe0b-bab1-479d-be2a-af19981afec9
datapath            : 2dfb0efa-a8f8-49a7-974c-21e621f6da07
ports               : [2fe1bdc9-407a-4a5e-8519-365e9cbb505a]

then ping from the server to the client and run tcpdump on the client; the client can't receive the ICMP packets.
# tcpdump -r vm3.pcap -vv -n | grep '224.42.1.1.*ICMP'
reading from file vm3.pcap, link-type LINUX_SLL (Linux cooked v1)
dropped privs to tcpdump                       <-- no matching ICMP packets captured
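The exact ping and capture commands are not recorded in the comment; they were presumably along these lines (interface and packet counts assumed):

# server chassis: ping the IPv4 and IPv6 groups from vm1
ip netns exec vm1 ping -I vm1 -c 3 224.42.1.1
ip netns exec vm1 ping -6 -I vm1 -c 3 ff0e::1234
# client chassis: capture on vm3 for later inspection
ip netns exec vm3 tcpdump -i vm3 -w vm3.pcap &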


and verified on version:

# rpm -qa|grep ovn
ovn2.13-20.09.0-2.el8fdp.x86_64
ovn2.13-central-20.09.0-2.el8fdp.x86_64
ovn2.13-host-20.09.0-2.el8fdp.x86_64

tcpdump -r vm3.pcap -vv -n | grep '224.42.1.1.*ICMP'
reading from file vm3.pcap, link-type LINUX_SLL (Linux cooked v1)
dropped privs to tcpdump
    42.42.42.2 > 224.42.1.1: ICMP echo request, id 43945, seq 1, length 64
    42.42.42.2 > 224.42.1.1: ICMP echo request, id 43945, seq 2, length 64
    42.42.42.2 > 224.42.1.1: ICMP echo request, id 43945, seq 3, length 64
tcpdump -r vm3.pcap -vv -n | grep 'ff0e::1234.*icmp6'
reading from file vm3.pcap, link-type LINUX_SLL (Linux cooked v1)
dropped privs to tcpdump
04:02:12.556075 IP6 (flowlabel 0x9bfc3, hlim 1, next-header ICMPv6 (58) payload length: 64) 2000::2 > ff0e::1234: [icmp6 sum ok] ICMP6, echo request, seq 1
04:02:13.584300 IP6 (flowlabel 0x9bfc3, hlim 1, next-header ICMPv6 (58) payload length: 64) 2000::2 > ff0e::1234: [icmp6 sum ok] ICMP6, echo request, seq 2
04:02:14.608253 IP6 (flowlabel 0x9bfc3, hlim 1, next-header ICMPv6 (58) payload length: 64) 2000::2 > ff0e::1234: [icmp6 sum ok] ICMP6, echo request, seq 3

Comment 7 errata-xmlrpc 2020-10-27 09:49:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (ovn2.13 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4356

