Bug 1761376
| Summary: | [RFE][OVN] [RHEL 8] Static IP multicast flood configuration | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Fast Datapath | Reporter: | Numan Siddique <nusiddiq> |
| Component: | ovn2.11 | Assignee: | Dumitru Ceara <dceara> |
| Status: | CLOSED ERRATA | QA Contact: | ying xu <yinxu> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | FDP 19.G | CC: | ctrautma, dceara, fleitner, jishi, kzhang, liali, mmichels, nusiddiq, qding |
| Target Milestone: | --- | Keywords: | FutureFeature |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Enhancement |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1757715 | Environment: | |
| Last Closed: | 2019-11-06 05:23:45 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1757715 | | |
| Bug Blocks: | | | |
Description
Numan Siddique
2019-10-14 09:45:16 UTC
Create the following topology to test the feature:

client0---ls1---lr1---ls2---server0
                       |
                    server1

Setup on the server side:
systemctl start openvswitch
systemctl start ovn-northd
systemctl status ovn-northd
ovn-nbctl set-connection ptcp:6641
ovn-sbctl set-connection ptcp:6642
netstat -anp | grep 6642
ovs-vsctl set Open_vSwitch . external-ids:system-id=hv0
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-type=geneve
ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=tcp:20.0.0.25:6642
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=20.0.0.26
systemctl start ovn-controller
ovs-vsctl show
ip netns add server0
ip link add veth0_s0 type veth peer name veth0_s0_p
ip link set veth0_s0 netns server0
ip netns exec server0 ip link set veth0_s0 address 1a:bc:b4:2c:c8:65
ovs-vsctl add-port br-int veth0_s0_p
ip link sh veth0_s0_p
ip netns exec server0 ip l
ip link set veth0_s0_p up
ip netns exec server0 ip link set lo up
ip netns exec server0 ip link set veth0_s0 up
ovs-vsctl set interface veth0_s0_p external_ids:iface-id=ls2p1
ip netns exec server0 ip addr sh
ip netns exec server0 ip addr add 1.1.2.2/24 dev veth0_s0
ip netns exec server0 ip route add default via 1.1.2.1 dev veth0_s0
ip netns exec server0 sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
ip netns add server1
ip link add veth0_s1 type veth peer name veth0_s1_p
ip link set veth0_s1 netns server1
ip netns exec server1 ip link set veth0_s1 address 1a:bc:b4:2c:c8:64
ovs-vsctl add-port br-int veth0_s1_p
ip link sh veth0_s1_p
ip netns exec server1 ip l
ip link set veth0_s1_p up
ip netns exec server1 ip link set lo up
ip netns exec server1 ip link set veth0_s1 up
ovs-vsctl set interface veth0_s1_p external_ids:iface-id=ls2p2
ip netns exec server1 ip addr sh
ip netns exec server1 ip addr add 1.1.2.3/24 dev veth0_s1
ip netns exec server1 ip route add default via 1.1.2.1 dev veth0_s1
ip netns exec server1 sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
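The icmp_echo_ignore_broadcasts sysctl is needed because Linux ignores ICMP echo requests sent to broadcast and multicast destinations by default. A quick sketch to confirm the setting in both receiver namespaces (assuming the server0/server1 namespaces created above exist):

```shell
# Verify that both server namespaces will answer multicast pings;
# each line should report a value of 0.
for ns in server0 server1; do
    ip netns exec "$ns" sysctl net.ipv4.icmp_echo_ignore_broadcasts
done
```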
Setup on the client side:
systemctl start openvswitch
systemctl start ovn-northd
systemctl status ovn-northd
ovn-sbctl set-connection ptcp:6642
ovn-nbctl set-connection ptcp:6641
ovs-vsctl set Open_vSwitch . external-ids:system-id=hv1
ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=tcp:20.0.0.25:6642
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-type=geneve
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=20.0.0.25
systemctl start ovn-controller
ovs-vsctl show
ip netns add client0
ip link add veth0_c0 type veth peer name veth0_c0_p
ip link set veth0_c0 netns client0
ovs-vsctl add-port br-int veth0_c0_p
ip netns exec client0 ip link set lo up
ip netns exec client0 ip link set veth0_c0 up
ip link set veth0_c0_p up
ovs-vsctl show
ovs-vsctl set interface veth0_c0_p external_ids:iface-id=ls1p1
ovn-nbctl ls-add ls1
ovn-nbctl lsp-add ls1 ls1p1
ip netns exec client0 ip link sh veth0_c0
ip netns exec client0 ip link set veth0_c0 address c2:bc:fa:98:7e:79
ovn-nbctl lsp-set-addresses ls1p1 c2:bc:fa:98:7e:79
ovn-nbctl ls-add ls2
ovn-nbctl lsp-add ls2 ls2p1
ovn-nbctl lsp-set-addresses ls2p1 1a:bc:b4:2c:c8:65
ovn-nbctl lsp-add ls2 ls2p2
ovn-nbctl lsp-set-addresses ls2p2 1a:bc:b4:2c:c8:64
ovn-nbctl show
ovn-nbctl lr-add lr1
ovn-nbctl lrp-add lr1 lr1-s1 00:de:ad:ff:01:01 1.1.1.1/24
ovn-nbctl lrp-add lr1 lr1-s2 00:de:ad:ff:01:02 1.1.2.1/24
ovn-nbctl lsp-add ls1 s1-lr1
ovn-nbctl lsp-set-type s1-lr1 router
ovn-nbctl lsp-set-addresses s1-lr1 00:de:ad:ff:01:01
ovn-nbctl lsp-set-options s1-lr1 router-port=lr1-s1
ovn-nbctl lsp-add ls2 s2-lr1
ovn-nbctl lsp-set-type s2-lr1 router
ovn-nbctl lsp-set-addresses s2-lr1 00:de:ad:ff:01:02
ovn-nbctl lsp-set-options s2-lr1 router-port=lr1-s2
ip netns exec client0 ip addr add 1.1.1.2/24 dev veth0_c0
ip netns exec client0 ip route add default via 1.1.1.1 dev veth0_c0
==Test for mcast_flood and mcast_flood_reports on logical switch:
Enable mcast_snoop (with the querier disabled) on ls2:
[root@ibm-x3650m5-03 igmp_relay]# ovn-nbctl set Logical_Switch ls2 other_config:mcast_querier=false other_config:mcast_snoop=true
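With snooping enabled, OVN 2.11 records learned multicast groups in the southbound IGMP_Group table. A sketch for inspecting them once receivers have joined (assuming ovn-sbctl can reach the SB database configured above):

```shell
# List the multicast groups OVN has learned via IGMP snooping;
# empty output means no joins have been seen yet.
ovn-sbctl list IGMP_Group
```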
Ping a multicast address from server0 and capture packets on server1:
[root@ibm-x3650m4-01 igmp_relay]# ip netns exec server0 ping 224.1.1.1 -c 1 -t 64
PING 224.1.1.1 (224.1.1.1) 56(84) bytes of data.
--- 224.1.1.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
[root@ibm-x3650m4-01 ~]# ip netns exec server1 tcpdump -i any -nnle
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
<=== doesn't receive the multicast
Set mcast_flood to true on the port connected to server1:
[root@ibm-x3650m5-03 igmp_relay]# ovn-nbctl set Logical_Switch_Port ls2p2 options:mcast_flood=true
[root@ibm-x3650m4-01 ~]# ip netns exec server1 tcpdump -i any -nnle
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
21:48:42.785036 M 1a:bc:b4:2c:c8:65 ethertype IPv4 (0x0800), length 100: 1.1.2.2 > 224.1.1.1: ICMP echo request, id 5817, seq 1, length 64
^C
1 packet captured
1 packet received by filter
0 packets dropped by kernel
<=== receive the multicast
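The "M" flag in the capture marks a frame sent to a multicast MAC, which is derived from the group IP by the standard IPv4-to-Ethernet mapping (RFC 1112): a fixed 01:00:5e prefix followed by the low 23 bits of the address. A small sketch of the mapping for the group used in this test:

```shell
# RFC 1112: IPv4 multicast group -> Ethernet multicast MAC.
# Prefix 01:00:5e, then the low 23 bits of the IP (high bit of the
# second octet is masked off).
group=224.1.1.1
set -- $(echo "$group" | tr '.' ' ')
printf '01:00:5e:%02x:%02x:%02x\n' "$(( $2 & 0x7f ))" "$3" "$4"
# → 01:00:5e:01:01:01
```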
Join a multicast group on server0 and capture packets on server1:
[root@ibm-x3650m4-01 igmp_relay]# ip netns exec server0 join_group -f 4 -g 224.1.1.1 -i veth0_s0 &
[root@ibm-x3650m4-01 ~]# ip netns exec server1 tcpdump -i any -nnle
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
<==== doesn't receive igmp report
Enable mcast_flood_reports on the port connected to server1:
[root@ibm-x3650m5-03 igmp_relay]# ovn-nbctl set Logical_Switch_Port ls2p2 options:mcast_flood_reports=true
[root@ibm-x3650m4-01 ~]# ip netns exec server1 tcpdump -i any -nnle
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
21:51:44.074996 M 1a:bc:b4:2c:c8:65 ethertype IPv4 (0x0800), length 56: 1.1.2.2 > 224.0.0.22: igmp v3 report, 1 group record(s)
21:51:44.379957 M 1a:bc:b4:2c:c8:65 ethertype IPv4 (0x0800), length 56: 1.1.2.2 > 224.0.0.22: igmp v3 report, 1 group record(s)
^C
2 packets captured
2 packets received by filter
0 packets dropped by kernel
<==== receive the igmp reports
==Test for mcast_flood on logical router:
Disable mcast_snoop on ls1 and ls2:
[root@ibm-x3650m5-03 igmp_relay]# ovn-nbctl set Logical_Switch ls2 other_config:mcast_querier="false" other_config:mcast_snoop="false"
[root@ibm-x3650m5-03 igmp_relay]# ovn-nbctl set Logical_Switch ls1 other_config:mcast_querier="false" other_config:mcast_snoop="false"
Enable multicast relay (mcast_relay) on lr1:
[root@ibm-x3650m5-03 igmp_relay]# ovn-nbctl set logical_router lr1 options:mcast_relay="true"
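With mcast_relay set, lr1 should start forwarding IP multicast between ls1 and ls2. A sketch for double-checking that the option was stored and that multicast logical flows were generated (assuming the NB/SB databases configured above; the grep pattern is only illustrative):

```shell
# Confirm the option on the router.
ovn-nbctl get logical_router lr1 options:mcast_relay

# Look for multicast-related logical flows on the router datapath.
ovn-sbctl lflow-list lr1 | grep -i mcast
```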
Send multicast from server0 and capture packets on client0:
[root@ibm-x3650m4-01 igmp_relay]# ip netns exec server0 ping 224.1.1.1 -c 1 -t 64
PING 224.1.1.1 (224.1.1.1) 56(84) bytes of data.
--- 224.1.1.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
[root@ibm-x3650m5-03 ~]# ip netns exec client0 tcpdump -i any -nnle
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
<=== no multicast received
Enable mcast_flood on the router port facing ls1:
[root@ibm-x3650m5-03 igmp_relay]# ovn-nbctl set Logical_Router_Port lr1-s1 options:mcast_flood=true
[root@ibm-x3650m5-03 ~]# ip netns exec client0 tcpdump -i any -nnle
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
22:01:13.765252 M 00:de:ad:ff:01:01 ethertype IPv4 (0x0800), length 100: 1.1.2.2 > 224.1.1.1: ICMP echo request, id 6172, seq 1, length 64
^C
1 packet captured
1 packet received by filter
0 packets dropped by kernel
<==== multicast received
set VERIFIED
[root@ibm-x3650m4-01 igmp_relay]# rpm -qa | grep ovn
ovn2.11-host-2.11.1-8.el8fdp.x86_64
kernel-kernel-networking-openvswitch-ovn-1.0-146.noarch
ovn2.11-central-2.11.1-8.el8fdp.x86_64
ovn2.11-2.11.1-8.el8fdp.x86_64

Tests in comment 3 were done on ovn2.11.1-8.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3721