Bug 1761374
| Summary: | [RFE][OVN] [RHEL8] IGMP Relay support in OVN | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Fast Datapath | Reporter: | Numan Siddique <nusiddiq> |
| Component: | ovn2.11 | Assignee: | Dumitru Ceara <dceara> |
| Status: | CLOSED ERRATA | QA Contact: | Jianlin Shi <jishi> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | FDP 19.G | CC: | ctrautma, dceara, jishi, kfida, kzhang, liali, mmichels, nusiddiq, qding |
| Target Milestone: | --- | Keywords: | FutureFeature |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ovn2.11-2.11.1-8.el8fdp | Doc Type: | Enhancement |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1757714 | Environment: | |
| Last Closed: | 2019-11-06 05:23:45 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1757714 | | |
| Bug Blocks: | | | |
Description
Numan Siddique
2019-10-14 09:39:29 UTC
Hi Dumitru,
I tried to test the feature on RHEL 8 with ovn2.11.1-7 but it failed; could you help check the following:
two chassis:
[root@ibm-x3650m5-03 igmp_relay]# ovn-sbctl show
Chassis "hv1"
hostname: "ibm-x3650m5-03.rhts.eng.pek2.redhat.com"
Encap geneve
ip: "20.0.0.25"
options: {csum="true"}
Port_Binding "ls1p1"
Chassis "hv0"
hostname: "ibm-x3650m4-01.rhts.eng.pek2.redhat.com"
Encap geneve
ip: "20.0.0.26"
options: {csum="true"}
Port_Binding "ls2p1"
two logical switches and one logical router:
[root@ibm-x3650m5-03 igmp_relay]# ovn-nbctl show
switch 9491dfbd-340d-439a-a4e8-60e35fc04cba (ls1)
port ls1p1
addresses: ["c2:bc:fa:98:7e:79"]
port s1-lr1
type: router
addresses: ["00:de:ad:ff:01:01"]
router-port: lr1-s1
switch 606ec963-c5c3-46d8-aa91-204c60620b18 (ls2)
port s2-lr1
type: router
addresses: ["00:de:ad:ff:01:02"]
router-port: lr1-s2
port ls2p1
addresses: ["1a:bc:b4:2c:c8:65"]
router e83862d2-d86b-4512-9b87-4f62e2365715 (lr1)
port lr1-s1
mac: "00:de:ad:ff:01:01"
networks: ["1.1.1.1/24"]
port lr1-s2
mac: "00:de:ad:ff:01:02"
networks: ["1.1.2.1/24"]
enable mcast_snoop on the switches and mcast_relay on the router:
ovn-nbctl set Logical_Switch ls1 other_config:mcast_querier="false" other_config:mcast_snoop="true"
ovn-nbctl set Logical_Switch ls2 other_config:mcast_querier="false" other_config:mcast_snoop="true"
ovn-nbctl set logical_router lr1 options:mcast_relay="true"
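the settings can be read back to confirm they took effect (a quick sanity-check sketch using the generic ovn-nbctl DB commands, not output from the original report):
# Read back the multicast options set above.
ovn-nbctl get Logical_Switch ls1 other_config:mcast_snoop
ovn-nbctl get Logical_Switch ls2 other_config:mcast_snoop
ovn-nbctl get Logical_Router lr1 options:mcast_relay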
join multicast group 224.1.1.1 on the host connected to ls2; the group shows up in the southbound DB:
[root@ibm-x3650m5-03 igmp_relay]# ovn-sbctl find IGMP_Group
_uuid : 70f8cf86-17cc-4f1c-99e1-6971373d5d04
address : "224.1.1.1"
chassis : 30d80e47-8397-4cf4-b592-21a5c2e8d83e
datapath : f98f706c-abe8-46ee-a332-0693ca6e7e47
ports : [81b11b99-deaa-4b6e-ba83-6217e653cd0b]
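the chassis, datapath, and ports fields are UUID references into other southbound tables; they can be resolved to names if needed (a sketch reusing the UUIDs printed above, which are specific to this run):
# Resolve the IGMP_Group references to readable records.
ovn-sbctl list Chassis 30d80e47-8397-4cf4-b592-21a5c2e8d83e
ovn-sbctl list Datapath_Binding f98f706c-abe8-46ee-a332-0693ca6e7e47
ovn-sbctl list Port_Binding 81b11b99-deaa-4b6e-ba83-6217e653cd0b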
set icmp_echo_ignore_broadcasts=0 on the host that joined 224.1.1.1 so the kernel answers multicast echo requests:
[root@ibm-x3650m4-01 igmp_relay]# ip netns exec server0 sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
net.ipv4.icmp_echo_ignore_broadcasts = 0
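this is needed because the kernel ignores echo requests sent to broadcast/multicast addresses by default; the value can be read back to confirm (a sketch):
# Verify the sysctl inside the namespace.
ip netns exec server0 sysctl net.ipv4.icmp_echo_ignore_broadcasts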
but pinging 224.1.1.1 from the host connected to ls1 fails:
[root@ibm-x3650m5-03 ~]# ip netns exec client0 ping 224.1.1.1 -c 1
PING 224.1.1.1 (224.1.1.1) 56(84) bytes of data.
--- 224.1.1.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
Hi Jianlin,
Looks like there's a bug, I see this in ovn-controller.log: "2019-10-16T08:04:27.298Z|00023|lflow|WARN|error parsing actions "clone { outport = "_MC_mrouter_flood"; output; }; drop;": Syntax error at `drop' expecting action".
I'll send a fix for it soon. Until then you can disable mcast_snoop on the client side to avoid the issue.
Thanks,
Dumitru
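In practice the suggested workaround boils down to one command (a sketch; it assumes the "client side" switch is ls1 from the topology above):
# Disable snooping on the client-side switch until the fix is available,
# per the comment above.
ovn-nbctl set Logical_Switch ls1 other_config:mcast_snoop="false"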
(In reply to Dumitru Ceara from comment #4)
> Hi Jianlin,
>
> Looks like there's a bug, I see this in ovn-controller.log:
> "2019-10-16T08:04:27.298Z|00023|lflow|WARN|error parsing actions "clone {
> outport = "_MC_mrouter_flood"; output; }; drop;": Syntax error at `drop'
> expecting action".
>
> I'll send a fix for it soon. Until then you can disable mcast_snoop on the
> client side to avoid the issue.

An alternative to work around the issue is to enable mcast_flood_unregistered in ls1's other_config.

Verified on ovn2.11.1-8:
[root@ibm-x3650m4-01 igmp_relay]# rpm -qa | grep ovn
ovn2.11-host-2.11.1-8.el8fdp.x86_64
kernel-kernel-networking-openvswitch-ovn-1.0-146.noarch
ovn2.11-central-2.11.1-8.el8fdp.x86_64
ovn2.11-2.11.1-8.el8fdp.x86_64
setup on server:
systemctl start openvswitch
systemctl start ovn-northd
systemctl status ovn-northd
ovn-nbctl set-connection ptcp:6641
ovn-sbctl set-connection ptcp:6642
netstat -anp | grep 6642
ovs-vsctl set Open_vSwitch . external-ids:system-id=hv0
ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=tcp:20.0.0.26:6642
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-type=geneve
ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=tcp:20.0.0.25:6642
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=20.0.0.26
systemctl start ovn-controller
ovs-vsctl show
ip netns add server0
ip link add veth0_s0 type veth peer name veth0_s0_p
ip link set veth0_s0 netns server0
ip netns exec server0 ip link set veth0_s0 address 1a:bc:b4:2c:c8:65
ovs-vsctl add-port br-int veth0_s0_p
ip link sh veth0_s0_p
ip netns exec server0 ip l
ip link set veth0_s0_p up
ip netns exec server0 ip link set lo up
ip netns exec server0 ip link set veth0_s0 up
ovs-vsctl set interface veth0_s0_p external_ids:iface-id=ls2p1
ip netns exec server0 ip addr sh
ip netns exec server0 ip addr add 1.1.2.2/24 dev veth0_s0
ip netns exec server0 ip route add default via 1.1.2.1 dev veth0_s0
ip netns exec server0 sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
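before moving to the client, the binding key set on the interface can be read back (a sanity-check sketch; the logical port ls2p1 itself is only created later, from the other host):
# Read back the iface-id; ovn-controller uses it to claim the port once
# the logical port ls2p1 exists in the northbound DB.
ovs-vsctl get Interface veth0_s0_p external_ids:iface-id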
setup on client:
systemctl start openvswitch
systemctl start ovn-northd
systemctl status ovn-northd
ovn-sbctl set-connection ptcp:6642
ovn-nbctl set-connection ptcp:6641
ovs-vsctl set Open_vSwitch . external-ids:system-id=hv1
ovs-vsctl set Open_vSwitch . external-ids:ovn-remote=tcp:20.0.0.25:6642
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-type=geneve
ovs-vsctl set Open_vSwitch . external-ids:ovn-encap-ip=20.0.0.25
systemctl start ovn-controller
ovs-vsctl show
ip netns add client0
ip link add veth0_c0 type veth peer name veth0_c0_p
ip link set veth0_c0 netns client0
ovs-vsctl add-port br-int veth0_c0_p
ip netns exec client0 ip link set lo up
ip netns exec client0 ip link set veth0_c0 up
ip link set veth0_c0_p up
ovs-vsctl show
ovs-vsctl set interface veth0_c0_p external_ids:iface-id=ls1p1
ovn-nbctl ls-add ls1
ovn-nbctl lsp-add ls1 ls1p1
ip netns exec client0 ip link sh veth0_c0
ip netns exec client0 ip link set veth0_c0 address c2:bc:fa:98:7e:79
ovn-nbctl lsp-set-addresses ls1p1 c2:bc:fa:98:7e:79
ovn-nbctl ls-add ls2
ovn-nbctl lsp-add ls2 ls2p1
ovn-nbctl lsp-set-addresses ls2p1 1a:bc:b4:2c:c8:65
ovn-nbctl show
ovn-nbctl lr-add lr1
ovn-nbctl lrp-add lr1 lr1-s1 00:de:ad:ff:01:01 1.1.1.1/24
ovn-nbctl lrp-add lr1 lr1-s2 00:de:ad:ff:01:02 1.1.2.1/24
ovn-nbctl lsp-add ls1 s1-lr1
ovn-nbctl lsp-set-type s1-lr1 router
ovn-nbctl lsp-set-addresses s1-lr1 00:de:ad:ff:01:01
ovn-nbctl lsp-set-options s1-lr1 router-port=lr1-s1
ovn-nbctl lsp-add ls2 s2-lr1
ovn-nbctl lsp-set-type s2-lr1 router
ovn-nbctl lsp-set-addresses s2-lr1 00:de:ad:ff:01:02
ovn-nbctl lsp-set-options s2-lr1 router-port=lr1-s2
ip netns exec client0 ip addr add 1.1.1.2/24 dev veth0_c0
ip netns exec client0 ip route add default via 1.1.1.1 dev veth0_c0
ovn-nbctl set Logical_Switch ls1 other_config:mcast_querier="false" other_config:mcast_snoop="true"
ovn-nbctl set Logical_Switch ls2 other_config:mcast_querier="false" other_config:mcast_snoop="true"
ovn-nbctl set logical_router lr1 options:mcast_relay="true"
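with the topology and multicast options in place, the relay flows generated by ovn-northd can be inspected (a sketch; the grep pattern is just a guess at useful keywords):
# Look for IGMP/multicast related logical flows in the southbound DB.
ovn-sbctl lflow-list | grep -iE 'igmp|mcast'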
join the multicast group on the server:
[root@ibm-x3650m4-01 igmp_relay]# ip netns exec server0 join_group -f 4 -g 224.1.1.1 -i veth0_s0 &
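join_group here appears to be a helper from the kernel-networking test suite; if it is not available, socat can hold an equivalent membership (a sketch, assuming socat is installed; UDP port 9999 is arbitrary):
# Keep an IGMP membership for 224.1.1.1 open on veth0_s0 (runs until
# killed, like the backgrounded join_group above).
ip netns exec server0 socat UDP4-RECV:9999,ip-add-membership=224.1.1.1:veth0_s0 /dev/null &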
show the IGMP group learned for the logical switch:
[root@ibm-x3650m5-03 igmp_relay]# ovn-sbctl list IGMP_Group
_uuid : 68c46888-0b15-443c-91ce-641ef2b6ad9d
address : "224.1.1.1"
chassis : 2b1b4695-49fb-46fa-9e8d-182adceb64cb
datapath : a2881440-0778-41d5-bda6-37377a0960e3
ports : [1b56b050-83f6-4602-8c8c-ebea36279fd6]
ping the multicast group from the client (-t 64 raises the TTL above the multicast default of 1 so the packet can cross the router; the reply comes back with ttl=63 after the hop):
[root@ibm-x3650m5-03 ~]# ip netns exec client0 ping 224.1.1.1 -c 1 -t 64
PING 224.1.1.1 (224.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.2.2: icmp_seq=1 ttl=63 time=1.18 ms
--- 224.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.179/1.179/1.179/0.000 ms
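to confirm the echo request really crosses the router to the receiver (and is not answered locally), one can capture in the server namespace while the ping runs (a sketch, not part of the original verification):
# On the server host: watch the ICMP echo request/reply on the receiver side.
ip netns exec server0 tcpdump -nni veth0_s0 icmp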
toggle mcast_relay to false and back to true:
[root@ibm-x3650m5-03 ~]# ovn-nbctl set logical_router lr1 options:mcast_relay="false"
[root@ibm-x3650m5-03 ~]# ip netns exec client0 ping 224.1.1.1 -c 1 -t 64
PING 224.1.1.1 (224.1.1.1) 56(84) bytes of data.
--- 224.1.1.1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
<==== fails because mcast_relay is false, so the router does not relay the multicast traffic
[root@ibm-x3650m5-03 ~]# ovn-nbctl set logical_router lr1 options:mcast_relay="true"
[root@ibm-x3650m5-03 ~]# ip netns exec client0 ping 224.1.1.1 -c 1 -t 64
PING 224.1.1.1 (224.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.2.2: icmp_seq=1 ttl=63 time=1.00 ms
--- 224.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.000/1.000/1.000/0.000 ms
<==== passes when mcast_relay is true
set VERIFIED
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:3721