Description of problem:

enp4s0f1_1 is a representor port.

2021-07-16T12:10:23.111Z|00461|tunnel(handler2)|DBG|tunnel port port 2: ovn-C7-0 (geneve: ::->10.10.51.121, key=flow, legacy_l2, dp port=2, ttl=64, csum=true) receive from flow icmp,tun_id=0x1,tun_src=10.10.51.121,tun_dst=10.10.51.111,tun_ipv6_src=::,tun_ipv6_dst=::,tun_gbp_id=0,tun_gbp_flags=0,tun_tos=0,tun_ttl=64,tun_erspan_ver=0,gtpu_flags=0,gtpu_msgtype=0,tun_flags=csum|key,in_port=2,vlan_tci=0x0000,dl_src=fa:16:3e:41:d4:64,dl_dst=f8:f2:1e:03:bf:f6,nw_src=7.7.7.110,nw_dst=7.7.7.93,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0
2021-07-16T12:10:23.111Z|00462|tunnel(handler2)|DBG|tunnel port port 2: ovn-C7-0 (geneve: ::->10.10.51.121, key=flow, legacy_l2, dp port=2, ttl=64, csum=true) receive from flow icmp,tun_id=0x1,tun_src=10.10.51.121,tun_dst=10.10.51.111,tun_ipv6_src=::,tun_ipv6_dst=::,tun_gbp_id=0,tun_gbp_flags=0,tun_tos=0,tun_ttl=64,tun_erspan_ver=0,gtpu_flags=0,gtpu_msgtype=0,tun_flags=csum|key,in_port=2,vlan_tci=0x0000,dl_src=fa:16:3e:41:d4:64,dl_dst=f8:f2:1e:03:bf:f6,nw_src=7.7.7.110,nw_dst=7.7.7.93,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0
2021-07-16T12:10:23.111Z|00463|netdev_offload_tc(handler2)|DBG|unsupported put action type: 2
2021-07-16T12:10:23.111Z|00464|dpif_netlink(handler2)|DBG|failed to offload flow: Operation not supported: enp4s0f1_1   <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021-07-16T12:10:23.111Z|00465|dpif_netlink(handler2)|DBG|system@ovs-system: put[create] ufid:2bbd4978-1be8-4eea-b6ea-33bad51d0104 recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(3),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=f8:f2:1e:03:bf:f6/01:00:00:00:00:00,dst=fa:16:3e:41:d4:64),eth_type(0x0800),ipv4(src=7.7.7.93/0.0.0.0,dst=7.7.7.110/0.0.0.0,proto=1/0,tos=0/0x3,ttl=64/0,frag=no),icmp(type=0/0,code=0/0), actions:userspace(pid=2657823863,controller(reason=1,dont_send=0,continuation=0,recirc_id=12,rule_cookie=0x1ced6d86,controller_id=0,max_len=65535)),set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x28001}),flags(df|csum|key))),2

While digging into the OVS debug logs:

Before ovs restart:

2021-07-16T12:14:11.969Z|00150|bridge|INFO|bridge br-int: deleted interface enp4s0f1_1 on port 8
2021-07-16T12:14:11.969Z|00151|dpif_netlink|DBG|port_changed: dpif:system@ovs-system vport:enp4s0f1_1 cmd:2
2021-07-16T12:14:11.993Z|00152|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: received unexpected reply message: {"error":null,"id":406,"result":[{"count":1},{}]}
2021-07-16T12:14:11.993Z|00153|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: received unexpected reply message: {"error":null,"id":408,"result":[{"count":1},{}]}
2021-07-16T12:14:20.135Z|00154|netdev_offload_tc|INFO|added ingress qdisc to enp4s0f1_1
2021-07-16T12:14:20.135Z|00155|netdev_offload|INFO|enp4s0f1_1: Assigned flow API 'linux_tc'.
2021-07-16T12:14:20.135Z|00156|bridge|INFO|bridge br-int: added interface enp4s0f1_1 on port 9
2021-07-16T12:14:20.136Z|00157|netdev_linux|DBG|unknown qdisc "mq"
2021-07-16T12:14:20.136Z|00158|dpif_netlink|DBG|port_changed: dpif:system@ovs-system vport:enp4s0f1_1 cmd:1
2021-07-16T12:14:20.145Z|00159|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: received unexpected reply message: {"error":null,"id":410,"result":[{"count":1},{"count":1},{}]}
2021-07-16T12:14:21.974Z|00160|connmgr|INFO|br-int<->unix#0: 42 flow_mods in the 8 s starting 10 s ago (21 adds, 21 deletes)

After ovs restart:

2021-07-16T12:37:30.404Z|00069|bridge|INFO|bridge br-int: deleted interface enp4s0f1_1 on port 9
2021-07-16T12:37:40.404Z|00070|connmgr|INFO|br-int<->unix#0: 21 flow_mods 10 s ago (4 adds, 17 deletes)
2021-07-16T12:37:40.852Z|00071|netdev_offload_tc|INFO|added ingress qdisc to enp4s0f1_1
2021-07-16T12:37:40.852Z|00072|netdev_offload|INFO|enp4s0f1_1: Assigned flow API 'linux_tc'.
2021-07-16T12:37:40.852Z|00073|bridge|INFO|bridge br-int: added interface enp4s0f1_1 on port 10

Flow programming:

[root@hareshcomputesriovoffload-0 heat-admin]# ovs-appctl dpctl/dump-flows -m
ufid:b0fa89c8-7562-474d-a2ec-dc25129ac24e, skb_priority(0/0),tunnel(tun_id=0x1,src=10.10.51.121,dst=10.10.51.159,ttl=0/0,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x30002/0x7fffffff}),flags(+key)),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(genev_sys_6081),packet_type(ns=0/0,id=0/0),eth(src=fa:16:3e:6f:11:ff,dst=00:00:00:00:00:00/01:00:00:00:00:00),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0,ttl=0/0,frag=no), packets:145153, bytes:6096412, used:0.061s, offloaded:yes, dp:tc, actions:enp4s0f1_2
ufid:6f432f1b-bc86-4db9-982d-dae91160d3b5, recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(enp4s0f1_2),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=00:00:00:00:00:00/01:00:00:00:00:00,dst=fa:16:3e:6f:11:ff),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0x3,ttl=0/0,frag=no), packets:11599, bytes:603148, used:0.001s, dp:ovs, actions:userspace(pid=2201295342,controller(reason=1,dont_send=0,continuation=0,recirc_id=8,rule_cookie=0xff2250a2,controller_id=0,max_len=65535)),set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x28001}),flags(df|csum|key))),genev_sys_6081

[root@hareshcomputesriovoffload-0 heat-admin]# systemctl restart ovs-vswitchd
[root@hareshcomputesriovoffload-0 heat-admin]# ovs-appctl dpctl/dump-flows -m
ufid:3485a30d-4a67-4993-8ea3-6527eec4b4d1, skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(enp4s0f1_2),packet_type(ns=0/0,id=0/0),eth(src=f8:f2:1e:03:bf:f6,dst=fa:16:3e:6f:11:ff),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0x3,ttl=0/0,frag=no), packets:2048, bytes:249772, used:0.310s, offloaded:yes, dp:tc, actions:set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081,key6(bad key length 1, expected 0)(01)geneve({class=0x102,type=0x80,len=4,0x28001}),flags(key))),genev_sys_6081
ufid:5813ab17-44a9-491f-818f-9baca5aa5222, skb_priority(0/0),tunnel(tun_id=0x1,src=10.10.51.121,dst=10.10.51.159,ttl=0/0,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x30002/0x7fffffff}),flags(+key)),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(genev_sys_6081),packet_type(ns=0/0,id=0/0),eth(src=fa:16:3e:6f:11:ff,dst=00:00:00:00:00:00/01:00:00:00:00:00),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0,ttl=0/0,frag=no), packets:2050, bytes:86086, used:0.310s, offloaded:yes, dp:tc, actions:enp4s0f1_2

I don't see any difference in the qdisc of enp4s0f1_1.

Before ovs restart:

[root@hareshcomputesriovoffload-0 /]# tc qdisc show dev enp4s0f1_1
qdisc mq 0: root
qdisc fq_codel 0: parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
qdisc ingress ffff: parent ffff:fff1 ----------------

After ovs restart:

[root@hareshcomputesriovoffload-0 /]# tc qdisc show dev enp4s0f1_1
qdisc mq 0: root
qdisc fq_codel 0: parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
qdisc ingress ffff: parent ffff:fff1 ----------------
[root@hareshcomputesriovoffload-0 /]#

Version-Release number of selected component (if applicable):
RHEL: 8.4
kernel: 4.18.0-305.el8.x86_64
Ovs: openvswitch2.15-2.15.0-24.el8fdp.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Configure OVS hardware offload with geneve.
2. Make sure flows in both the ingress and egress directions are offloaded.
3. Reboot the machine.
4. Send traffic again and check that flows are offloaded only in the ingress direction.
5. Restart ovs-vswitchd.
6. Now flows in both directions are offloaded.

Actual results:
Broken offload feature.

Expected results:
Flows should be offloaded in both directions.

Additional info:
Restarting ovs-vswitchd fixes the issue, and traffic in both directions is offloaded to the hardware. I didn't find anything suspicious in the system messages or in dmesg. The issue persists for all VMs created afterwards until ovs-vswitchd is restarted.
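To make step 4 easier to check by script, a small helper along these lines (a hypothetical sketch, not part of any OVS tooling) can classify the entries printed by `ovs-appctl dpctl/dump-flows -m` by datapath and offload state:

```python
import re

def classify_flow(entry):
    """Return (ufid, datapath, offloaded) for one `dpctl/dump-flows -m` entry.

    datapath is the value after `dp:` ('tc' means the flow lives in tc
    flower, 'ovs' means the kernel datapath); offloaded is True only when
    the entry explicitly carries `offloaded:yes`.
    """
    ufid = re.search(r'ufid:([0-9a-f-]+)', entry)
    dp = re.search(r'\bdp:(\w+)', entry)
    return (ufid.group(1) if ufid else None,
            dp.group(1) if dp else None,
            'offloaded:yes' in entry)

# Abbreviated versions of the two entries dumped before the restart:
ingress = ("ufid:b0fa89c8-7562-474d-a2ec-dc25129ac24e, in_port(genev_sys_6081), "
           "packets:145153, offloaded:yes, dp:tc, actions:enp4s0f1_2")
egress = ("ufid:6f432f1b-bc86-4db9-982d-dae91160d3b5, in_port(enp4s0f1_2), "
          "packets:11599, dp:ovs, actions:userspace(...),genev_sys_6081")

assert classify_flow(ingress)[1:] == ('tc', True)   # offloaded
assert classify_flow(egress)[1:] == ('ovs', False)  # stuck in the kernel datapath
```

After the reproducer's reboot, only the ingress entry should come back as ('tc', True); after restarting ovs-vswitchd, both should.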
I briefly discussed this one with Haresh earlier today. This is closely related to how OVS starts things up and needs input from the OVS team. I have no idea what this means:

2021-07-16T12:14:11.993Z|00152|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: received unexpected reply message: {"error":null,"id":406,"result":[{"count":1},{}]}

Btw, I have seen issues with the ingress qdisc that are only fixed after removing the port and adding it back to the bridge - the restart didn't fix it. I don't know if both are related here, probably not, but worth keeping in mind anyway.
2021-07-16T12:14:11.993Z|00152|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: received unexpected reply message: {"error":null,"id":406,"result":[{"count":1},{}]}

This usually happens because a transaction is ended before a reply is received. That's also why it's logged as DBG instead of WARN or INFO. It shouldn't have much to do with this situation.

Actually, I wonder if there's a race during initialization w.r.t. offload feature support? Notice:

2021-07-16T12:10:23.111Z|00464|dpif_netlink(handler2)|DBG|failed to offload flow: Operation not supported: enp4s0f1_1

And we see a flow that gets generated like:

ufid:6f432f1b-bc86-4db9-982d-dae91160d3b5, recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(enp4s0f1_2),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=00:00:00:00:00:00/01:00:00:00:00:00,dst=fa:16:3e:6f:11:ff),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0x3,ttl=0/0,frag=no), packets:11599, bytes:603148, used:0.001s, dp:ovs, actions:userspace(pid=2201295342,controller(reason=1,dont_send=0,continuation=0,recirc_id=8,rule_cookie=0xff2250a2,controller_id=0,max_len=65535)),set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x28001}),flags(df|csum|key))),genev_sys_6081

Which requires a userspace()/slowpath action. Is there some kind of feature flag that isn't ready when OVS starts, and then can become ready later?
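As an aside, the "unsupported put action type: 2" message above lines up with OVS_ACTION_ATTR_USERSPACE (value 2 in the openvswitch uAPI enum), so a flow can be screened for actions that tc flower can never express with something as simple as this hypothetical helper:

```python
def slowpath_only(actions):
    """True when the action list contains a userspace()/controller upcall.

    netdev-offload-tc has no tc flower equivalent for sending packets to an
    upcall socket (OVS_ACTION_ATTR_USERSPACE, the "unsupported put action
    type: 2" in the debug log), so such a flow must stay in dp:ovs.
    """
    return 'userspace(' in actions

bad = ("userspace(pid=2201295342,controller(reason=1,dont_send=0,continuation=0,"
       "recirc_id=8,rule_cookie=0xff2250a2,controller_id=0,max_len=65535)),"
       "set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081)),genev_sys_6081")
good = "set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081)),genev_sys_6081"

assert slowpath_only(bad)       # can never be offloaded
assert not slowpath_only(good)  # offloadable, modulo other constraints
```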
(In reply to Aaron Conole from comment #2)
> Actually, I wonder if there's a race during initialization w.r.t. offload
> feature support? Notice:

Maybe, but..

> Which requires userspace()/slowpath action. Is there some kind of feature
> flag that isn't ready when OVS starts, and then can become ready later?

This specific flow cannot be offloaded because of that actions:userspace(..controller..). IOW, at least from this flow alone, I'm afraid we can't know much.
I don't see this behavior with VLAN. Flows are properly offloaded after the node reboots, without needing to restart ovs-vswitchd.
Seems this bz may not be needed anymore, unless OVS wants to have specific tests for this. https://bugzilla.redhat.com/show_bug.cgi?id=1946162#c39
Marcelo, this is a different issue and still open; it should be fixed.
Okay. I'm not seeing the difference then. Would you mind elaborating, please? AFAICT from the description of this commit https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=74fc4f828769 it matches the behaviour described in comment #0, as well as the fact that it doesn't affect vlans (comment #4). Despite the ovsdb log messages (comment #0), though.
Nevermind. I see it now :-)
(In reply to Aaron Conole from comment #2)
> Actually, I wonder if there's a race during initialization w.r.t. offload
> feature support? Notice:

As you can see in the above comments, that's a possibility :-) but ...

> And we see a flow that gets generated like:
>
> ufid:6f432f1b-bc86-4db9-982d-dae91160d3b5,
> recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(enp4s0f1_2),skb_mark(0/
> 0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=00:00:00:00:
> 00:00/01:00:00:00:00:00,dst=fa:16:3e:6f:11:ff),eth_type(0x0800),ipv4(src=0.0.
> 0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0x3,ttl=0/0,frag=no),
> packets:11599, bytes:603148, used:0.001s, dp:ovs,
> actions:userspace(pid=2201295342,controller(reason=1,dont_send=0,
> continuation=0,recirc_id=8,rule_cookie=0xff2250a2,controller_id=0,
> max_len=65535)),set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081,
> geneve({class=0x102,type=0x80,len=4,0x28001}),flags(df|csum|key))),
> genev_sys_6081
>
> Which requires userspace()/slowpath action. Is there some kind of feature
> flag that isn't ready when OVS starts, and then can become ready later?

Not that I'm aware of. Moreover, if a tc feature probe failed, I would expect ovs to simply use dp:ovs instead, and not generate a completely different flow like this one.

I wonder how sane this action is, considering that it is sending the packet to the controller AND outputting it on a port, and that "pid=2201295342" is likely not a valid pid. Could this come from stack/memory trash, maybe?
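The pid range at least can be sanity-checked mechanically. One caveat worth keeping in mind (if memory serves, the pid= field in a userspace() action is the Netlink port id of the upcall socket rather than a unix process id, so not matching any running process is not by itself conclusive), but the value is indeed far outside any possible process pid. A quick sketch with a hypothetical helper:

```python
import re

# The kernel caps pid_max at 2^22 (4194304) on 64-bit, so no process
# pid can ever exceed this.
PID_MAX_LIMIT = 4 * 1024 * 1024

def upcall_pid(actions):
    """Pull the pid= value out of a userspace() action string."""
    m = re.search(r'userspace\(pid=(\d+)', actions)
    return int(m.group(1)) if m else None

actions = ("userspace(pid=2201295342,controller(reason=1,dont_send=0,"
           "continuation=0,recirc_id=8,rule_cookie=0xff2250a2,"
           "controller_id=0,max_len=65535))")

pid = upcall_pid(actions)
assert pid == 2201295342
assert pid > PID_MAX_LIMIT  # cannot be a process pid
```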
(In reply to Marcelo Ricardo Leitner from comment #9)
> (In reply to Aaron Conole from comment #2)
> > Actually, I wonder if there's a race during initialization w.r.t. offload
> > feature support? Notice:
>
> As you can see in the above comments, that's a possibility :-) but ...
>
> > And we see a flow that gets generated like:
> >
> > ufid:6f432f1b-bc86-4db9-982d-dae91160d3b5,
> > recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(enp4s0f1_2),skb_mark(0/
> > 0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=00:00:00:00:
> > 00:00/01:00:00:00:00:00,dst=fa:16:3e:6f:11:ff),eth_type(0x0800),ipv4(src=0.0.
> > 0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0x3,ttl=0/0,frag=no),
> > packets:11599, bytes:603148, used:0.001s, dp:ovs,
> > actions:userspace(pid=2201295342,controller(reason=1,dont_send=0,
> > continuation=0,recirc_id=8,rule_cookie=0xff2250a2,controller_id=0,
> > max_len=65535)),set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081,
> > geneve({class=0x102,type=0x80,len=4,0x28001}),flags(df|csum|key))),
> > genev_sys_6081
> >
> > Which requires userspace()/slowpath action. Is there some kind of feature
> > flag that isn't ready when OVS starts, and then can become ready later?
>
> Not that I'm aware of. Moreover, if a tc feature probe failed, I would
> expect ovs to simply use dp:ovs instead, and not generate a completely
> different flow like this one.
>
> I wonder how sane this action is. Considering that it is sending the packet
> to the controller AND outputting it on a port, and: "pid=2201295342", which
> is likely not a valid pid. Can this be out of a stack/memory trash maybe?

Yes, the PID is not ovs-vswitchd's. I didn't find any such process on the node either. But ping was working. Interestingly, restarting OVS sorts everything out.
Is it possible to reproduce this without an openstack environment?
Yes, Aaron, it should be possible.
rephrasing: How can I reproduce this outside of the openstack environment?
(In reply to Aaron Conole from comment #13)
> rephrasing: How can I reproduce this outside of the openstack environment?

All the steps are the same as in "Steps to Reproduce", except that in my case the flows are programmed by ml2/ovn, so we need a controller. Otherwise, we would have to program the flows manually using tc.
Can you help me set up a reproducer environment? I'm concerned - I see no reason for the userspace() attribute to be generated (just as with bz#2002888 - so maybe there's an internal bug here).

Once you have an environment we can work with, can we add a debug RPM?

-Aaron
Sure Aaron, that would be faster. I should be able to share the environment in a day or two.
What's a better summary for the bz then?
Please target FDP 21.J
@mmichels @nusiddiq Can we please get a clone for OVN-2021, as this is also an issue for OSP 16.2
As the patch for the OVN part of this bug is being reviewed under bz-2022001, here I could only sanity-verify this bz. However, I have verified that tables 71 and 72 are populated as below:

[root@dell-per740-81 ~]# rpm -qa |grep -E 'ovn|openvswitch'
openvswitch2.15-2.15.0-53.el8fdp.x86_64
ovn-2021-central-21.09.1-23.el8fdp.x86_64
openvswitch-selinux-extra-policy-1.0-28.el8fdp.noarch
ovn-2021-host-21.09.1-23.el8fdp.x86_64
ovn-2021-21.09.1-23.el8fdp.x86_64

ovn-nbctl ls-add sw0
ovn-nbctl lsp-add sw0 sw0-p1
ovn-nbctl lsp-set-addresses sw0-p1 "50:54:00:00:00:03 10.0.0.3" unknown
ovn-nbctl lsp-add sw0 sw0-p2
ovn-nbctl lsp-set-addresses sw0-p2 "50:54:00:00:00:04 10.0.0.4"
ovn-nbctl lsp-set-port-security sw0-p2 "50:54:00:00:00:04 10.0.0.4"
ovn-nbctl lsp-add sw0 sw0-p3
ovn-nbctl lsp-set-addresses sw0-p3 unknown
ovn-nbctl ls-add sw1
ovn-nbctl lsp-add sw1 sw1-p1
ovn-nbctl lsp-set-addresses sw1-p1 "40:54:00:00:00:03 11.0.0.3" unknown
ovn-nbctl lsp-add sw1 sw1-p2
ovn-nbctl lsp-set-addresses sw1-p2 "40:54:00:00:00:04 11.0.0.4"
ovn-nbctl lsp-set-port-security sw1-p2 "40:54:00:00:00:04 11.0.0.4"
ovn-nbctl lr-add lr0
ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.0.0.1/24
ovn-nbctl lsp-add sw0 sw0-lr0
ovn-nbctl lsp-set-type sw0-lr0 router
ovn-nbctl lsp-set-addresses sw0-lr0 router
ovn-nbctl lsp-set-options sw0-lr0 router-port=lr0-sw0
ovn-nbctl lrp-add lr0 lr0-sw1 00:00:00:00:ff:02 11.0.0.1/24
ovn-nbctl lsp-add sw1 sw1-lr0
ovn-nbctl lsp-set-type sw1-lr0 router
ovn-nbctl lsp-set-addresses sw1-lr0 router
ovn-nbctl lsp-set-options sw1-lr0 router-port=lr0-sw1
ovn-nbctl --wait=hv sync

ip netns add vm1
ovs-vsctl add-port br-int vm1 -- set interface vm1 type=internal
ip link set vm1 netns vm1
ip netns exec vm1 ip link set vm1 address 50:54:00:00:00:03
ip netns exec vm1 ip addr add 10.0.0.3/24 dev vm1
ip netns exec vm1 ip link set vm1 up
ip netns exec vm1 ip link set lo up
ip netns exec vm1 ip route add default via 10.0.0.1
ovs-vsctl set Interface vm1 external_ids:iface-id=sw0-p1 options:tx_pcap=hv1/vm1-tx.pcap options:rxq_pcap=hv1/vm1-rx.pcap ofport-request=1

ip netns add vm2
ovs-vsctl add-port br-int vm2 -- set interface vm2 type=internal
ip link set vm2 netns vm2
ip netns exec vm2 ip link set vm2 address 40:54:00:00:00:04
ip netns exec vm2 ip addr add 11.0.0.4/24 dev vm2
ip netns exec vm2 ip link set vm2 up
ip netns exec vm2 ip link set lo up
ip netns exec vm2 ip route add default via 11.0.0.1
ovs-vsctl set Interface vm2 external_ids:iface-id=sw1-p2 options:tx_pcap=hv1/vm2-tx.pcap options:rxq_pcap=hv1/vm2-rx.pcap ofport-request=2

ip netns add vm3
ovs-vsctl add-port br-int vm3 -- set interface vm3 type=internal
ip link set vm3 netns vm3
#ip netns exec vm3 ip link set vm3 address 50:54:00:00:00:11
ip netns exec vm3 ip addr add 10.0.0.10/24 dev vm3
ip netns exec vm3 ip link set vm3 up
ip netns exec vm3 ip link set lo up
ip netns exec vm3 ip route add default via 10.0.0.1
ovs-vsctl set Interface vm3 external_ids:iface-id=sw0-p3 options:tx_pcap=hv1/vm3-tx.pcap options:rxq_pcap=hv1/vm3-rx.pcap ofport-request=3

#######################

[root@dell-per740-81 ~]# ovn-sbctl dump-flows sw0 |grep ls_in_lookup_fdb
# will result with ports sw0-p1, sw0-p3
  table=3 (ls_in_lookup_fdb ), priority=100 , match=(inport == "sw0-p1"), action=(reg0[11] = lookup_fdb(inport, eth.src); next;)
  table=3 (ls_in_lookup_fdb ), priority=100 , match=(inport == "sw0-p3"), action=(reg0[11] = lookup_fdb(inport, eth.src); next;)
  table=3 (ls_in_lookup_fdb ), priority=0 , match=(1), action=(next;)

[root@dell-per740-81 ~]# ovn-sbctl dump-flows sw0 |grep ls_in_put_fdb
  table=4 (ls_in_put_fdb ), priority=100 , match=(inport == "sw0-p1" && reg0[11] == 0), action=(put_fdb(inport, eth.src); next;)
  table=4 (ls_in_put_fdb ), priority=100 , match=(inport == "sw0-p3" && reg0[11] == 0), action=(put_fdb(inport, eth.src); next;)
  table=4 (ls_in_put_fdb ), priority=0 , match=(1), action=(next;)

[root@dell-per740-81 ~]# ovs-ofctl dump-flows br-int table=72
 cookie=0x40968c6b, duration=148.847s, table=72, n_packets=9, n_bytes=686, priority=100,reg14=0x1,metadata=0x1,dl_src=50:54:00:00:00:03 actions=load:0x1->NXM_NX_REG10[8]
 cookie=0xbc83c5e6, duration=148.463s, table=72, n_packets=9, n_bytes=690, priority=100,reg14=0x3,metadata=0x1,dl_src=72:9b:ea:77:23:36 actions=load:0x1->NXM_NX_REG10[8]

[root@dell-per740-81 ~]# ovs-ofctl dump-flows br-int table=71
 cookie=0x40968c6b, duration=165.166s, table=71, n_packets=0, n_bytes=0, priority=100,metadata=0x1,dl_dst=50:54:00:00:00:03 actions=load:0x1->NXM_NX_REG15[]
 cookie=0xbc83c5e6, duration=164.782s, table=71, n_packets=0, n_bytes=0, priority=100,metadata=0x1,dl_dst=72:9b:ea:77:23:36 actions=load:0x3->NXM_NX_REG15[]
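For scripted verification, the table 71/72 check can also be expressed as a small parser over the `ovs-ofctl dump-flows` output (a hypothetical helper, shown here against the table=72 lines captured above):

```python
import re

def fdb_macs(dump, table):
    """Collect the dl_src/dl_dst MACs matched by the flows of one table."""
    macs = set()
    for line in dump.splitlines():
        if 'table=%d,' % table not in line:
            continue
        m = re.search(r'dl_(?:src|dst)=([0-9a-f:]{17})', line)
        if m:
            macs.add(m.group(1))
    return macs

dump_72 = """
 cookie=0x40968c6b, duration=148.847s, table=72, n_packets=9, n_bytes=686, priority=100,reg14=0x1,metadata=0x1,dl_src=50:54:00:00:00:03 actions=load:0x1->NXM_NX_REG10[8]
 cookie=0xbc83c5e6, duration=148.463s, table=72, n_packets=9, n_bytes=690, priority=100,reg14=0x3,metadata=0x1,dl_src=72:9b:ea:77:23:36 actions=load:0x1->NXM_NX_REG10[8]
"""

# Both learned MACs from the fdb flows should be present.
assert fdb_macs(dump_72, 72) == {'50:54:00:00:00:03', '72:9b:ea:77:23:36'}
```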
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (ovn bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:0049
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days