Bug 1983111 - Existing FDB flows are not added when ovn-controller claims the first logical port of a logical switch
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: ovn2.13
Version: RHEL 8.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: urgent
Target Milestone: ---
Target Release: FDP 21.J
Assignee: Numan Siddique
QA Contact: Ehsan Elahi
URL:
Whiteboard:
Depends On:
Blocks: 1946162 2022001
 
Reported: 2021-07-16 14:31 UTC by Haresh Khandelwal
Modified: 2023-09-15 01:11 UTC
CC List: 15 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2022001
Environment:
Last Closed: 2022-01-10 16:49:01 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker FD-1429 (last updated 2021-08-24 14:57:57 UTC)
Red Hat Product Errata RHBA-2022:0049 (last updated 2022-01-10 16:49:12 UTC)

Description Haresh Khandelwal 2021-07-16 14:31:51 UTC
Description of problem:

enp4s0f1_1 is a representor port.

2021-07-16T12:10:23.111Z|00461|tunnel(handler2)|DBG|tunnel port port 2: ovn-C7-0 (geneve: ::->10.10.51.121, key=flow, legacy_l2, dp port=2, ttl=64, csum=true)
 receive from flow icmp,tun_id=0x1,tun_src=10.10.51.121,tun_dst=10.10.51.111,tun_ipv6_src=::,tun_ipv6_dst=::,tun_gbp_id=0,tun_gbp_flags=0,tun_tos=0,tun_ttl=64,tun_erspan_ver=0,gtpu_flags=0,gtpu_msgtype=0,tun_flags=csum|key,in_port=2,vlan_tci=0x0000,dl_src=fa:16:3e:41:d4:64,dl_dst=f8:f2:1e:03:bf:f6,nw_src=7.7.7.110,nw_dst=7.7.7.93,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0
2021-07-16T12:10:23.111Z|00462|tunnel(handler2)|DBG|tunnel port port 2: ovn-C7-0 (geneve: ::->10.10.51.121, key=flow, legacy_l2, dp port=2, ttl=64, csum=true)
 receive from flow icmp,tun_id=0x1,tun_src=10.10.51.121,tun_dst=10.10.51.111,tun_ipv6_src=::,tun_ipv6_dst=::,tun_gbp_id=0,tun_gbp_flags=0,tun_tos=0,tun_ttl=64,tun_erspan_ver=0,gtpu_flags=0,gtpu_msgtype=0,tun_flags=csum|key,in_port=2,vlan_tci=0x0000,dl_src=fa:16:3e:41:d4:64,dl_dst=f8:f2:1e:03:bf:f6,nw_src=7.7.7.110,nw_dst=7.7.7.93,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0
2021-07-16T12:10:23.111Z|00463|netdev_offload_tc(handler2)|DBG|unsupported put action type: 2
2021-07-16T12:10:23.111Z|00464|dpif_netlink(handler2)|DBG|failed to offload flow: Operation not supported: enp4s0f1_1  <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021-07-16T12:10:23.111Z|00465|dpif_netlink(handler2)|DBG|system@ovs-system: put[create] ufid:2bbd4978-1be8-4eea-b6ea-33bad51d0104 recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(3),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=f8:f2:1e:03:bf:f6/01:00:00:00:00:00,dst=fa:16:3e:41:d4:64),eth_type(0x0800),ipv4(src=7.7.7.93/0.0.0.0,dst=7.7.7.110/0.0.0.0,proto=1/0,tos=0/0x3,ttl=64/0,frag=no),icmp(type=0/0,code=0/0), actions:userspace(pid=2657823863,controller(reason=1,dont_send=0,continuation=0,recirc_id=12,rule_cookie=0x1ced6d86,controller_id=0,max_len=65535)),set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x28001}),flags(df|csum|key))),2
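For reference, the DBG lines above only appear with the relevant log modules raised; a minimal sketch of enabling them, using the module names seen in this log (assuming an otherwise default vlog configuration):

ovs-appctl vlog/set netdev_offload_tc:file:dbg   # tc offload decisions
ovs-appctl vlog/set dpif_netlink:file:dbg        # datapath flow put/offload results
ovs-appctl vlog/set tunnel:file:dbg              # tunnel receive/transmit matching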

While digging into the OVS debug logs:

Before ovs restart

2021-07-16T12:14:11.969Z|00150|bridge|INFO|bridge br-int: deleted interface enp4s0f1_1 on port 8
2021-07-16T12:14:11.969Z|00151|dpif_netlink|DBG|port_changed: dpif:system@ovs-system vport:enp4s0f1_1 cmd:2
2021-07-16T12:14:11.993Z|00152|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: received unexpected reply message: {"error":null,"id":406,"result":[{"count":1},{}]}
2021-07-16T12:14:11.993Z|00153|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: received unexpected reply message: {"error":null,"id":408,"result":[{"count":1},{}]}
2021-07-16T12:14:20.135Z|00154|netdev_offload_tc|INFO|added ingress qdisc to enp4s0f1_1
2021-07-16T12:14:20.135Z|00155|netdev_offload|INFO|enp4s0f1_1: Assigned flow API 'linux_tc'.
2021-07-16T12:14:20.135Z|00156|bridge|INFO|bridge br-int: added interface enp4s0f1_1 on port 9
2021-07-16T12:14:20.136Z|00157|netdev_linux|DBG|unknown qdisc "mq"
2021-07-16T12:14:20.136Z|00158|dpif_netlink|DBG|port_changed: dpif:system@ovs-system vport:enp4s0f1_1 cmd:1
2021-07-16T12:14:20.145Z|00159|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: received unexpected reply message: {"error":null,"id":410,"result":[{"count":1},{"count":1},{}]}
2021-07-16T12:14:21.974Z|00160|connmgr|INFO|br-int<->unix#0: 42 flow_mods in the 8 s starting 10 s ago (21 adds, 21 deletes)

After ovs restart

2021-07-16T12:37:30.404Z|00069|bridge|INFO|bridge br-int: deleted interface enp4s0f1_1 on port 9
2021-07-16T12:37:40.404Z|00070|connmgr|INFO|br-int<->unix#0: 21 flow_mods 10 s ago (4 adds, 17 deletes)
2021-07-16T12:37:40.852Z|00071|netdev_offload_tc|INFO|added ingress qdisc to enp4s0f1_1
2021-07-16T12:37:40.852Z|00072|netdev_offload|INFO|enp4s0f1_1: Assigned flow API 'linux_tc'.
2021-07-16T12:37:40.852Z|00073|bridge|INFO|bridge br-int: added interface enp4s0f1_1 on port 10

Flow programming:

[root@hareshcomputesriovoffload-0 heat-admin]# ovs-appctl dpctl/dump-flows -m
ufid:b0fa89c8-7562-474d-a2ec-dc25129ac24e, skb_priority(0/0),tunnel(tun_id=0x1,src=10.10.51.121,dst=10.10.51.159,ttl=0/0,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x30002/0x7fffffff}),flags(+key)),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(genev_sys_6081),packet_type(ns=0/0,id=0/0),eth(src=fa:16:3e:6f:11:ff,dst=00:00:00:00:00:00/01:00:00:00:00:00),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0,ttl=0/0,frag=no), packets:145153, bytes:6096412, used:0.061s, offloaded:yes, dp:tc, actions:enp4s0f1_2

ufid:6f432f1b-bc86-4db9-982d-dae91160d3b5, recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(enp4s0f1_2),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=00:00:00:00:00:00/01:00:00:00:00:00,dst=fa:16:3e:6f:11:ff),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0x3,ttl=0/0,frag=no), packets:11599, bytes:603148, used:0.001s, dp:ovs, actions:userspace(pid=2201295342,controller(reason=1,dont_send=0,continuation=0,recirc_id=8,rule_cookie=0xff2250a2,controller_id=0,max_len=65535)),set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x28001}),flags(df|csum|key))),genev_sys_6081

[root@hareshcomputesriovoffload-0 heat-admin]# systemctl restart ovs-vswitchd

[root@hareshcomputesriovoffload-0 heat-admin]# ovs-appctl dpctl/dump-flows -m
ufid:3485a30d-4a67-4993-8ea3-6527eec4b4d1, skb_priority(0/0),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(enp4s0f1_2),packet_type(ns=0/0,id=0/0),eth(src=f8:f2:1e:03:bf:f6,dst=fa:16:3e:6f:11:ff),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0x3,ttl=0/0,frag=no), packets:2048, bytes:249772, used:0.310s, offloaded:yes, dp:tc, actions:set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081,key6(bad key length 1, expected 0)(01)geneve({class=0x102,type=0x80,len=4,0x28001}),flags(key))),genev_sys_6081
ufid:5813ab17-44a9-491f-818f-9baca5aa5222, skb_priority(0/0),tunnel(tun_id=0x1,src=10.10.51.121,dst=10.10.51.159,ttl=0/0,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x30002/0x7fffffff}),flags(+key)),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),recirc_id(0),dp_hash(0/0),in_port(genev_sys_6081),packet_type(ns=0/0,id=0/0),eth(src=fa:16:3e:6f:11:ff,dst=00:00:00:00:00:00/01:00:00:00:00:00),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0,ttl=0/0,frag=no), packets:2050, bytes:86086, used:0.310s, offloaded:yes, dp:tc, actions:enp4s0f1_2
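A quicker way to compare the two states is to filter the dump by where the flows live; a small sketch, assuming the same host:

ovs-appctl dpctl/dump-flows type=offloaded   # flows offloaded via tc
ovs-appctl dpctl/dump-flows type=ovs         # flows still in the kernel datapath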

I don't see any difference in the qdisc of enp4s0f1_1.
Before ovs restart
[root@hareshcomputesriovoffload-0 /]# tc qdisc show dev enp4s0f1_1
qdisc mq 0: root 
qdisc fq_codel 0: parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64 
qdisc ingress ffff: parent ffff:fff1 ---------------- 

After ovs restart
[root@hareshcomputesriovoffload-0 /]# tc qdisc show dev enp4s0f1_1
qdisc mq 0: root 
qdisc fq_codel 0: parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64 
qdisc ingress ffff: parent ffff:fff1 ---------------- 
[root@hareshcomputesriovoffload-0 /]# 
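The ingress qdisc alone does not show the offloaded rules; those sit in the filter list attached to it. A quick way to compare before/after, using the same device name as above:

tc -s filter show dev enp4s0f1_1 ingress   # flower rules marked "in_hw" are offloaded to the NIC; "not_in_hw" means software tc only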

Version-Release number of selected component (if applicable):
RHEL: 8.4 
kernel: 4.18.0-305.el8.x86_64
Ovs: openvswitch2.15-2.15.0-24.el8fdp.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Configure OVS hw offload with geneve (a command sketch follows below)
2. Make sure flows in both the ingress and egress directions are offloaded
3. Reboot the machine
4. Send traffic again and check that flows are offloaded only in the ingress direction
5. Restart ovs-vswitchd
6. Now flows in both directions are offloaded.
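For step 1, a minimal sketch of the OVS side of the setup (the device names follow this report and would differ elsewhere):

ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch
ovs-vsctl get Open_vSwitch . other_config:hw-offload   # should print "true"
ethtool -k enp4s0f1 | grep hw-tc-offload               # the NIC must report "on"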

Actual results:
Broken offload feature

Expected results:
Flows should be offloaded in both directions

Additional info:
Restarting ovs-vswitchd fixes the issue, after which traffic in both directions is offloaded to the hardware.
I didn't find anything suspicious in the system messages or in dmesg.
The issue persists for all subsequently created VMs until ovs-vswitchd is restarted.

Comment 1 Marcelo Ricardo Leitner 2021-07-16 17:10:21 UTC
I briefly discussed this one with Haresh earlier today; it is closely tied to how OVS starts things up and needs input from the OVS team. I have no idea what this means:
2021-07-16T12:14:11.993Z|00152|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: received unexpected reply message: {"error":null,"id":406,"result":[{"count":1},{}]}


Btw, I have seen issues with the ingress qdisc that are only fixed after removing the port and adding it back to the bridge - the restart didn't fix it. I don't know if the two issues are related here, probably not, but it's worth keeping in mind anyway.

Comment 2 Aaron Conole 2021-07-21 17:27:13 UTC
2021-07-16T12:14:11.993Z|00152|ovsdb_cs|DBG|unix:/var/run/openvswitch/db.sock: received unexpected reply message: {"error":null,"id":406,"result":[{"count":1},{}]}

This usually happens because a transaction is ended before a reply is received.  That's also why it's logged
as DBG instead of WARN or INFO.  It shouldn't have much to do with this situation.

Actually, I wonder if there's a race during initialization w.r.t. offload feature support?  Notice:

2021-07-16T12:10:23.111Z|00464|dpif_netlink(handler2)|DBG|failed to offload flow: Operation not supported: enp4s0f1_1

And we see a flow that gets generated like:

ufid:6f432f1b-bc86-4db9-982d-dae91160d3b5, recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(enp4s0f1_2),skb_mark(0/0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=00:00:00:00:00:00/01:00:00:00:00:00,dst=fa:16:3e:6f:11:ff),eth_type(0x0800),ipv4(src=0.0.0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0x3,ttl=0/0,frag=no), packets:11599, bytes:603148, used:0.001s, dp:ovs, actions:userspace(pid=2201295342,controller(reason=1,dont_send=0,continuation=0,recirc_id=8,rule_cookie=0xff2250a2,controller_id=0,max_len=65535)),set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081,geneve({class=0x102,type=0x80,len=4,0x28001}),flags(df|csum|key))),genev_sys_6081

Which requires userspace()/slowpath action.  Is there some kind of feature flag that isn't ready when OVS starts, and then can become ready later?

Comment 3 Marcelo Ricardo Leitner 2021-07-21 18:08:36 UTC
(In reply to Aaron Conole from comment #2)
> Actually, I wonder if there's a race during initialization w.r.t. offload
> feature support?  Notice:

Maybe, but..

> Which requires userspace()/slowpath action.  Is there some kind of feature
> flag that isn't ready when OVS starts, and then can become ready later?

This specific flow cannot be offloaded because of that: actions:userspace(..controller..).
IOW, from this flow alone, I'm afraid we can't know much.

Comment 4 Haresh Khandelwal 2021-07-28 14:38:06 UTC
I don't see this behavior with vlan.
Flows are properly offloaded after the node reboots, with no need to restart ovs-vswitchd.

Comment 5 Marcelo Ricardo Leitner 2021-08-24 14:56:38 UTC
Seems this bz may not be needed anymore, unless OVS wants to have specific tests for this.
https://bugzilla.redhat.com/show_bug.cgi?id=1946162#c39

Comment 6 Haresh Khandelwal 2021-08-24 18:51:17 UTC
Marcelo, this is a different issue and is still open; it should be fixed.

Comment 7 Marcelo Ricardo Leitner 2021-08-24 21:45:26 UTC
Okay. I'm not seeing what the difference is, then. Would you mind elaborating, please?

AFAICT from this commit description
https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/commit/?id=74fc4f828769
it matches the behaviour described in comment #0 and the fact that it doesn't affect vlans (comment #4). It doesn't explain the ovsdb log messages (comment #0), though.

Comment 8 Marcelo Ricardo Leitner 2021-08-24 21:48:02 UTC
Nevermind. I see it now :-)

Comment 9 Marcelo Ricardo Leitner 2021-08-24 21:55:01 UTC
(In reply to Aaron Conole from comment #2)
> Actually, I wonder if there's a race during initialization w.r.t. offload
> feature support?  Notice:

As you can see in the above comments, that's a possibility :-)  but

...
> And we see a flow that gets generated like:
> 
> ufid:6f432f1b-bc86-4db9-982d-dae91160d3b5,
> recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(enp4s0f1_2),skb_mark(0/
> 0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=00:00:00:00:
> 00:00/01:00:00:00:00:00,dst=fa:16:3e:6f:11:ff),eth_type(0x0800),ipv4(src=0.0.
> 0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0x3,ttl=0/0,frag=no),
> packets:11599, bytes:603148, used:0.001s, dp:ovs,
> actions:userspace(pid=2201295342,controller(reason=1,dont_send=0,
> continuation=0,recirc_id=8,rule_cookie=0xff2250a2,controller_id=0,
> max_len=65535)),set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081,
> geneve({class=0x102,type=0x80,len=4,0x28001}),flags(df|csum|key))),
> genev_sys_6081
> 
> Which requires userspace()/slowpath action.  Is there some kind of feature
> flag that isn't ready when OVS starts, and then can become ready later?

Not that I'm aware of. Moreover, if a tc feature probe failed, I would expect ovs to simply use dp:ovs instead, and not generate a completely different flow like this one.

I wonder how sane this action is, considering that it sends the packet to the controller AND outputs it on a port, and that "pid=2201295342" is likely not a valid pid. Could this come from stack/memory trash, maybe?
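One way to probe the feature-probe theory would be to pin the offload policy and see whether the generated flow changes shape; a sketch, assuming the knob is relevant here:

ovs-vsctl set Open_vSwitch . other_config:tc-policy=skip_hw   # force software tc only
ovs-vsctl set Open_vSwitch . other_config:tc-policy=none      # default: try hardware, fall back to software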

Comment 10 Haresh Khandelwal 2021-08-25 08:22:49 UTC
(In reply to Marcelo Ricardo Leitner from comment #9)
> (In reply to Aaron Conole from comment #2)
> > Actually, I wonder if there's a race during initialization w.r.t. offload
> > feature support?  Notice:
> 
> As you can see in the above comments, that's a possibility :-)  but
> 
> ...
> > And we see a flow that gets generated like:
> > 
> > ufid:6f432f1b-bc86-4db9-982d-dae91160d3b5,
> > recirc_id(0),dp_hash(0/0),skb_priority(0/0),in_port(enp4s0f1_2),skb_mark(0/
> > 0),ct_state(0/0),ct_zone(0/0),ct_mark(0/0),ct_label(0/0),eth(src=00:00:00:00:
> > 00:00/01:00:00:00:00:00,dst=fa:16:3e:6f:11:ff),eth_type(0x0800),ipv4(src=0.0.
> > 0.0/0.0.0.0,dst=0.0.0.0/0.0.0.0,proto=0/0,tos=0/0x3,ttl=0/0,frag=no),
> > packets:11599, bytes:603148, used:0.001s, dp:ovs,
> > actions:userspace(pid=2201295342,controller(reason=1,dont_send=0,
> > continuation=0,recirc_id=8,rule_cookie=0xff2250a2,controller_id=0,
> > max_len=65535)),set(tunnel(tun_id=0x1,dst=10.10.51.121,ttl=64,tp_dst=6081,
> > geneve({class=0x102,type=0x80,len=4,0x28001}),flags(df|csum|key))),
> > genev_sys_6081
> > 
> > Which requires userspace()/slowpath action.  Is there some kind of feature
> > flag that isn't ready when OVS starts, and then can become ready later?
> 
> Not that I'm aware of. Moreover, if a tc feature probe failed, I would
> expect ovs to simply use dp:ovs instead, and not generate a completely
> different flow like this one.
> 
> I wonder how sane this action is. Considering that it is sending the packet
> to the controller AND outputting it on a port, and: "pid=2201295342", which
> is likely not a valid pid. Can this be out of a stack/memory trash maybe?
Yes, the PID is not ovs-vswitchd's. I didn't find any such process on the node either, but ping was working.
Interestingly, restarting OVS sorts everything out.

Comment 11 Aaron Conole 2021-08-30 13:33:24 UTC
Is it possible to reproduce this without an openstack environment?

Comment 12 Haresh Khandelwal 2021-08-30 18:32:56 UTC
Yes, Aaron, it should be possible.

Comment 13 Aaron Conole 2021-09-01 11:58:03 UTC
rephrasing: How can I reproduce this outside of the openstack environment?

Comment 14 Haresh Khandelwal 2021-09-06 06:08:37 UTC
(In reply to Aaron Conole from comment #13)
> rephrasing: How can I reproduce this outside of the openstack environment?

All steps are the same as in "Steps to Reproduce", except that in my case the flows are programmed by ml2/ovn, so we need a controller. Otherwise, we have to program the flows manually using tc.
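Without a controller, the encap side could in principle be programmed by hand with tc flower and the tunnel_key action; a rough sketch using the addresses from this report (device names hypothetical, geneve netdev in external mode):

ip link add geneve1 type geneve dstport 6081 external
ip link set geneve1 up
tc qdisc add dev enp4s0f1_1 ingress
tc filter add dev enp4s0f1_1 ingress protocol ip flower \
    action tunnel_key set id 1 src_ip 10.10.51.111 dst_ip 10.10.51.121 dst_port 6081 \
    action mirred egress redirect dev geneve1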

Comment 15 Aaron Conole 2021-09-14 19:49:27 UTC
Can you help me set up a reproducer environment?

I'm concerned: I see no reason for the userspace() attribute to be generated (just as with bz#2002888), so maybe there's an internal bug here.

Once you have an environment we can work with, can we add a debug RPM?

-Aaron

Comment 16 Haresh Khandelwal 2021-09-15 10:51:40 UTC
Sure Aaron, that would be faster. I should be able to share the environment in a day or two.

Comment 25 Marcelo Ricardo Leitner 2021-10-06 15:23:04 UTC
What's a better summary for the bz then?

Comment 26 Karrar Fida 2021-10-21 15:45:06 UTC
Please target FDP 21.J

Comment 27 Karrar Fida 2021-10-21 15:47:36 UTC
@mmichels @nusiddiq 
Can we please get a clone for OVN-2021, as this is also an issue for OSP 16.2

Comment 31 Ehsan Elahi 2021-12-15 12:07:45 UTC
Since the patch for the OVN part of this bug is under review in bz-2022001, I could only sanity-verify this bz here. I have verified that tables 71 and 72 are populated as shown below:

[root@dell-per740-81 ~]# rpm -qa |grep -E 'ovn|openvswitch'
openvswitch2.15-2.15.0-53.el8fdp.x86_64
ovn-2021-central-21.09.1-23.el8fdp.x86_64
openvswitch-selinux-extra-policy-1.0-28.el8fdp.noarch
ovn-2021-host-21.09.1-23.el8fdp.x86_64
ovn-2021-21.09.1-23.el8fdp.x86_64

ovn-nbctl ls-add sw0
ovn-nbctl lsp-add sw0 sw0-p1
ovn-nbctl lsp-set-addresses sw0-p1 "50:54:00:00:00:03 10.0.0.3" unknown
ovn-nbctl lsp-add sw0 sw0-p2
ovn-nbctl lsp-set-addresses sw0-p2 "50:54:00:00:00:04 10.0.0.4"
ovn-nbctl lsp-set-port-security sw0-p2 "50:54:00:00:00:04 10.0.0.4"
ovn-nbctl lsp-add sw0 sw0-p3
ovn-nbctl lsp-set-addresses sw0-p3 unknown

ovn-nbctl ls-add sw1
ovn-nbctl lsp-add sw1 sw1-p1
ovn-nbctl lsp-set-addresses sw1-p1 "40:54:00:00:00:03 11.0.0.3" unknown
ovn-nbctl lsp-add sw1 sw1-p2
ovn-nbctl lsp-set-addresses sw1-p2 "40:54:00:00:00:04 11.0.0.4"
ovn-nbctl lsp-set-port-security sw1-p2 "40:54:00:00:00:04 11.0.0.4"

ovn-nbctl lr-add lr0
ovn-nbctl lrp-add lr0 lr0-sw0 00:00:00:00:ff:01 10.0.0.1/24
ovn-nbctl lsp-add sw0 sw0-lr0
ovn-nbctl lsp-set-type sw0-lr0 router
ovn-nbctl lsp-set-addresses sw0-lr0 router
ovn-nbctl lsp-set-options sw0-lr0 router-port=lr0-sw0

ovn-nbctl lrp-add lr0 lr0-sw1 00:00:00:00:ff:02 11.0.0.1/24
ovn-nbctl lsp-add sw1 sw1-lr0
ovn-nbctl lsp-set-type sw1-lr0 router
ovn-nbctl lsp-set-addresses sw1-lr0 router
ovn-nbctl lsp-set-options sw1-lr0 router-port=lr0-sw1

ovn-nbctl --wait=hv sync
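At this point the logical topology can be sanity-checked before wiring up the namespaces (--wait=hv makes the command block until all hypervisors have applied the NB changes):

ovn-nbctl show
ovn-sbctl show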

ip netns add vm1
ovs-vsctl add-port br-int vm1 -- set interface vm1 type=internal
ip link set vm1 netns vm1
ip netns exec vm1 ip link set vm1 address 50:54:00:00:00:03
ip netns exec vm1 ip addr add 10.0.0.3/24 dev vm1
ip netns exec vm1 ip link set vm1 up
ip netns exec vm1 ip link set lo up
ip netns exec vm1 ip route add default via 10.0.0.1
ovs-vsctl set Interface vm1 external_ids:iface-id=sw0-p1 options:tx_pcap=hv1/vm1-tx.pcap options:rxq_pcap=hv1/vm1-rx.pcap ofport-request=1

ip netns add vm2
ovs-vsctl add-port br-int vm2 -- set interface vm2 type=internal
ip link set vm2 netns vm2
ip netns exec vm2 ip link set vm2 address 40:54:00:00:00:04
ip netns exec vm2 ip addr add 11.0.0.4/24 dev vm2
ip netns exec vm2 ip link set vm2 up
ip netns exec vm2 ip link set lo up
ip netns exec vm2 ip route add default via 11.0.0.1
ovs-vsctl set Interface vm2 external_ids:iface-id=sw1-p2 options:tx_pcap=hv1/vm2-tx.pcap options:rxq_pcap=hv1/vm2-rx.pcap ofport-request=2

ip netns add vm3
ovs-vsctl add-port br-int vm3 -- set interface vm3 type=internal
ip link set vm3 netns vm3
#ip netns exec vm3 ip link set vm3 address 50:54:00:00:00:11
ip netns exec vm3 ip addr add 10.0.0.10/24 dev vm3
ip netns exec vm3 ip link set vm3 up
ip netns exec vm3 ip link set lo up
ip netns exec vm3 ip route add default via 10.0.0.1
ovs-vsctl set Interface vm3 external_ids:iface-id=sw0-p3 options:tx_pcap=hv1/vm3-tx.pcap options:rxq_pcap=hv1/vm3-rx.pcap ofport-request=3
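To get the FDB (and hence tables 71/72) populated, traffic has to arrive from the ports with 'unknown' addresses; a minimal sketch using the addresses configured above:

ip netns exec vm1 ping -c 3 10.0.0.10   # sw0-p1 -> sw0-p3, learns 50:54:00:00:00:03
ip netns exec vm3 ping -c 3 10.0.0.3    # sw0-p3 -> sw0-p1, learns vm3's random MAC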

#######################
[root@dell-per740-81 ~]# ovn-sbctl dump-flows sw0 |grep ls_in_lookup_fdb  # lists sw0-p1 and sw0-p3, the ports with 'unknown' addresses
  table=3 (ls_in_lookup_fdb   ), priority=100  , match=(inport == "sw0-p1"), action=(reg0[11] = lookup_fdb(inport, eth.src); next;)
  table=3 (ls_in_lookup_fdb   ), priority=100  , match=(inport == "sw0-p3"), action=(reg0[11] = lookup_fdb(inport, eth.src); next;)
  table=3 (ls_in_lookup_fdb   ), priority=0    , match=(1), action=(next;)
[root@dell-per740-81 ~]# ovn-sbctl dump-flows sw0 |grep ls_in_put_fdb
  table=4 (ls_in_put_fdb      ), priority=100  , match=(inport == "sw0-p1" && reg0[11] == 0), action=(put_fdb(inport, eth.src); next;)
  table=4 (ls_in_put_fdb      ), priority=100  , match=(inport == "sw0-p3" && reg0[11] == 0), action=(put_fdb(inport, eth.src); next;)
  table=4 (ls_in_put_fdb      ), priority=0    , match=(1), action=(next;)


[root@dell-per740-81 ~]# ovs-ofctl dump-flows br-int table=72 
 cookie=0x40968c6b, duration=148.847s, table=72, n_packets=9, n_bytes=686, priority=100,reg14=0x1,metadata=0x1,dl_src=50:54:00:00:00:03 actions=load:0x1->NXM_NX_REG10[8]
 cookie=0xbc83c5e6, duration=148.463s, table=72, n_packets=9, n_bytes=690, priority=100,reg14=0x3,metadata=0x1,dl_src=72:9b:ea:77:23:36 actions=load:0x1->NXM_NX_REG10[8]
[root@dell-per740-81 ~]# ovs-ofctl dump-flows br-int table=71
 cookie=0x40968c6b, duration=165.166s, table=71, n_packets=0, n_bytes=0, priority=100,metadata=0x1,dl_dst=50:54:00:00:00:03 actions=load:0x1->NXM_NX_REG15[]
 cookie=0xbc83c5e6, duration=164.782s, table=71, n_packets=0, n_bytes=0, priority=100,metadata=0x1,dl_dst=72:9b:ea:77:23:36 actions=load:0x3->NXM_NX_REG15[]
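The MAC entries behind these flows live in the Southbound FDB table; a quick cross-check (assuming the ovn-2021 packages above):

ovn-sbctl list FDB

Each row should show the learned mac with its dp_key/port_key, matching the reg15 values loaded by the table=71 flows.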

Comment 33 errata-xmlrpc 2022-01-10 16:49:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (ovn bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0049

Comment 34 Red Hat Bugzilla 2023-09-15 01:11:34 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

