Description of problem: When enabling empty-lb-events in OVN after a load balancer has already been created and applied, the existing load balancers will not receive table ls_in_pre_lb lflows at priority 130:

[root@ovn-control-plane ~]# ovn-nbctl list nb_global
_uuid               : e06120f9-5eb7-499a-82ce-1a9ef15cbe66
connections         : [136e49e2-a10d-43d1-bc41-112500012868]
external_ids        : {}
hv_cfg              : 0
hv_cfg_timestamp    : 0
ipsec               : false
name                : ""
nb_cfg              : 0
nb_cfg_timestamp    : 0
options             : {e2e_timestamp="1611161497", mac_prefix="b6:7b:88", max_tunid="16711680", northd_probe_interval="5000", svc_monitor_mac="ca:b6:2e:39:53:e7"}
sb_cfg              : 0
sb_cfg_timestamp    : 0
ssl                 : []

[root@ovn-control-plane ~]# ovn-sbctl lflow-list | grep priority=130
[root@ovn-control-plane ~]# ovn-nbctl set nb_global . options:controller_event=true
[root@ovn-control-plane ~]# ovn-sbctl lflow-list | grep priority=130
[root@ovn-control-plane ~]# ovn-nbctl lb-add blah 9.9.9.9:4444 ""
[root@ovn-control-plane ~]# ovn-nbctl ls-lb-add ovn-worker blah
[root@ovn-control-plane ~]# ovn-sbctl lflow-list | grep priority=130
  table=4 (ls_in_pre_lb      ), priority=130  , match=(ip4.dst == 9.9.9.9 && tcp && tcp.dst == 4444), action=(trigger_event(event = "empty_lb_backends", meter = "", vip = "9.9.9.9:4444", protocol = "tcp", load_balancer = "b7bba31e-b61a-4621-98b4-fff317516d90");)

[root@ovn-control-plane ~]# ovn-nbctl ls-lb-list ovn-worker
UUID                                    LB                  PROTO      VIP                      IPs
5fd61dca-b4eb-41b9-9019-e2c5f484292c                        tcp        10.96.0.10:53            10.244.0.3:53,10.244.0.4:53
                                                            tcp        10.96.0.10:9153          10.244.0.3:9153,10.244.0.4:9153
                                                            tcp        10.96.0.1:443            172.18.0.2:6443
94484842-522e-43e0-9c30-21b3ba965bb2                        udp        10.96.0.10:53            10.244.0.3:53,10.244.0.4:53
b7bba31e-b61a-4621-98b4-fff317516d90    blah                tcp        9.9.9.9:4444

[root@ovn-control-plane ~]# rpm -qa | grep ovn
ovn-central-20.09.0-2.fc32.x86_64
ovn-20.09.0-2.fc32.x86_64
ovn-host-20.09.0-2.fc32.x86_64
Additionally, empty-lb-events does not work on a load balancer that uses --reject. This is because the reject action happens in ls_in_stateful, which comes later in the pipeline than the empty-lb-events stage (ls_in_pre_lb). We need to fix this as well. Since this Bugzilla is most likely going to be resolved by moving the controller_event=true configuration option onto the load balancer itself, that change should also resolve this issue.
Actually, after talking with Lorenzo, my last comment is invalid. If --reject is added to an LB, then that LB simply does not receive the priority-130 flow for empty-lb-events. However, this bug is still valid, and we should move the empty-lb-events configuration from a global config option to a per-LB config option.
I think the reason the empty-lb-events lflow was not showing up is that it requires the load balancer to have a VIP entry with no backends.
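That condition can be modeled with a small sketch (illustrative Python, not OVN source; the function name is mine): northd only emits the priority-130 ls_in_pre_lb trigger_event lflow for a VIP when the event feature is enabled and the VIP's backend list is empty.

```python
# Sketch of the condition described above (not the actual northd code):
# a VIP only gets the empty_lb_backends trigger_event lflow when the
# feature is enabled AND the VIP has no backends.
def emits_trigger_event(event_enabled, backends):
    return event_enabled and len(backends) == 0

# "blah" has VIP 9.9.9.9:4444 with an empty backend list -> lflow emitted
assert emits_trigger_event(True, [])
# 10.96.0.1:443 has a backend -> no trigger_event lflow
assert not emits_trigger_event(True, ["172.18.0.2:6443"])
# feature disabled -> no lflow even for an empty VIP
assert not emits_trigger_event(False, [])
```

This matches the transcript above: the priority-130 flow only appeared once a backend-less VIP existed while the feature was on.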
https://github.com/ovn-org/ovn/commit/21248950f5a547d2b4d75ddb0d3c305f4971fab3
According to comment 6, this issue is solved by adding a new option, --event, that can be set per LB. Tested on the following version:

# rpm -qa | grep ovn
ovn2.13-host-20.12.0-15.el8fdp.x86_64
ovn2.13-central-20.12.0-15.el8fdp.x86_64
ovn2.13-20.12.0-15.el8fdp.x86_64

# ovn-nbctl show
switch 83cdd8ca-d599-4d1e-92b3-144607d9c0cf (s2)
    port hv1_vm00_vnet1
        addresses: ["00:de:ad:01:00:01 172.16.102.11 2001:db8:102::11"]
    port s2_r1
        type: router
        addresses: ["00:de:ad:ff:01:02 172.16.102.1 2001:db8:102::1"]
        router-port: r1_s2
    port hv1_vm01_vnet1
        addresses: ["00:de:ad:01:01:01 172.16.102.12 2001:db8:102::12"]
switch e75e243e-0aec-409c-a51e-920335ea4f48 (s3)
    port hv0_vm01_vnet1
        addresses: ["00:de:ad:00:01:01 172.16.103.12 2001:db8:103::12"]
    port hv0_vm00_vnet1
        addresses: ["00:de:ad:00:00:01 172.16.103.11 2001:db8:103::11"]
    port s3_r1
        type: router
        addresses: ["00:de:ad:ff:01:03 172.16.103.1 2001:db8:103::1"]
        router-port: r1_s3
router d6f13f6d-567c-4389-b627-257be1ac487d (r1)
    port r1_s2
        mac: "00:de:ad:ff:01:02"
        networks: ["172.16.102.1/24", "2001:db8:102::1/64"]
    port r1_s3
        mac: "00:de:ad:ff:01:03"
        networks: ["172.16.103.1/24", "2001:db8:103::1/64"]

Configure two LBs, one with --event and one without:

ovn-nbctl --event lb-add lb0 30.0.0.100:80 ""
ovn-nbctl lb-add lb0 [3000::100]:80 ""
ovn-nbctl lb-add lb1 30.0.0.101:80 ""
ovn-nbctl lb-add lb1 [3000::101]:80 ""
ovn-nbctl ls-lb-add s2 lb1
ovn-nbctl ls-lb-add s2 lb0

curl 30.0.0.100:80 --connect-timeout 5
curl -g [3000::100]:80 --connect-timeout 5
curl 30.0.0.101:80 --connect-timeout 5
curl -g [3000::101]:80 --connect-timeout 5

Check Controller_Event: only 30.0.0.100/3000::100 was reported in an event; 30.0.0.101/3000::101 was not.
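The expected behavior in this test can be sketched as a quick model (illustrative Python, not OVN code; LB names, flags, and VIPs are taken from the transcript above): only VIPs belonging to an LB created with --event should surface as Controller_Event rows.

```python
# Model of the verification: a per-LB event flag decides which
# empty-backend VIPs are reported via Controller_Event.
lbs = {
    "lb0": {"event": True,  "vips": ["30.0.0.100:80", "[3000::100]:80"]},
    "lb1": {"event": False, "vips": ["30.0.0.101:80", "[3000::101]:80"]},
}

reported = [vip for lb in lbs.values() if lb["event"] for vip in lb["vips"]]

# lb0's VIPs are reported; lb1's are not -- matching the observed events.
assert reported == ["30.0.0.100:80", "[3000::100]:80"]
assert "30.0.0.101:80" not in reported
```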
# ovn-sbctl list controller_even
_uuid               : 8a5afd98-a1b6-47d6-9bd1-6648112df6b7
chassis             : 53b1aa00-35aa-490a-97bc-6413732b1c64
event_info          : {load_balancer="3cc3a438-da21-463c-8a04-4318d0967a87", protocol=tcp, vip="[3000::100]:80"}
event_type          : empty_lb_backends
seq_num             : 2

_uuid               : 3965535e-aaf5-4128-bdf6-8d6fdbda851a
chassis             : 53b1aa00-35aa-490a-97bc-6413732b1c64
event_info          : {load_balancer="3cc3a438-da21-463c-8a04-4318d0967a87", protocol=tcp, vip="30.0.0.100:80"}
event_type          : empty_lb_backends
seq_num             : 1
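For scripting against these records, the event_info column can be parsed out of ovn-sbctl's map syntax. A minimal sketch (my own helper, not part of any OVN tooling; it handles only the simple quoting seen above, not full OVSDB escaping):

```python
import re

def parse_event_info(s):
    """Parse a map like
    {load_balancer="...", protocol=tcp, vip="30.0.0.100:80"}
    into a dict. Sketch only; assumes no embedded quotes or commas
    inside values beyond the forms shown in the listing above."""
    body = s.strip().strip("{}")
    pairs = re.findall(r'(\w+)=("[^"]*"|[^,]+)(?:,\s*|$)', body)
    return {k: v.strip('"') for k, v in pairs}

info = parse_event_info(
    '{load_balancer="3cc3a438-da21-463c-8a04-4318d0967a87", '
    'protocol=tcp, vip="30.0.0.100:80"}'
)
assert info["vip"] == "30.0.0.100:80"
assert info["protocol"] == "tcp"
```

A CMS or monitoring agent watching Controller_Event would typically use an OVSDB client library instead; this is just for quick ad-hoc checks of ovn-sbctl output.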
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (ovn2.13 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:0836