Bug 2177173 - Router load balancers with no backends and event=false,reject=false should silently drop traffic.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: ovn23.03
Version: FDP 23.A
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ---
Assignee: Ales Musil
QA Contact: ying xu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-03-10 09:24 UTC by Dumitru Ceara
Modified: 2023-07-06 20:05 UTC
CC List: 7 users

Fixed In Version: ovn23.03-23.03.0-16.el8fdp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-07-06 20:05:24 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Issue Tracker FD-2726 (last updated 2023-03-10 09:25:14 UTC)
Red Hat Product Errata RHBA-2023:3991 (last updated 2023-07-06 20:05:36 UTC)

Description Dumitru Ceara 2023-03-10 09:24:48 UTC
Description of problem:

Consider the following load balancer:
$ ovn-nbctl list load_balancer
_uuid               : 5f11f997-9937-4677-90d4-b3b0ff904724
external_ids        : {}
health_check        : []
ip_port_mappings    : {}
name                : lb
options             : {event="false", reject="false"}
protocol            : tcp
selection_fields    : []
vips                : {"42.42.42.42:4242"=""}

Applied on a gw router:
$ ovn-nbctl lr-lb-list lr0
UUID                                    LB                  PROTO      VIP                 IPs
5f11f997-9937-4677-90d4-b3b0ff904724    lb                  tcp        42.42.42.42:4242

$ ovn-nbctl get logical_router lr0 options:chassis
chassis-1

This generates the following SB flow:
$ ovn-sbctl --uuid lflow-list lr0 | grep ct_lb
  uuid=0xdda19791, table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && !ct.rel && ip4 && reg0 == 42.42.42.42 && tcp && reg9[16..31] == 4242), action=(ct_lb_mark(backends=);)

ovn-controller parses this without failure (which is arguably correct) and generates the following OpenFlow:

$ ovs-ofctl dump-flows br-int | grep dda19791
 cookie=0xdda19791, duration=511.785s, table=15, n_packets=0, n_bytes=0, idle_age=511, priority=120,ct_state=+new-rel+trk,tcp,reg0=0x2a2a2a2a,reg9=0x10920000/0xffff0000,metadata=0x3 actions=ct(table=16,zone=NXM_NX_REG11[0..15],nat)

This effectively just sends the original packet through conntrack and lets it advance unchanged.
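For reference, the register values in the dumped OpenFlow rule can be derived directly from the VIP; a quick illustrative check (not OVN code) showing that reg0 carries the IPv4 address and reg9[16..31] carries the L4 destination port under a 0xffff0000 mask:

```python
# Illustrative check: encode VIP 42.42.42.42:4242 the way the dumped
# flow's match fields do.
vip_ip = "42.42.42.42"
vip_port = 4242

# reg0 is the IPv4 address as a big-endian 32-bit integer.
reg0 = int.from_bytes(bytes(int(o) for o in vip_ip.split(".")), "big")
# reg9[16..31] is the port shifted into the upper 16 bits.
reg9_value = vip_port << 16
reg9_mask = 0xffff << 16

print(hex(reg0))        # 0x2a2a2a2a
print(hex(reg9_value))  # 0x10920000
print(hex(reg9_mask))   # 0xffff0000
```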

Instead, ovn-northd should generate a drop flow when event=false and reject=false, e.g.:
table=7 (lr_in_dnat         ), priority=120  , match=(ct.new && !ct.rel && ip4 && reg0 == 42.42.42.42 && tcp && reg9[16..31] == 4242), action=(drop;)

For the event=true or reject=true cases we already take action when no backends are available: we either generate a controller event or reject the packet.
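The requested behavior can be summarized as a small decision table. A hypothetical Python sketch of that selection (illustrative only, not the actual ovn-northd C code; function and action names are assumptions):

```python
def lb_vip_action(backends, options):
    """Sketch of choosing the lr_in_dnat action for one LB VIP.

    backends: list of "ip:port" strings (may be empty)
    options:  the load balancer's options map, e.g. {"reject": "false"}
    """
    reject = options.get("reject") == "true"
    event = options.get("event") == "true"
    if backends:
        # Normal case: DNAT to one of the backends via conntrack.
        return "ct_lb_mark(backends=%s);" % ",".join(backends)
    if reject:
        # No backends, reject requested: send an ICMP/TCP RST reply.
        return "reject { /* reply to sender */ };"
    if event:
        # No backends, event requested: raise a controller event.
        return "trigger_event(/* empty backends */);"
    # Proposed fix in this bug: with no backends and both options
    # false, drop the traffic instead of passing it through unchanged.
    return "drop;"
```

With the load balancer from the description (no backends, event=false, reject=false) this yields `drop;`, matching the flow requested above.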

Version-Release number of selected component (if applicable):
upstream v23.03.0 and downstream ovn23.03-23.03.0-6.el8fdp or older

How reproducible:
Always.

Steps to Reproduce:
Apply this patch to the upstream ovn repo (change the sandbox test script) and start a sandbox:
diff --git a/tutorial/ovn-setup.sh b/tutorial/ovn-setup.sh
index 969b2330f6..059d51d2c9 100755
--- a/tutorial/ovn-setup.sh
+++ b/tutorial/ovn-setup.sh
@@ -23,6 +23,10 @@ ovn-nbctl lsp-set-type lrp1-attachment router
 ovn-nbctl lsp-set-addresses lrp1-attachment 00:00:00:00:ff:02
 ovn-nbctl lsp-set-options lrp1-attachment router-port=lrp1

+ovn-nbctl create load_balancer name=lb options:reject=false options:event=false vips:\"42.42.42.42:4242\"=\"\" protocol=tcp                                                                                                                 
+ovn-nbctl lr-lb-add lr0 lb
+ovn-nbctl set logical_router lr0 options:chassis=chassis-1
+
 ovs-vsctl add-port br-int p1 -- \
     set Interface p1 external_ids:iface-id=sw0-port1
 ovs-vsctl add-port br-int p2 -- \
---

$ make sandbox
[.. then]

$ ./ovn-setup.sh

Comment 2 OVN Bot 2023-03-28 14:56:53 UTC
ovn23.03 fast-datapath-rhel-9 clone created at https://bugzilla.redhat.com/show_bug.cgi?id=2182403

Comment 6 ying xu 2023-04-23 06:05:01 UTC
# ovn-nbctl list load_balancer
_uuid               : d1302b1c-98d4-4638-a8bd-9fffd96acd71
external_ids        : {}
health_check        : []
ip_port_mappings    : {}
name                : lb2
options             : {reject="true"}
protocol            : udp
selection_fields    : []
vips                : {"172.16.103.10:8000"="", "172.16.103.20:8000"="172.16.102.12:80,172.16.103.12:80", "[2001:db8:103::10]:8000"="", "[2001:db8:103::20]:8000"="[2001:db8:102::12]:80,[2001:db8:103::12]:80"}
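The vips column above mixes IPv4 `ip:port` keys with bracketed IPv6 `[addr]:port` keys. A small illustrative parser for that key format (a helper written for this note, not part of OVN):

```python
def parse_vip(vip):
    """Split an OVN load-balancer VIP key into (address, port).

    Handles both "172.16.103.10:8000" and "[2001:db8:103::10]:8000".
    """
    if vip.startswith("["):
        # IPv6: strip the leading "[" and split on the "]:" delimiter.
        addr, _, port = vip[1:].partition("]:")
    else:
        # IPv4: split on the last ":" so the address keeps its dots.
        addr, _, port = vip.rpartition(":")
    return addr, int(port)

print(parse_vip("172.16.103.10:8000"))       # ('172.16.103.10', 8000)
print(parse_vip("[2001:db8:103::10]:8000"))  # ('2001:db8:103::10', 8000)
```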


Test on the old version:
# rpm -qa|grep ovn
ovn23.03-host-23.03.0-4.el8fdp.x86_64
ovn23.03-central-23.03.0-4.el8fdp.x86_64
ovn23.03-23.03.0-4.el8fdp.x86_64

When reject=true is set:
# ovn-sbctl dump-flows s3 | grep "ls_in_lb "|grep 172.16.103.10
  table=12(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.103.10 && udp.dst == 8000), action=(reg0 = 0; reject { outport <-> inport; next(pipeline=egress,table=5);};)

Then set reject=false and event=false:
ovn-nbctl set load_balancer $uuid options:reject=false
ovn-nbctl set load_balancer $uuid options:event=false

# ovn-sbctl dump-flows s3 | grep "ls_in_lb "|grep 172.16.103.10
  table=12(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.103.10 && udp.dst == 8000), action=(reg0[1] = 0; ct_lb_mark(backends=);)


Tested on the new version:
After setting reject=false and event=false:
ovn-nbctl set load_balancer $uuid options:reject=false
ovn-nbctl set load_balancer $uuid options:event=false

# ovn-sbctl dump-flows s3 | grep "ls_in_lb "|grep 172.16.103.10
  table=12(ls_in_lb           ), priority=120  , match=(ct.new && ip4.dst == 172.16.103.10 && udp.dst == 8000), action=(drop;)  -----------drop

# ip netns exec vm10 ncat --udp 172.16.103.10 8000 <<< h ---------------- send a packet to the VIP: no reply, only drop.
02:03:38.811858 Out 00:de:ad:01:00:01 ethertype IPv4 (0x0800), length 46: 172.16.102.11.48982 > 172.16.103.10.8000: UDP, length 2


Set to VERIFIED.

Comment 10 errata-xmlrpc 2023-07-06 20:05:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (ovn23.03 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:3991

