Description of problem:

For reject ACLs, ovn-northd always creates 2 versions of each flow: one for IPv4 and one for IPv6. But in many cases these ACLs already match on one of these protocols. As a result, half of the logical flows related to reject ACLs inside the Southbound db are invalid, because they match on both IPv4 and IPv6. For example:

  (ct.est && ct_label.blocked == 0) && ip6 && (ip4.dst==172.30.88.132 && tcp && tcp.dst==80)
  -------------------------------------------  ---------------------------------------------
            added by ovn-northd                          provided by ovn-k8s

In a test performed in https://bugzilla.redhat.com/show_bug.cgi?id=1859924 there are 800K of such invalid logical flows, which take ~400MB of space, i.e. almost half of the total size of the Southbound db.

Possible solutions:

1. Parse the expression inside ovn-northd and avoid adding a logical flow if the expression is inconsistent.
   --> Heavy; extra computational cost on northd, which is already pretty busy.

2. Quick and dirty: a simple string search for conflicting protocols if the match doesn't contain "!" or "||" symbols.
   --> Not a beautiful solution; will not catch all cases.

3. Add a new column, e.g. 'reject_protocols', to the NB db where the user could add hints about which protocols the reject ACL matches on, so northd would generate flows only for those protocols. (It might make sense to add an 'options' column to the ACL table instead of 'reject_protocols' and configure 'options:reject_protocols', so 'options' could be reused in the future for other hints or configurations. Worth discussing.)
   --> Simple enough; should not be hard to use from ovn-k8s.

IIRC, during previous discussions we concluded on the 3rd solution.
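For illustration only, the "quick and dirty" heuristic from solution 2 could be sketched as below. This is not ovn-northd code and the function name is hypothetical; it just shows why the "!"/"||" bail-out is needed and how a plain token scan would detect a match that already pins one IP family:

```python
def pins_family(acl_match: str):
    """Return 'ip4' or 'ip6' if the ACL match plainly pins that IP family,
    else None. Hypothetical sketch of solution 2 from the bug report.

    Negation ("!") or disjunction ("||") makes a naive substring test
    unreliable (e.g. "!ip4" or "ip4 || ip6"), so we give up and keep
    generating both flow variants in those cases.
    """
    if "!" in acl_match or "||" in acl_match:
        return None
    # Split on whitespace, treating parentheses as separators.
    tokens = acl_match.replace("(", " ").replace(")", " ").split()
    for fam in ("ip4", "ip6"):
        # Matches a bare "ip4"/"ip6" token or a field like "ip4.dst==...".
        if any(t == fam or t.startswith(fam + ".") for t in tokens):
            return fam
    return None
```

For the example match above, `pins_family("ip4.dst==172.30.88.132 && tcp && tcp.dst==80")` returns `'ip4'`, so the IPv6 variant of the reject flow could be skipped; a protocol-agnostic match like `"tcp && tcp.dst == 2345"` returns `None` and would still need both variants.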
To address this issue, a different approach was taken: a new OVN action called "reject" was added, which can handle both IPv4 and IPv6 packets. With this, only one logical flow is added for each reject ACL. The patches are merged upstream.
I can reproduce this bz on the version:

# rpm -qa | grep ovn
ovn2.13-host-20.09.0-1.el8fdp.x86_64
ovn2.13-central-20.09.0-1.el8fdp.x86_64
ovn2.13-20.09.0-1.el8fdp.x86_64

Configure an ACL reject rule:

# ovn-nbctl acl-add ls to-lport 900 "tcp && tcp.dst == 2345" reject

Then check the lflows:

# ovn-sbctl lflow-list | grep "tcp && tcp.dst == 2345"
table=5 (ls_out_acl ), priority=1910 , match=(ip4 && tcp && (tcp && tcp.dst == 2345)), action=(reg0 = 0; eth.dst <-> eth.src; ip4.dst <-> ip4.src; tcp_reset { outport <-> inport; next(pipeline=ingress,table=20); };)
table=5 (ls_out_acl ), priority=1910 , match=(ip6 && tcp && (tcp && tcp.dst == 2345)), action=(reg0 = 0; eth.dst <-> eth.src; ip6.dst <-> ip6.src; tcp_reset { outport <-> inport; next(pipeline=ingress,table=20); };)
table=5 (ls_out_acl ), priority=1900 , match=(ip4 && (tcp && tcp.dst == 2345)), action=(reg0 = 0; icmp4 { eth.dst <-> eth.src; ip4.dst <-> ip4.src; outport <-> inport; next(pipeline=ingress,table=20); };)
table=5 (ls_out_acl ), priority=1900 , match=(ip6 && (tcp && tcp.dst == 2345)), action=(reg0 = 0; icmp6 { eth.dst <-> eth.src; ip6.dst <-> ip6.src; outport <-> inport; next(pipeline=ingress,table=20); };)

There are 4 flows for this rule.

Verified on the version:

# rpm -qa | grep ovn
ovn2.13-host-20.09.0-10.el8fdp.x86_64
ovn2.13-central-20.09.0-10.el8fdp.x86_64
ovn2.13-20.09.0-10.el8fdp.x86_64

# ovn-sbctl lflow-list | grep "tcp && tcp.dst == 2345"
table=5 (ls_out_acl ), priority=1900 , match=(tcp && tcp.dst == 2345), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ outport <-> inport; next(pipeline=ingress,table=20); };)

There is only 1 flow for this rule.
Verified on rhel7:

# rpm -qa | grep ovn
ovn2.13-20.09.0-10.el7fdp.x86_64
ovn2.13-central-20.09.0-10.el7fdp.x86_64
ovn2.13-host-20.09.0-10.el7fdp.x86_64

# ovn-nbctl acl-add ls to-lport 800 "ip6 && udp && udp.dst == 2349" reject
# ovn-sbctl lflow-list | grep 'udp && udp.dst == 2349'
table=5 (ls_out_acl ), priority=1800 , match=(ip6 && udp && udp.dst == 2349), action=(reg0 = 0; reject { /* eth.dst <-> eth.src; ip.dst <-> ip.src; is implicit. */ outport <-> inport; next(pipeline=ingress,table=20); };)

There is only 1 flow.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (ovn2.13 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:5308