Bug 1878248 - Logical router policy with pkt_mark is int32 resulting in overflow when a uint32 is used
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: ovn2.13
Version: RHEL 8.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Numan Siddique
QA Contact: Jianlin Shi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-11 17:27 UTC by Alexander Constantinescu
Modified: 2020-10-27 09:49 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 09:49:14 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:4356 0 None None None 2020-10-27 09:49:35 UTC

Description Alexander Constantinescu 2020-09-11 17:27:01 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Alexander Constantinescu 2020-09-11 17:33:59 UTC
Excuse me, I mistakenly pressed Enter while writing:

Description of problem:

When specifying a logical router policy with the optional pkt_mark field, the value overflows when a uint32 is used. The type of this field appears to be int32 in OVN, even though the skb mark field supports uint32. The goal is to encode an IPv4 address as an integer, which is why this matters for CMSes such as ovn-kubernetes.
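The overflow boundary is easy to see by packing an IPv4 address into an unsigned 32-bit integer, which is how a CMS would derive a pkt_mark value. A minimal shell sketch (illustrative only, not OVN code) showing that addresses in the upper half of the space exceed INT32_MAX:

```shell
# Pack an IPv4 address into its unsigned 32-bit integer form.
ip=255.255.255.255
IFS=. read -r a b c d <<< "$ip"
mark=$(( (a << 24) | (b << 16) | (c << 8) | d ))
echo "$mark"   # 4294967295

# Any address >= 128.0.0.0 yields a mark above INT32_MAX (2147483647),
# so an int32-typed pkt_mark field cannot represent it.
echo $(( mark > 2147483647 ))   # 1
```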

Assigning directly to Numan since he is already aware of the issue/context.

Version-Release number of selected component (if applicable):

ovn-20.06.2-3.fc31.x86_64

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 6 Jianlin Shi 2020-10-09 08:31:25 UTC
Tested with the following script:

systemctl start openvswitch                                                
                             
systemctl start ovn-northd                                          
ovn-nbctl set-connection ptcp:6641                   
ovn-sbctl set-connection ptcp:6642                                                 
ovs-vsctl set open . external_ids:system-id=hv1 external_ids:ovn-remote=tcp:1.1.23.25:6642 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-ip=1.1.23.25
systemctl restart ovn-controller   
ip netns add server0                    
ip link add veth0_s0 netns server0 type veth peer name veth0_s0_p
ip netns exec server0 ip link set lo up                                            
ip netns exec server0 ip link set veth0_s0 up                              
ip netns exec server0 ip link set veth0_s0 address 00:00:00:01:01:02
ip netns exec server0 ip addr add 192.168.1.1/24 dev veth0_s0
ip netns exec server0 ip -6 addr add 3001::1/64 dev veth0_s0
ip netns exec server0 ip route add default via 192.168.1.254 dev veth0_s0
ip netns exec server0 ip -6 route add default via 3001::a dev veth0_s0
ovs-vsctl add-port br-int veth0_s0_p         
ip link set veth0_s0_p up                                           
ovs-vsctl set interface veth0_s0_p external_ids:iface-id=ls1p1
ovn-nbctl ls-add ls1                                        
                                                                         
ovn-nbctl lsp-add ls1 ls1p1                                           
ovn-nbctl lsp-set-addresses ls1p1 00:00:00:01:01:02        
ovn-nbctl lr-add lr1                                   
ovn-nbctl lrp-add lr1 lr1-ls1 00:00:00:00:00:01 192.168.1.254/24 3001::a/64      
ovn-nbctl lsp-add ls1 ls1-lr1                                                    
ovn-nbctl lsp-set-type ls1-lr1 router                               
ovn-nbctl lsp-set-options ls1-lr1 router-port=lr1-ls1                                                                                                                                        
ovn-nbctl lsp-set-addresses ls1-lr1 '00:00:00:00:00:01 192.168.1.254/24 3001::a/64'
ovn-nbctl ls-add ls2                       
ovn-nbctl lsp-add ls2 ls2-lr1                            
ovn-nbctl lsp-set-type ls2-lr1 router                                
ovn-nbctl lsp-set-options ls2-lr1 router-port=lr1-ls2    
ovn-nbctl lsp-set-addresses ls2-lr1 '00:00:00:00:00:02 192.168.0.254/24 3000::a/64'
ovn-nbctl lrp-add lr1 lr1-ls2 00:00:00:00:00:02 192.168.0.254/24 3000::a/64
ovn-nbctl lsp-add ls2 ls2p1
ovn-nbctl lsp-set-addresses ls2p1 00:00:00:02:01:01
ip netns add server1
ip link add veth0_s1 netns server1 type veth peer name veth0_s1_p
ip netns exec server1 ip link set lo up
ip netns exec server1 ip link set veth0_s1 up
ip netns exec server1 ip link set veth0_s1 address 00:00:00:02:01:01
ip netns exec server1 ip addr add 192.168.0.1/24 dev veth0_s1
ip netns exec server1 ip -6 addr add 3000::1/64 dev veth0_s1
ip netns exec server1 ip route add default via 192.168.0.254 dev veth0_s1
ip netns exec server1 ip -6 route add default via 3000::a dev veth0_s1
ovs-vsctl add-port br-int veth0_s1_p
ip link set veth0_s1_p up
ovs-vsctl set interface veth0_s1_p external_ids:iface-id=ls2p1
ovs-vsctl add-br br-phys
ovs-vsctl set open . external-ids:ovn-bridge-mappings=public:br-phys
ovn-nbctl ls-add public
ovn-nbctl lrp-add lr1 lr1_p 00:00:20:20:12:13 172.168.0.100/24 1111::100/64
ovn-nbctl lsp-add public p_lr1
ovn-nbctl lsp-set-type p_lr1 router
ovn-nbctl lsp-set-addresses p_lr1 router
ovn-nbctl lsp-set-options p_lr1 router-port=lr1_p
ovn-nbctl lsp-add public ln_public
ovn-nbctl lsp-set-type ln_public localnet
ovn-nbctl lsp-set-addresses ln_public unknown
ovn-nbctl lsp-set-options ln_public network_name=public
ip netns add ext
ip link add veth0_e netns ext type veth peer name veth0_e_p
ovs-vsctl add-port br-phys veth0_e_p
ip link set veth0_e_p up
ip netns exec ext ip link set veth0_e up
ip netns exec ext ip addr add 172.168.0.1/24 dev veth0_e
ip netns exec ext ip -6 addr add 1111::1/64 dev veth0_e
ip netns exec ext ip route add default via 172.168.0.100 dev veth0_e
ip netns exec ext ip -6 route add default via 1111::100 dev veth0_e
ovn-nbctl lr-policy-add lr1 2000 ip4.src==192.168.0.1 allow
ovn-nbctl lr-policy-add lr1 1000 ip6.src==3001::1 allow
pol1=$(ovn-nbctl --bare --columns _uuid find logical_router_policy priority=2000)
pol2=$(ovn-nbctl --bare --columns _uuid find logical_router_policy priority=1000)
ovn-nbctl set logical_router_policy $pol1 options:pkt_mark=100
ovs-ofctl --protocols=OpenFlow13 add-flow br-phys 'table=0, priority=100, pkt_mark=0x64 actions=drop'
ovn-nbctl --wait=hv sync
ip netns exec server1 ping 172.168.0.1 -c 1
ovs-ofctl -O openflow15 dump-flows br-int | grep pkt_mark
ovn-nbctl set logical_router_policy $pol1 options:pkt_mark=4294967295
ovs-ofctl -O openflow15 dump-flows br-int | grep pkt_mark
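The two mark values used above map directly to the hex values expected in the dumped flows. A quick sanity check with plain shell arithmetic (independent of OVN):

```shell
# 100 should appear as 0x64 and 4294967295 as 0xffffffff in the flow dump.
printf 'pkt_mark=0x%x\n' 100          # pkt_mark=0x64
printf 'pkt_mark=0x%x\n' 4294967295   # pkt_mark=0xffffffff

# 4294967295 is the maximum uint32; stored as int32 it would overflow,
# which is why the fixed version must treat pkt_mark as unsigned.
echo $(( 4294967295 > 2147483647 ))   # 1
```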

Reproduced on ovn-20.06.2-11:

+ ip netns exec server1 ping 172.168.0.1 -c 1                                                                       
PING 172.168.0.1 (172.168.0.1) 56(84) bytes of data.                                                                                                                               
64 bytes from 172.168.0.1: icmp_seq=1 ttl=63 time=3.61 ms                                             
                                                                              
--- 172.168.0.1 ping statistics ---                                           
1 packets transmitted, 1 received, 0% packet loss, time 0ms                                                                                                                               
rtt min/avg/max/mdev = 3.608/3.608/3.608/0.000 ms                                                                               
+ ovs-ofctl -O openflow15 dump-flows br-int                                                            
+ grep pkt_mark                                                                                                                                                                     
 cookie=0xaf836c8d, duration=0.054s, table=20, n_packets=1, n_bytes=98, idle_age=0, priority=2000,ip,metadata=0x2,nw_src=192.168.0.1 actions=set_field:0x64->pkt_mark,resubmit(,21)
+ ovn-nbctl set logical_router_policy 361ab8ee-2fb8-4298-80ef-11f7aab68cc8 options:pkt_mark=4294967295                                                        
+ ovs-ofctl -O openflow15 dump-flows br-int                                                      
+ grep pkt_mark

<=== no pkt_mark flow in the br-int flow table when pkt_mark is set to 4294967295

Verified on ovn20.09.0-1:

[root@wsfd-advnetlab18 bz1878248]# rpm -qa | grep -E "openvswitch|ovn"
openvswitch-selinux-extra-policy-1.0-23.el8fdp.noarch
ovn2.13-20.09.0-1.el8fdp.x86_64
openvswitch2.13-2.13.0-60.el8fdp.x86_64
ovn2.13-central-20.09.0-1.el8fdp.x86_64
ovn2.13-host-20.09.0-1.el8fdp.x86_64

+ ip netns exec server1 ping 172.168.0.1 -c 1                    
PING 172.168.0.1 (172.168.0.1) 56(84) bytes of data.                               
64 bytes from 172.168.0.1: icmp_seq=1 ttl=63 time=4.05 ms                  
                                                                    
--- 172.168.0.1 ping statistics ---                          
1 packets transmitted, 1 received, 0% packet loss, time 0ms 
rtt min/avg/max/mdev = 4.047/4.047/4.047/0.000 ms                        
+ ovs-ofctl -O openflow15 dump-flows br-int                           
+ grep pkt_mark                              
 cookie=0x2783d472, duration=0.062s, table=20, n_packets=1, n_bytes=98, idle_age=0, priority=2000,ip,metadata=0x2,nw_src=192.168.0.1 actions=set_field:0x64->pkt_mark,resubmit(,21)
+ ovn-nbctl set logical_router_policy 91084a4e-a418-4b1f-af4e-b286b13e05e6 options:pkt_mark=4294967295
+ ovs-ofctl -O openflow15 dump-flows br-int                 
+ grep pkt_mark                                                          
 cookie=0x7ca7e0d3, duration=0.004s, table=20, n_packets=0, n_bytes=0, idle_age=0, priority=2000,ip,metadata=0x2,nw_src=192.168.0.1 actions=set_field:0xffffffff->pkt_mark,resubmit(,21)

<==== actions=set_field:0xffffffff->pkt_mark

Comment 7 Jianlin Shi 2020-10-09 08:35:02 UTC
Also verified on the RHEL 7 version:

[root@wsfd-advnetlab16 bz1878248]# rpm -qa | grep -E "openvswitch|ovn"
kernel-kernel-networking-openvswitch-ovn-common-1.0-9.noarch
ovn2.13-20.09.0-1.el7fdp.x86_64
kernel-kernel-networking-openvswitch-ovn-basic-1.0-30.noarch
openvswitch-selinux-extra-policy-1.0-15.el7fdp.noarch
openvswitch2.13-2.13.0-51.el7fdp.x86_64
ovn2.13-central-20.09.0-1.el7fdp.x86_64
ovn2.13-host-20.09.0-1.el7fdp.x86_64

+ ip netns exec server1 ping 172.168.0.1 -c 1
PING 172.168.0.1 (172.168.0.1) 56(84) bytes of data.
64 bytes from 172.168.0.1: icmp_seq=1 ttl=63 time=5.71 ms

--- 172.168.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 5.715/5.715/5.715/0.000 ms
+ ovs-ofctl -O openflow15 dump-flows br-int
+ grep pkt_mark
 cookie=0x6edc6cdd, duration=0.064s, table=20, n_packets=1, n_bytes=98, idle_age=0, priority=2000,ip,metadata=0x2,nw_src=192.168.0.1 actions=set_field:0x64->pkt_mark,resubmit(,21)
+ ovn-nbctl set logical_router_policy 0f7c1d43-0258-41ea-b861-22e4d78ad93e options:pkt_mark=4294967295
+ ovs-ofctl -O openflow15 dump-flows br-int
+ grep pkt_mark
 cookie=0xf8dc2221, duration=0.003s, table=20, n_packets=0, n_bytes=0, idle_age=0, priority=2000,ip,metadata=0x2,nw_src=192.168.0.1 actions=set_field:0xffffffff->pkt_mark,resubmit(,21)

<=== actions=set_field:0xffffffff->pkt_mark,

Comment 9 errata-xmlrpc 2020-10-27 09:49:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (ovn2.13 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4356

