Bug 1711131 - [ovs-dpdk] TCP traffic is not forwarded in SNAT topology when ct_mark is changed on the reply
Summary: [ovs-dpdk] TCP traffic is not forwarded in SNAT topology when ct_mark is changed on the reply
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: openvswitch2.11
Version: FDP 19.C
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Flavio Leitner
QA Contact: Jiying Qiu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-05-17 03:56 UTC by Jiying Qiu
Modified: 2023-09-14 05:28 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-02-27 12:23:21 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Jiying Qiu 2019-05-17 03:56:50 UTC
Description of problem:
In an SNAT topology, when ct_mark is changed on the reply direction, TCP traffic is not forwarded.

Version-Release number of selected component (if applicable):
openvswitch2.11-2.11.0-9.el7fdp.x86_64
openvswitch-selinux-extra-policy-1.0-11.el7fdp.noarch
dpdk-18.11-4.el7_6.x86_64

How reproducible:
always

topo:
vm---ovs----(physical connect)-----server

Steps to Reproduce:
1.ovs setup
# ovs-vsctl show 
d4cd1f98-2727-4d9b-8862-4f449718388a
    Bridge "ovsbr0"
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
        Port "vhost0"
            Interface "vhost0"
                type: dpdkvhostuserclient
                options: {n_rxq="1", vhost-server-path="/tmp/vhost0"}
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:05:00.0", n_rxq="1"}
    ovs_version: "2.11.0"
2.ovs flow setup
ovs-ofctl add-flow ovsbr0 "table=0,priority=1000,ip,in_port=vhost0 actions=ct(commit,zone=1,nat(src=172.16.100.254)),output:dpdk0"
ovs-ofctl add-flow ovsbr0 "table=0,priority=1000,ct_state=-trk,ip,in_port=dpdk0 actions=ct(table=0,zone=1,nat)"
ovs-ofctl add-flow ovsbr0 "table=0,priority=1000,ct_state=+trk,ct_zone=1,ip,in_port=dpdk0 actions=ct(commit,table=1,zone=1,exec(load:0x1->NXM_NX_CT_MARK[]))"
ovs-ofctl add-flow ovsbr0 "table=0,priority=100,arp,arp_op=1 actions=move:NXM_OF_ARP_TPA[]->NXM_NX_REG2[],resubmit(,8),resubmit(,10)"
ovs-ofctl add-flow ovsbr0 "table=0,priority=10,arp actions=NORMAL"
ovs-ofctl add-flow ovsbr0 "table=0,priority=0 actions=NORMAL"
ovs-ofctl add-flow ovsbr0 "table=0,priority=1 actions=drop"
ovs-ofctl add-flow ovsbr0 "table=1,ct_state=+rpl,ct_zone=1,ct_mark=0x1,ip,in_port=dpdk0 actions=output:vhost0"
#####arp flow set
ovs-ofctl add-flow ovsbr0 "table=8,reg2=0xac1064fe actions=load:0x10401->OXM_OF_PKT_REG0[]"
ovs-ofctl add-flow ovsbr0 "table=8,priority=0 actions=load:0->OXM_OF_PKT_REG0[]"

ovs-ofctl add-flow ovsbr0 "table=10,priority=100,arp,reg0=0,reg1=0 actions=NORMAL"
ovs-ofctl add-flow ovsbr0 "table=10,priority=10,arp,arp_op=1 actions=load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:OXM_OF_PKT_REG0[0..47]->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],move:NXM_NX_REG2[]->NXM_OF_ARP_SPA[],move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],move:OXM_OF_PKT_REG0[0..47]->NXM_OF_ETH_SRC[],move:NXM_OF_IN_PORT[]->NXM_NX_REG3[0..15],load:0->NXM_OF_IN_PORT[],output:NXM_NX_REG3[0..15]"
ovs-ofctl add-flow ovsbr0 "table=10,priority=0 actions=drop"
ovs-ofctl add-flow ovsbr0 "table=100,priority=100 actions=drop"

3. VM IP: 10.167.100.2
   Server IP: 172.16.100.1
From the VM: ssh -o ConnectTimeout=3 root@172.16.100.1 hostname

Actual results:
connection timeout
tcp,orig=(src=10.167.100.2,dst=172.16.100.1,sport=58890,dport=22),reply=(src=172.16.100.1,dst=172.16.100.254,sport=22,dport=58890),zone=1,mark=1,protoinfo=(state=ESTABLISHED)

Expected results:
TCP traffic is forwarded normally.

Additional info:
Pinging the server from the VM shows about 50% packet loss (every other echo reply is lost):
[root@localhost ~]# ping 172.16.100.1 -c 10
PING 172.16.100.1 (172.16.100.1) 56(84) bytes of data.
64 bytes from 172.16.100.1: icmp_seq=1 ttl=64 time=0.601 ms
64 bytes from 172.16.100.1: icmp_seq=3 ttl=64 time=0.477 ms
64 bytes from 172.16.100.1: icmp_seq=5 ttl=64 time=0.473 ms
64 bytes from 172.16.100.1: icmp_seq=7 ttl=64 time=0.475 ms
64 bytes from 172.16.100.1: icmp_seq=9 ttl=64 time=0.468 ms

--- 172.16.100.1 ping statistics ---
10 packets transmitted, 5 received, 50% packet loss, time 9005ms
rtt min/avg/max/mdev = 0.468/0.498/0.601/0.058 ms

Comment 1 Flavio Leitner 2020-01-07 13:30:33 UTC
Can you dump the flows and see if the packet is reaching table 1?
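
One way to check could be something like the following (a sketch only: the port and bridge names come from the setup above, and the trace flow string is assembled from the conntrack entry in the report, so treat it as an assumption rather than a command anyone has run here):

```shell
# Per-flow hit counters; n_packets=0 on the table=1 flow would mean
# reply traffic never reaches it.
ovs-ofctl dump-flows ovsbr0

# Trace a simulated reply packet arriving on dpdk0 (addresses and
# ports taken from the conntrack entry in the report).
ovs-appctl ofproto/trace ovsbr0 \
  "in_port=dpdk0,tcp,nw_src=172.16.100.1,nw_dst=172.16.100.254,tp_src=22,tp_dst=58890"
```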

Has this setup worked before?

IIRC, you can't use table= together with commit inside ct().
Looking at ovs-actions(7):

   The ct action
       Syntax:
              ct([argument]...)
              ct(commit[, argument]...)
[...]
       Without commit, the ct action accepts the following arguments:
[...]
              table=table
              nat
              nat(type=addrs[:ports][,flag]...)
[...]
      The following options are available only with commit:
[...]
              force
              exec(action...)
              alg=alg

Thanks
fbl

Comment 2 Flavio Leitner 2020-02-27 12:23:21 UTC
Hello,

This has been waiting for information from the reporter for several weeks now.
Therefore I am going to close it to clean up. However, feel free to reopen the bug if you can provide the requested data, and I will be glad to continue helping.
Thanks
fbl

Comment 3 Red Hat Bugzilla 2023-09-14 05:28:45 UTC
The needinfo request(s) on this closed bug have been removed, as they have been unresolved for 1000 days.

