Bug 1639173 - [RFE] [ovs-dpdk] userspace conntrack need to support fragment packets
Summary: [RFE] [ovs-dpdk] userspace conntrack need to support fragment packets
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Aaron Conole
QA Contact: Jiying Qiu
URL:
Whiteboard:
Duplicates: 1684341 (view as bug list)
Depends On:
Blocks: 1684341
 
Reported: 2018-10-15 08:34 UTC by Jiying Qiu
Modified: 2023-03-24 14:17 UTC (History)
CC: 10 users

Fixed In Version: openvswitch2.12-2.12.0-3
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-09-14 13:32:14 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-97716 0 None None None 2021-09-21 10:02:31 UTC

Description Jiying Qiu 2018-10-15 08:34:11 UTC
Description of problem:
In an OVS-DPDK environment, userspace conntrack needs to support forwarding of fragmented packets.

Version-Release number of selected component (if applicable):
openvswitch-2.9.0-70.el7fdp.x86_64.rpm
openvswitch2.10-2.10.0-10.el7fdp.x86_64.rpm
dpdk-17.11-13.el7.x86_64.rpm

How reproducible:
always

Steps to Reproduce:
1. Set up an OVS+DPDK environment with userspace conntrack flow rules.
2. Send packets larger than the MTU through OVS; the fragmented packets are not forwarded.

Actual results:


Expected results:
Fragmented packets are forwarded correctly.

Additional info:
In the following job, packets with payload sizes of 2048, 4096, and 8192 bytes cannot be forwarded:
https://beaker.engineering.redhat.com/jobs/2953408
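As a rough illustration of why exactly these sizes fail: with a standard 1500-byte MTU, any ICMPv4 echo payload over 1472 bytes must be fragmented, so the 2048/4096/8192-byte pings all depend on conntrack handling fragments, while 1024-byte pings do not. A sketch of the arithmetic (assuming IPv4 with a 20-byte header, an 8-byte ICMP echo header, and no IP options):

```python
def ipv4_fragments(icmp_payload: int, mtu: int = 1500) -> list[int]:
    """Return the IP payload size of each fragment for an ICMP echo request."""
    ip_header = 20                          # IPv4 header without options
    icmp_header = 8                         # ICMP echo header
    total = icmp_header + icmp_payload      # bytes carried inside the IP packet
    per_frag = (mtu - ip_header) // 8 * 8   # fragment offsets count in 8-byte units
    frags = []
    while total > 0:
        frags.append(min(per_frag, total))  # only the last fragment may be short
        total -= frags[-1]
    return frags

# ping -s 1024 fits in one packet; -s 2048 needs two fragments
print(ipv4_fragments(1024))   # [1032]
print(ipv4_fragments(2048))   # [1480, 576]
```

Every size the job sends above 1472 bytes therefore arrives at OVS as two or more fragments, and only the first fragment carries the ICMP header that conntrack would normally match on.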

Comment 2 qding 2018-10-15 09:15:25 UTC
To clarify, this bug is about userspace conntrack not supporting IP fragmentation.

Comment 4 Aaron Conole 2018-11-05 15:33:52 UTC
There are some patches initially available, but they haven't all been accepted.

Comment 6 Aaron Conole 2019-07-01 16:56:05 UTC
*** Bug 1684341 has been marked as a duplicate of this bug. ***

Comment 7 Aaron Conole 2019-10-21 16:52:15 UTC
Resolved with OVS 2.12, which will be released as part of FD19.G.

Comment 8 Rick Alongi 2020-01-15 13:35:47 UTC
This issue is said to be fixed in OVS 2.12. However, when I run the test specified in comment 0 of this BZ using openvswitch2.12-2.12.0-12.el7fdp or openvswitch2.12-2.12.0-12.el8fdp, I still see the jumbo frame pings failing:

[root@localhost ~]# ping 10.167.43.1 -s 1024 -c 3
PING 10.167.43.1 (10.167.43.1) 1024(1052) bytes of data.
1032 bytes from 10.167.43.1: icmp_seq=1 ttl=64 time=0.175 ms
1032 bytes from 10.167.43.1: icmp_seq=3 ttl=64 time=0.168 ms

--- 10.167.43.1 ping statistics ---
3 packets transmitted, 2 received, 33% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.168/0.171/0.175/0.013 ms
[root@localhost ~]# echo $?
0
[root@localhost ~]# 
++ assert_pass result 'acl allow by ping [1024] IPv4'
++ '[' 0 -eq 0 ']'
++ echo ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
++ echo ':: [   PASS   ] :: acl' allow by ping '[1024]' IPv4
:: [   PASS   ] :: acl allow by ping [1024] IPv4
++ echo ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
++ VMSH_NOLOGOUT=1
++ vmsh run_cmd g1 'ping 10.167.43.1 -s 2048 -c 3'
spawn virsh console g1
Connected to domain g1
Escape character is ^]

[root@localhost ~]# ping 10.167.43.1 -s 2048 -c 3
PING 10.167.43.1 (10.167.43.1) 2048(2076) bytes of data.

--- 10.167.43.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

[root@localhost ~]# echo $?
1
[root@localhost ~]# 
++ assert_pass result 'acl allow by ping [2048] IPv4'
++ '[' 1 -eq 0 ']'
++ echo ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
++ echo ':: [   FAIL   ] :: acl' allow by ping '[2048]' IPv4
:: [   FAIL   ] :: acl allow by ping [2048] IPv4
++ echo ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
++ eval result=1
+++ result=1
++ VMSH_NOLOGOUT=1
++ vmsh run_cmd g1 'ping 10.167.43.1 -s 4096 -c 3'
spawn virsh console g1
Connected to domain g1
Escape character is ^]

[root@localhost ~]# ping 10.167.43.1 -s 4096 -c 3
PING 10.167.43.1 (10.167.43.1) 4096(4124) bytes of data.

--- 10.167.43.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

[root@localhost ~]# echo $?
1
[root@localhost ~]# 
++ assert_pass result 'acl allow by ping [4096] IPv4'
++ '[' 1 -eq 0 ']'
++ echo ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
++ echo ':: [   FAIL   ] :: acl' allow by ping '[4096]' IPv4
:: [   FAIL   ] :: acl allow by ping [4096] IPv4
++ echo ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
++ eval result=1
+++ result=1
++ VMSH_NOLOGOUT=1
++ vmsh run_cmd g1 'ping 10.167.43.1 -s 8192 -c 3'
spawn virsh console g1
Connected to domain g1
Escape character is ^]

[root@localhost ~]# ping 10.167.43.1 -s 8192 -c 3
PING 10.167.43.1 (10.167.43.1) 8192(8220) bytes of data.

--- 10.167.43.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

Below is the set of commands used to configure the bridge for that particular test:

        ovs-ofctl del-flows $ovs_br
        ovs-ofctl dump-flows $ovs_br
        ovs-ofctl add-flow $ovs_br "table=0,priority=1,action=drop"
        ovs-ofctl add-flow $ovs_br "table=0,priority=10,arp,action=normal"
        ovs-ofctl add-flow $ovs_br "table=0,priority=100,ip,ct_state=-trk,action=ct(table=1)"
        ovs-ofctl add-flow $ovs_br "table=1,in_port=$ofport_v1,ip,ct_state=+trk+new,action=ct(commit),$ofport_p1"
        ovs-ofctl add-flow $ovs_br "table=1,in_port=$ofport_v1,ip,ct_state=+trk+est,action=$ofport_p1"
        ovs-ofctl add-flow $ovs_br "table=1,in_port=$ofport_p1,ip,ct_state=+trk+est,action=$ofport_v1"
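The ovs-ofctl commands above reference shell variables that are not defined in the excerpt; a minimal sketch of the assumed environment (the bridge name and port numbers here are hypothetical, based on a typical OVS-DPDK setup):

```shell
# Hypothetical values for the variables used in the flow rules above;
# the real test harness derives these from its vhost-user/DPDK ports.
ovs_br=ovsbr0       # bridge created with datapath_type=netdev
ofport_v1=1         # OpenFlow port of the guest-facing vhost-user interface
ofport_p1=2         # OpenFlow port of the DPDK physical interface

# Flow logic: table 0 sends untracked IP traffic through conntrack and
# resubmits it to table 1; table 1 commits new connections from the guest
# and forwards established traffic in both directions. If userspace
# conntrack cannot reassemble fragments, non-first fragments never match
# a connection state and fall through to the priority=1 drop rule.
```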

Comment 9 Rick Alongi 2020-01-17 14:10:43 UTC
Setting NEED_INFO flag as I discussed this with aconole via email and he is going to investigate.

Comment 11 Jiying Qiu 2020-07-15 03:29:02 UTC
Tested with openvswitch2.12-2.12.0-4.el7fdp.x86_64.rpm and openvswitch2.12-2.12.0-4.el8fdp.x86_64.rpm.

Large packets are now forwarded to the peer, but some packets are still dropped with ICMP. UDP and TCP work well.

