The FDP team is no longer accepting new bugs in Bugzilla. Please report your issues under the FDP project in Jira. Thanks.
Bug 1922430 - ixgbe: 82599 chksum rx offload marks UDP packets over IPv4 with zero checksum as corrupted
Summary: ixgbe: 82599 chksum rx offload marks UDP packets over IPv4 with zero checksum as corrupted
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: DPDK
Version: FDP 21.A
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: David Marchand
QA Contact: liting
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-01-29 17:20 UTC by Paolo Valerio
Modified: 2022-05-16 13:15 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-16 12:47:32 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker: FD-1057 (last updated 2022-05-16 13:15:11 UTC)

Description Paolo Valerio 2021-01-29 17:20:11 UTC
The original bug has been filed here:

https://bugs.dpdk.org/show_bug.cgi?id=629

It has been verified that this behavior affects OvS conntrack, which avoids processing the packets because they are marked as corrupted.

Original description reported below:

On ixgbe (in particular on 82599 hardware), when receiving UDP packets with a zero checksum (i.e. no checksum) over IPv4,
the PKT_RX_L4_CKSUM_BAD bit is set in ol_flags, whereas PKT_RX_L4_CKSUM_GOOD is expected instead.

This happens because of a hardware erratum [1], but it still needs to be handled, because applications
such as OvS are in turn affected by this issue.
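As a rough illustration (not part of the original report) of how an application that consumes ol_flags could tolerate the erratum, the following sketch treats flagged IPv4/UDP packets carrying a zero checksum as acceptable; the helper name and structure are hypothetical and this is not OvS's actual workaround:

#include <stdbool.h>
#include <stddef.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_udp.h>
#include <rte_mbuf.h>

/* Hypothetical helper: decide whether a packet flagged PKT_RX_L4_CKSUM_BAD
 * is really corrupted, or is just an IPv4/UDP packet with checksum 0 that
 * the 82599 mis-reports because of the erratum. */
static bool
l4_cksum_really_bad(const struct rte_mbuf *m)
{
	const struct rte_ipv4_hdr *ip;
	const struct rte_udp_hdr *udp;
	size_t l3_len;

	if ((m->ol_flags & PKT_RX_L4_CKSUM_MASK) != PKT_RX_L4_CKSUM_BAD)
		return false;

	/* Only plain IPv4/UDP is covered here, matching the ptype that
	 * testpmd prints in this report. */
	if ((m->packet_type & RTE_PTYPE_L3_MASK) != RTE_PTYPE_L3_IPV4 ||
	    (m->packet_type & RTE_PTYPE_L4_MASK) != RTE_PTYPE_L4_UDP)
		return true;

	ip = rte_pktmbuf_mtod_offset(m, const struct rte_ipv4_hdr *,
				     sizeof(struct rte_ether_hdr));
	l3_len = (size_t)(ip->version_ihl & RTE_IPV4_HDR_IHL_MASK) *
		 RTE_IPV4_IHL_MULTIPLIER;
	udp = (const struct rte_udp_hdr *)((const uint8_t *)ip + l3_len);

	/* A zero UDP checksum over IPv4 means "no checksum" (RFC 768),
	 * so the packet should not be treated as corrupted. */
	return udp->dgram_cksum != 0;
}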

This behavior can be easily reproduced using testpmd with the --enable-rx-cksum option.

# lshw -businfo -c network
Bus info          Device     Class          Description
=======================================================
pci@0000:01:00.0  em1        network        82599ES 10-Gigabit SFI/SFP+ Network Connection
pci@0000:01:00.1  em2        network        82599ES 10-Gigabit SFI/SFP+ Network Connection

testpmd -l 2,4 -w 0000:01:00.0 -- -i --port-topology=chained --enable-rx-cksum
testpmd> show device info all

********************* Infos for device 0000:01:00.0 *********************
Bus name: pci
Driver name: net_ixgbe
Devargs: 
Connect to socket: 0

	Port id: 0 
	MAC address: EC:F4:BB:DB:FC:18
	Device name: 0000:01:00.0
	Device speed capability: 1 Gbps   10 Gbps

testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start

and sending packets from a tester machine using scapy (w/ zero checksum):

sendp(Ether(src="ec:f4:bb:dc:09:d0",dst="ec:f4:bb:db:fc:18")/IP(src="192.168.30.200", dst="192.168.30.100")/UDP(chksum=0)/Raw("a"*100), iface="em1")

the result is:

port 0/queue 0: received 1 packets
  src=EC:F4:BB:DC:09:D0 - dst=EC:F4:BB:DB:FC:18 - type=0x0800 - length=142 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4 L4_UDP  - sw ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
  ol_flags: PKT_RX_L4_CKSUM_BAD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN


[1] https://patchwork.ozlabs.org/project/netdev/patch/20090724040031.30202.1531.stgit@localhost.localdomain/

Comment 1 David Marchand 2021-02-15 12:58:25 UTC
Fixes (well, a workaround for 82599 only since this is a hw bug) have been merged in v21.02.
https://git.dpdk.org/dpdk/commit/?id=9a40edb599d7
https://git.dpdk.org/dpdk/commit/?id=b9c366e029f5
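For illustration, a rough sketch of the general idea behind such a workaround (this is not the contents of the commits above): when the NIC reports PKT_RX_L4_CKSUM_BAD for an IPv4/UDP packet, the L4 checksum can be re-verified in software and ol_flags fixed up. rte_ipv4_udptcp_cksum() is an existing DPDK helper; the surrounding function is hypothetical:

#include <stddef.h>
#include <rte_ip.h>
#include <rte_udp.h>
#include <rte_mbuf.h>

/* Hypothetical post-RX fixup, assuming DPDK 20.11-era APIs: re-check the
 * UDP checksum in software for IPv4 packets the hardware flagged as bad,
 * and mark them good when they are in fact fine. */
static void
fixup_udp_cksum_flag(struct rte_mbuf *m, size_t l2_len)
{
	struct rte_ipv4_hdr *ip;
	struct rte_udp_hdr *udp;
	size_t l3_len;

	if ((m->ol_flags & PKT_RX_L4_CKSUM_MASK) != PKT_RX_L4_CKSUM_BAD)
		return;
	if ((m->packet_type & RTE_PTYPE_L3_MASK) != RTE_PTYPE_L3_IPV4 ||
	    (m->packet_type & RTE_PTYPE_L4_MASK) != RTE_PTYPE_L4_UDP)
		return;

	ip = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *, l2_len);
	l3_len = (size_t)(ip->version_ihl & RTE_IPV4_HDR_IHL_MASK) *
		 RTE_IPV4_IHL_MULTIPLIER;
	udp = (struct rte_udp_hdr *)((uint8_t *)ip + l3_len);

	/* Checksum 0 means "not computed" for UDP over IPv4; otherwise
	 * rte_ipv4_udptcp_cksum() returns 0xffff when the checksum already
	 * stored in the packet verifies. */
	if (udp->dgram_cksum == 0 ||
	    rte_ipv4_udptcp_cksum(ip, udp) == 0xffff) {
		m->ol_flags &= ~PKT_RX_L4_CKSUM_MASK;
		m->ol_flags |= PKT_RX_L4_CKSUM_GOOD;
	}
}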

Comment 2 David Marchand 2021-07-12 12:55:11 UTC
And a followup patch on stats in v21.05.
https://git.dpdk.org/dpdk/commit/?id=2ee14c8905e9

All of those are now backported to stable/20.11 as part of 20.11.2, freshly released upstream.

Now the question is whether we want those fixes downstream now, or whether we can wait for the next merge of DPDK LTS releases downstream.

Comment 4 Paolo Valerio 2021-08-04 18:14:42 UTC
(In reply to David Marchand from comment #2)
> And a followup patch on stats in v21.05.
> https://git.dpdk.org/dpdk/commit/?id=2ee14c8905e9
> 
> All of those are now backported to stable/20.11 as part of 20.11.2, freshly
> released upstream.
> 
> Now the question is whether we want those fixes downstream now, or whether
> we can wait for the next merge of DPDK LTS releases downstream.

If you agree, waiting for the next LTS sounds reasonable to me.

