The original bug has been filed here:
It has been verified that this behavior affects OvS conntrack, which skips processing the affected packets because they are marked as corrupted.
Original description reported below:
On ixgbe (in particular on 82599 hardware), when receiving UDP packets with a zero checksum (i.e. no checksum) over IPv4,
the PKT_RX_L4_CKSUM_BAD bit is set in ol_flags, whereas PKT_RX_L4_CKSUM_GOOD is expected.
This happens because of a hardware erratum, but it still needs to be handled, because applications
such as OvS are in turn affected by this issue.
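For context, a zero UDP checksum over IPv4 legitimately means "no checksum was computed" (RFC 768), which is why such packets should not be flagged as bad. A minimal sketch of the checksum computation, in Python purely for illustration, shows why: a transmitted checksum that would compute to 0 is sent as 0xFFFF instead, leaving the value 0 free to mean "not computed".

```python
def ones_complement_sum(data: bytes) -> int:
    """Sum 16-bit big-endian words with end-around carry (Internet checksum core)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def udp_checksum(pseudo_header: bytes, udp_segment: bytes) -> int:
    """RFC 768 checksum: one's complement of the one's-complement sum.

    A computed result of 0 is transmitted as 0xFFFF, so a checksum field
    of 0 on the wire unambiguously means 'sender computed no checksum'
    (allowed over IPv4 only).
    """
    csum = (~ones_complement_sum(pseudo_header + udp_segment)) & 0xFFFF
    return csum if csum != 0 else 0xFFFF
```

This is only the protocol-level rationale for why PKT_RX_L4_CKSUM_GOOD (rather than BAD) is the expected outcome; the hardware erratum itself is in the 82599's receive checksum offload.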
This behavior can be easily reproduced using testpmd with --enable-rx-cksum option.
# lshw -businfo -c network
Bus info Device Class Description
pci@0000:01:00.0 em1 network 82599ES 10-Gigabit SFI/SFP+ Network Connection
pci@0000:01:00.1 em2 network 82599ES 10-Gigabit SFI/SFP+ Network Connection
testpmd -l 2,4 -w 0000:01:00.0 -- -i --port-topology=chained --enable-rx-cksum
testpmd> show device info all
********************* Infos for device 0000:01:00.0 *********************
Bus name: pci
Driver name: net_ixgbe
Connect to socket: 0
Port id: 0
MAC address: EC:F4:BB:DB:FC:18
Device name: 0000:01:00.0
Device speed capability: 1 Gbps 10 Gbps
testpmd> set fwd rxonly
testpmd> set verbose 1
Then, sending packets from a tester machine using scapy (with a zero checksum):
sendp(Ether(src="ec:f4:bb:dc:09:d0",dst="ec:f4:bb:db:fc:18")/IP(src="192.168.30.200", dst="192.168.30.100")/UDP(chksum=0)/Raw("a"*100), iface="em1")
The result is:
port 0/queue 0: received 1 packets
src=EC:F4:BB:DC:09:D0 - dst=EC:F4:BB:DB:FC:18 - type=0x0800 - length=142 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4 L4_UDP - sw ptype: L2_ETHER L3_IPV4 L4_UDP - l2_len=14 - l3_len=20 - l4_len=8 - Receive queue=0x0
ol_flags: PKT_RX_L4_CKSUM_BAD PKT_RX_IP_CKSUM_GOOD PKT_RX_OUTER_L4_CKSUM_UNKNOWN
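For readers unfamiliar with the flags above: applications decode the two L4 checksum bits in ol_flags into four states. The sketch below is a Python transcription for illustration only (the real check is done in C against struct rte_mbuf); the bit values are as defined in DPDK 20.11's rte_mbuf_core.h and may differ in other releases.

```python
# ol_flags L4 checksum bits, as defined in DPDK 20.11 rte_mbuf_core.h
# (illustrative transcription, not the actual DPDK/OvS code).
PKT_RX_L4_CKSUM_UNKNOWN = 0
PKT_RX_L4_CKSUM_BAD = 1 << 3
PKT_RX_L4_CKSUM_GOOD = 1 << 8
PKT_RX_L4_CKSUM_MASK = PKT_RX_L4_CKSUM_BAD | PKT_RX_L4_CKSUM_GOOD
PKT_RX_L4_CKSUM_NONE = PKT_RX_L4_CKSUM_MASK  # both bits set

def l4_cksum_status(ol_flags: int) -> str:
    """Decode the two L4 checksum bits into one of four states."""
    return {
        PKT_RX_L4_CKSUM_UNKNOWN: "UNKNOWN",
        PKT_RX_L4_CKSUM_BAD: "BAD",
        PKT_RX_L4_CKSUM_GOOD: "GOOD",
        PKT_RX_L4_CKSUM_NONE: "NONE",
    }[ol_flags & PKT_RX_L4_CKSUM_MASK]
```

With the buggy behavior above, a zero-checksum UDP packet decodes to "BAD", which is what causes consumers such as OvS conntrack to treat the packet as corrupted.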
Fixes (well, a workaround for 82599 only, since this is a hw bug) were merged in v21.02,
with a follow-up patch on stats in v21.05.
All of those are now backported in stable/20.11 as part of 20.11.2, freshly released upstream.
The question now is whether we want those fixes downstream immediately, or whether we can wait for the next merge of the DPDK LTS releases downstream.
(In reply to David Marchand from comment #2)
> And a followup patch on stats in v21.05.
> All those are now backported in stable/20.11 part of 20.11.2, freshly
> released upstream.
> Now the question is whether we want those fixes downstream now, or if we can
> wait next merging of dpdk LTS releases downstream.
If you agree, waiting for the next LTS sounds reasonable to me.