Description of problem:
UDP data loss is supposed to be less than 1% in the following situation:
1. vRSS hashing excludes the UDP ports (the hash only uses the IP addresses):
# ethtool -n eth0 rx-flow-hash udp4
UDP over IPV4 flows use these fields for computing Hash flow key:
IP SA
IP DA
2. The datagram length is larger than the MTU, e.g. 8k.

Tested with 2 Standard_D15_v2 VMs, running the commands below on each respectively:
# iperf3 -s -4 -p 8001
# iperf3 -u -c 10.0.0.4 -p 8001 -4 -b 0 -l 8k -P 64 -t 60 --get-server-output -i 60

And the result is:
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5] 0.00-60.01 sec  1.09 MBytes   152 Kbits/sec   4.741 ms  31221/31360 (1e+02%)  receiver
[  6] 0.00-60.01 sec   944 KBytes   129 Kbits/sec   3.708 ms  30642/30760 (1e+02%)  receiver
...
[SUM] 0.00-60.01 sec  55.2 MBytes   7.71 Mbits/sec  5.510 ms  1646217/1653281 (1e+02%)  receiver

Version-Release number of selected component (if applicable):
4.18.0-372.29.1.el8_6.x86_64

How reproducible:
100% on Azure.

Steps to Reproduce:
1. Create 2 VMs and restrict the vRSS hash to IP addresses only (excluding UDP ports):
# ethtool -N eth0 rx-flow-hash udp4 sd
2. Run the iperf3 commands from the description on each VM respectively.

Actual results:
Data loss is greater than 1%.

Expected results:
Data loss is less than 1%.

Additional info:
1. This issue presents from 8.6 through 9.1.
2. Related BZ - https://bugzilla.redhat.com/show_bug.cgi?id=1474300 (looks like a re-occurrence).
3. The issue does not present when datagrams are smaller than the MTU.
4. The issue also presents on other VM sizes, e.g. Standard_D16s_v5.
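Not part of the original reproducer, but a possible sanity check (assuming the interface is eth0 as above): since datagrams larger than the MTU are fragmented and have to be reassembled on the receiver, sampling the IP reassembly counters from /proc/net/snmp before and after a run should show whether the loss is reassembly-related rather than plain queue drops:
# nstat -az IpReasmReqds IpReasmOKs IpReasmFails
For comparison, the default hash that also includes the UDP ports can be restored and the test rerun with:
# ethtool -N eth0 rx-flow-hash udp4 sdfn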
(In reply to Li Tian from comment #0)

Is it a realistic scenario in the real world to have a packet size larger than the MTU setting? @vkuznets, perhaps you can shed some light on this matter? Thanks!
(In reply to Eduardo Otubo from comment #1)
> Is it a realistic scenario in the real world to have a packet size larger
> than the MTU setting?

Generally speaking, it is. An application using UDP has no idea about the MTU of the underlying NIC; it can always try to send bigger datagrams. I'm not sure how common it is, though.
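As a concrete illustration (assuming the usual 1500-byte MTU; eth0 and port 8001 are taken from the reproducer): with -l 8k each iperf3 datagram exceeds the MTU, so the sender's kernel splits it into roughly six IPv4 fragments and the receiver has to reassemble them; losing any single fragment drops the whole 8k datagram. The MTU and the fragments on the wire can be observed with:
# ip link show eth0 | grep -o 'mtu [0-9]*'
# tcpdump -ni eth0 'ip[6:2] & 0x3fff != 0'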
Issue still presents on latest 8.8 (4.18.0-472.el8.x86_64):

[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5] 0.00-60.02 sec   584 KBytes   79.7 Kbits/sec  0.111 ms  69556/69629 (1e+02%)  receiver
[  6] 0.00-60.02 sec   600 KBytes   81.9 Kbits/sec  0.072 ms  69521/69596 (1e+02%)  receiver
...
[SUM] 0.00-60.02 sec  40.4 MBytes   5.65 Mbits/sec  0.135 ms  4450502/4455678 (1e+02%)  receiver
> 1. This issue presents from 8.6 through 9.1.

What's the result for 8.5/9.0?