Bug 2148346 - ice/i40e card: sriov container test fails with 2000-byte and 9000-byte frames
Summary: ice/i40e card: sriov container test fails with 2000-byte and 9000-byte frames
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: DPDK
Version: FDP 22.K
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: David Marchand
QA Contact: liting
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-11-25 06:25 UTC by liting
Modified: 2023-07-13 07:25 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:




Links:
Red Hat Issue Tracker FD-2503 (last updated 2022-11-25 06:34:43 UTC)

Description liting 2022-11-25 06:25:23 UTC
Description of problem:
ice/i40e card: sriov container test fails with 2000-byte and 9000-byte frames

Version-Release number of selected component (if applicable):
[root@netqe22 ~]# uname -r
4.18.0-372.32.1.el8_6.x86_64
dpdk-21.11-1.el8.x86_64.rpm

How reproducible:


Steps to Reproduce:
1. create one vf for each pf
[root@netqe22 ~]# ip link show
4: enp3s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9120 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 40:a6:b7:0b:d0:ac brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether 00:de:ad:01:01:01 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust on
7: enp3s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9120 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 40:a6:b7:0b:d0:ad brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether 00:de:ad:02:02:02 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust on
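
The exact VF setup commands are not in the report; a minimal sketch that would produce the configuration shown above (interface names, MTU, VF MACs, spoof checking and trust taken from the ip link output) is:

# create one VF per PF via sysfs
echo 1 > /sys/class/net/enp3s0f0/device/sriov_numvfs
echo 1 > /sys/class/net/enp3s0f1/device/sriov_numvfs
# apply the VF MAC, spoof-check and trust settings shown above
ip link set enp3s0f0 vf 0 mac 00:de:ad:01:01:01 spoofchk off trust on
ip link set enp3s0f1 vf 0 mac 00:de:ad:02:02:02 spoofchk off trust on
# the PFs carry an MTU of 9120 for the jumbo-frame cases
ip link set enp3s0f0 mtu 9120
ip link set enp3s0f1 mtu 9120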


2. run dpdk-testpmd in the sriov container (host-side VF binding and hugepage setup are sketched after the testpmd output below)
[root@netqe22 perf]# podman run -i -t --privileged -v /dev/vfio/vfio:/dev/vfio/vfio -v /dev/hugepages:/dev/hugepages 5ff210fe8267 dpdk-testpmd -l 2,26,4 -n 4 -m 1024 -- -i --forward-mode=mac --eth-peer=0,00:00:00:00:00:01 --eth-peer=1,00:00:00:00:00:02 --burst=32 --rxd=4096 --txd=4096 --nb-cores=2 --rxq=1 --txq=1 --mbcache=512 --auto-start
EAL: Detected CPU lcores: 48
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_iavf (8086:154c) device: 0000:03:02.0 (socket 0)
EAL: Probe PCI driver: net_iavf (8086:154c) device: 0000:03:0a.0 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set mac packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=180224, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
iavf_configure_queues(): RXDID[1] is not supported, request default RXDID[1] in Queue[0]

Port 0: link state change event

Port 0: link state change event
Port 0: 00:DE:AD:01:01:01
Configuring Port 1 (socket 0)
iavf_configure_queues(): RXDID[1] is not supported, request default RXDID[1] in Queue[0]

Port 1: link state change event

Port 1: link state change event
Port 1: 00:DE:AD:02:02:02
Checking link statuses...
Done
Start automatic packet forwarding
mac packet forwarding - ports=2 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 4 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=00:00:00:00:00:02
Logical Core 26 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=00:00:00:00:00:01

  mac packet forwarding packets/burst=32
  nb forwarding cores=2 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=4096 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=4096 - TX free threshold=32
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=4096 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=4096 - TX free threshold=32
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
testpmd> start
Packet forwarding already started
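
For reference, the EAL log above shows both VFs (0000:03:02.0 and 0000:03:0a.0) being probed through VFIO and no 2048 kB hugepages available, which implies the host was prepared before launching the container. The exact host setup is not captured in this report; a minimal sketch, assuming dpdk-devbind.py from dpdk-tools and 1 GiB hugepages, would be:

# bind the two VFs to vfio-pci on the host (PCI addresses taken from the EAL probe log above)
modprobe vfio-pci
dpdk-devbind.py -b vfio-pci 0000:03:02.0 0000:03:0a.0
# reserve 1 GiB hugepages (page count is illustrative; 1 GiB pages are often reserved on the kernel command line instead)
echo 8 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages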
3. send traffic with trex
[root@netqe32 trafficgen]# ./binary-search.py --traffic-generator=trex-txrx --frame-size=2000 --num-flows=1024 --max-loss-pct=0 --search-runtime=10 --validation-runtime=60 --rate-tolerance=10 --runtime-tolerance=10 --rate=25 --rate-unit=% --duplicate-packet-failure=retry-to-fail --negative-packet-loss=retry-to-fail --warmup-trial --warmup-trial-runtime=10 --rate=100 --rate-unit=% --one-shot=0 --use-src-ip-flows=1 --use-dst-ip-flows=1 --use-src-mac-flows=0 --use-dst-mac-flows=0 --src-macs=00:00:00:00:00:01,00:00:00:00:00:02 --dst-macs=00:de:ad:01:01:01,00:de:ad:02:02:02 --send-teaching-measurement --send-teaching-warmup --teaching-warmup-packet-type=generic --teaching-warmup-packet-rate=1000
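
The command above is the 2000-byte case; the 9000-byte runs presumably use the same binary-search.py invocation with only the frame size changed, i.e.:

./binary-search.py --traffic-generator=trex-txrx --frame-size=9000 <remaining options as in the 2000-byte command above>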


Actual results:
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 74954          RX-dropped: 0             RX-total: 74954
  TX-packets: 3072           TX-dropped: 0             TX-total: 3072
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 74954          RX-dropped: 0             RX-total: 74954
  TX-packets: 3072           TX-dropped: 0             TX-total: 3072
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 149908         RX-dropped: 0             RX-total: 149908
  TX-packets: 6144           TX-dropped: 0             TX-total: 6144
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The trex traffic run fails and no result is obtained: each port receives 74954 packets but forwards only 3072.

Expected results:
When sending 2000-byte and 9000-byte traffic, the test should complete and report a normal result.

Additional info:
2000-byte i40e:
https://beaker.engineering.redhat.com/jobs/7276826
2000-byte ice:
https://beaker.engineering.redhat.com/jobs/7271879
9000-byte ice:
https://beaker.engineering.redhat.com/jobs/7271606

Comment 1 liting 2022-11-25 09:47:25 UTC
For the ixgbe card:
2000-byte sriov pvp and sriov container: pass
https://beaker.engineering.redhat.com/jobs/7277507
https://beaker.engineering.redhat.com/jobs/7276785
9000-byte sriov pvp: fail
https://beaker.engineering.redhat.com/jobs/7277431
9000-byte sriov container: fail
https://beaker.engineering.redhat.com/jobs/7277355

Comment 2 liting 2022-11-28 07:15:53 UTC
Ran the sriov pvp and container cases on an i40e card with RHEL 9. The 2000-byte and 9000-byte sriov pvp cases work well, but the 2000-byte and 9000-byte sriov container cases fail.
https://beaker.engineering.redhat.com/jobs/7282867
https://beaker-archive.host.prod.eng.bos.redhat.com/beaker-logs/2022/11/72828/7282867/13013900/153341149/i40e_10.html

Comment 3 liting 2022-11-28 09:59:11 UTC
Ran the sriov pvp and container cases on an i40e card with RHEL 8.4. The 2000-byte and 9000-byte sriov pvp cases work well, but the 2000-byte and 9000-byte sriov container cases fail.
https://beaker.engineering.redhat.com/jobs/7282992
https://beaker-archive.host.prod.eng.bos.redhat.com/beaker-logs/2022/11/72829/7282992/13014086/153345304/i40e_10.html

Comment 4 liting 2023-03-13 05:35:33 UTC
This issue still exists on RHEL 8.8 and RHEL 9.2.
RHEL 8.8:
https://beaker.engineering.redhat.com/jobs/7617603
RHEL 9.2:
https://beaker.engineering.redhat.com/jobs/7617620

Comment 5 liting 2023-03-13 05:41:02 UTC
The test card used in comment 4 is a CX6 LX card.

