
Bug 2186069

Summary: ixgbe card: test fails when running the regression test for bug 2055446
Product: Red Hat Enterprise Linux Fast Datapath
Reporter: liting <tli>
Component: DPDK
Assignee: Kevin Traynor <ktraynor>
DPDK sub component: sriov
QA Contact: liting <tli>
Status: CLOSED NOTABUG
Docs Contact:
Severity: unspecified
Priority: unspecified
CC: ctrautma, fleitner, jhsiao
Version: RHEL 8.0
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2024-09-16 15:22:16 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: ---
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host: ---
Cloudforms Team: ---
Target Upstream Version: ---
Embargoed:

Description liting 2023-04-12 02:44:43 UTC
Description of problem:


Version-Release number of selected component (if applicable):
kernel: 4.18.0-372.40.1.el8_6.x86_64
dpdk-21.11-1.el8.x86_64.rpm

How reproducible:


Steps to Reproduce:
1. Create a VF and bind the VF to DPDK
2. Run the testpmd command and use TRex to send traffic (16 VLAN packets and 16 non-VLAN packets)
python3 /mnt/tests/kernel/networking/ovs-dpdk/regression_bug/testcase_bug2055446/checkpmd.py -t dpdk-testpmd -p a -b 0000:17:10.0 -c dell-per740-55.rhts.eng.pek2.redhat.com -n ens3f0
dpdk-testpmd -c 0xf -n 4 -a 0000:17:10.0 -- -i --forward-mode=mac --auto-start
EAL: Detected CPU lcores: 80
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available 2048 kB hugepages reported
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ixgbe_vf (8086:10ed) device: 0000:17:10.0 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set mac packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 00:00:00:00:00:02
Checking link statuses...
Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
Start automatic packet forwarding
mac packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 1) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  mac packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
testpmd> stop
stop

Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 16             RX-dropped: 0             RX-total: 16
  TX-packets: 16             TX-dropped: 0             TX-total: 16
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 16             RX-dropped: 0             RX-total: 16
  TX-packets: 16             TX-dropped: 0             TX-total: 16
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
testpmd> quit


Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
Port 0 is closed
Done

Bye...

Actual results:
Testpmd only receives the 16 non-VLAN packets; it does not receive the VLAN packets.

Expected results:
Testpmd should receive both the 16 non-VLAN packets and the 16 VLAN packets (32 in total).

Additional info:
fail job:
https://beaker.engineering.redhat.com/jobs/7725685
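
For reference, the traffic mix in step 2 (16 untagged plus 16 VLAN-tagged frames) can be sketched in plain Python. This is a hypothetical illustration, not the actual TRex profile: the destination MAC is taken from the testpmd log above, VLAN ID 11 from the rx_vlan command in comment 2, and the helper build_frame is invented for this sketch.

```python
import struct

def build_frame(dst, src, payload, vlan=None):
    """Build a raw Ethernet frame, optionally inserting an 802.1Q VLAN tag.

    dst/src are 6-byte MAC addresses; vlan is a 12-bit VLAN ID or None.
    """
    hdr = dst + src
    if vlan is not None:
        # 802.1Q tag: TPID 0x8100 followed by the TCI (priority 0, given VLAN ID).
        hdr += struct.pack("!HH", 0x8100, vlan & 0x0FFF)
    hdr += struct.pack("!H", 0x0800)  # EtherType: IPv4
    return hdr + payload

dst = bytes.fromhex("000000000002")  # testpmd port 0 MAC from the log above
src = bytes.fromhex("020000000000")  # placeholder source MAC
payload = b"\x00" * 46               # minimum Ethernet payload, zero-filled

# 16 untagged and 16 VLAN-tagged frames, mirroring the traffic mix in step 2.
untagged = [build_frame(dst, src, payload) for _ in range(16)]
tagged = [build_frame(dst, src, payload, vlan=11) for _ in range(16)]
```

The bug is that only the 16 untagged frames of this mix are delivered to the ixgbe VF unless a VLAN filter is programmed, as shown in comment 2.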

Comment 1 liting 2023-04-12 04:57:52 UTC
RHEL 9.2 also has this issue.
https://beaker.engineering.redhat.com/jobs/7728576

Comment 2 liting 2023-04-12 07:04:57 UTC
RHEL 8.4 also has this issue.
https://beaker.engineering.redhat.com/jobs/7729047
When testpmd sets a VLAN filter, it works correctly: send 16 non-VLAN packets and 16 VLAN packets, and testpmd receives all 32 packets.
[root@dell-per750-37 ~]# dpdk-testpmd -c 0xf -n 4 -a 0000:17:10.0 -- -i --forward-mode=mac --auto-start
EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ixgbe_vf (8086:10ed) device: 0000:17:10.0 (socket 0)
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set mac packet forwarding mode
Auto-start selected
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 00:00:00:00:00:02
Checking link statuses...
Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
Start automatic packet forwarding
mac packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 1) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  mac packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
testpmd> vlan set filter on 0
testpmd> rx_vlan add 11 0
testpmd> start
Packet forwarding already started
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 32             RX-dropped: 0             RX-total: 32
  TX-packets: 32             TX-dropped: 0             TX-total: 32
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 32             RX-dropped: 0             RX-total: 32
  TX-packets: 32             TX-dropped: 0             TX-total: 32
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
testpmd>

Comment 3 liting 2023-11-07 05:55:45 UTC
RHEL 9.2 (RHEL-9.2.0-updates-20231031.43) still has this issue.
https://beaker.engineering.redhat.com/jobs/8526270

Comment 4 Kevin Traynor 2024-09-16 15:22:16 UTC
Migrated to https://issues.redhat.com/browse/FD-2819