Bug 1889631 - The "Should forward and receive packets" dpdk test always fails
Summary: The "Should forward and receive packets" dpdk test always fails
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: CNF Platform Validation
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Sebastian Scheinkman
QA Contact: Nikita
URL:
Whiteboard:
Depends On:
Blocks: 1889678 1889743
 
Reported: 2020-10-20 08:53 UTC by Sebastian Scheinkman
Modified: 2022-08-24 12:52 UTC
3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 1889743 (view as bug list)
Environment:
Last Closed: 2022-08-24 12:52:49 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift-kni cnf-features-deploy pull 344 0 None closed Bug 1889631: Make dpdk test more robust 2021-02-02 23:32:02 UTC

Description Sebastian Scheinkman 2020-10-20 08:53:00 UTC
Description of problem:

When running the dpdk test, we rely on the default gateway network so that testpmd in loopback mode can pick up some packets. After the keepalived change, however, there are no longer any multicast packets on the NIC, so the test always fails.

Comment 3 Sebastian Scheinkman 2020-10-27 15:51:26 UTC
Test validated on Intel card

• [SLOW TEST:218.918 seconds]
dpdk
/home/sscheink/Documents/GolangProjects/src/github.com/openshift-kni/cnf-features-deploy/functests/dpdk/dpdk.go:88
  Validate the build
  /home/sscheink/Documents/GolangProjects/src/github.com/openshift-kni/cnf-features-deploy/functests/dpdk/dpdk.go:148
    Should forward and receive packets from a pod running dpdk base on a image created by building config
    /home/sscheink/Documents/GolangProjects/src/github.com/openshift-kni/cnf-features-deploy/functests/dpdk/dpdk.go:149

container output:

++ cat /sys/fs/cgroup/cpuset/cpuset.cpus
+ export CPU=10,12,14,16
+ CPU=10,12,14,16
10,12,14,16
+ echo 10,12,14,16
+ echo 0000:3b:02.4
0000:3b:02.4
+ cat
+ expect -f test.sh
spawn testpmd -l 10,12,14,16 -w 0000:3b:02.4 --iova-mode=va -- -i --portmask=0x1 --nb-cores=2 --eth-peer=0,ff:ff:ff:ff:ff:ff --forward-mode=txonly --no-mlockall
EAL: Detected 52 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:3b:02.4 on NUMA socket 0
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
i40evf_dev_init(): Init vf failed
EAL: Releasing pci mapped resource for 0000:3b:02.4
EAL: Calling pci_unmap_resource for 0000:3b:02.4 at 0x4300000000
EAL: Calling pci_unmap_resource for 0000:3b:02.4 at 0x4300010000
EAL: Requested device 0000:3b:02.4 cannot be used
testpmd: No probed ethernet devices
Interactive-mode selected
Set txonly packet forwarding mode
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
testpmd> port stop 0
Invalid port 0
testpmd> port detach 0
Removing a device...
Invalid port 0
testpmd> port attach 0000:3b:02.4
Attaching a new port...
EAL: PCI device 0000:3b:02.4 on NUMA socket 0
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
Port 0 is attached. Now total ports is 1
Done
testpmd> port start 0
Configuring Port 0 (socket 0)
Port 0: 2E:26:E0:2F:48:BE
Checking link statuses...
Done
testpmd> start

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

txonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 12 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=FF:FF:FF:FF:FF:FF

  txonly packet forwarding packets/burst=32
  packet len=64 - nb packet segments=1
  nb forwarding cores=2 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=32
      RX threshold registers: pthresh=8 hthresh=8  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 111177567      TX-dropped: 488858657     TX-total: 600036224
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 1              RX-dropped: 0             RX-total: 1
  TX-packets: 111177567      TX-dropped: 488858657     TX-total: 600036224
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
testpmd> quit

Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
Done

Bye...
+ sleep INF

  ~  oc -n dpdk-testing logs -f dpdk-nbr4f
++ cat /sys/fs/cgroup/cpuset/cpuset.cpus
+ export CPU=2,4,6,8
+ CPU=2,4,6,8
+ echo 2,4,6,8
2,4,6,8
+ echo 0000:3b:02.1
0000:3b:02.1
+ '[' testpmd == testpmd ']'
+ envsubst
+ chmod +x test.sh
+ expect -f test.sh
spawn ./customtestpmd -l 2,4,6,8 -w 0000:3b:02.1 --iova-mode=va -- -i --portmask=0x1 --nb-cores=2 --forward-mode=mac --port-topology=loop --no-mlockall
EAL: Detected 52 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:3b:02.1 on NUMA socket 0
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
Interactive-mode selected
Set mac packet forwarding mode
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: E2:90:22:51:90:CB
Checking link statuses...
Done
i40evf_execute_vf_cmd(): No response for 14
i40evf_config_promisc(): fail to execute command CONFIG_PROMISCUOUS_MODE
Error during enabling promiscuous mode for port 0: Resource temporarily unavailable - ignore
testpmd> port stop 0
Stopping ports...
i40evf_execute_vf_cmd(): No response for 9
i40evf_switch_queue(): fail to switch TX 0 off
i40evf_dev_tx_queue_stop(): Failed to switch TX queue 0 off
i40evf_stop_queues(): Fail to stop queue 0
i40evf_handle_aq_msg(): command mismatch,expect 11, get 14
i40evf_handle_aq_msg(): command mismatch,expect 11, get 9
Checking link statuses...
Done
testpmd> port detach 0
Removing a device...
Port was not closed
EAL: Releasing pci mapped resource for 0000:3b:02.1
EAL: Calling pci_unmap_resource for 0000:3b:02.1 at 0x4300000000
EAL: Calling pci_unmap_resource for 0000:3b:02.1 at 0x4300010000
Device of port 0 is detached
Now total ports is 0
Done
testpmd> port attach 0000:3b:02.1
Attaching a new port...
EAL: PCI device 0000:3b:02.1 on NUMA socket 0
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
Port 0 is attached. Now total ports is 1
Done
testpmd> port start 0
Configuring Port 0 (socket 0)
Port 0: C6:76:57:B8:16:7F
Checking link statuses...
Done
testpmd> start
mac packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 4 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  mac packet forwarding packets/burst=32
  nb forwarding cores=2 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=32
      RX threshold registers: pthresh=8 hthresh=8  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 40406067       RX-dropped: 118457        RX-total: 40524524
  TX-packets: 40287753       TX-dropped: 0             TX-total: 40287753
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 40406067       RX-dropped: 118457        RX-total: 40524524
  TX-packets: 40287753       TX-dropped: 0             TX-total: 40287753
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
testpmd> quit

Stopping port 0...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
Done

Bye...
+ true
+ sleep inf

Comment 4 elevin 2020-12-03 09:56:40 UTC
Client Version: 4.6.0-202005061824-29e9a33
Server Version: 4.7.0-0.nightly-2020-11-18-085225
Kubernetes Version: v1.19.2+62d8418
Repo: quay.io/openshift-kni
cnf-test Image: 4.7

===========================================================

 dpdk Validate a DPDK workload running inside a pod 
  Should forward and receive packets
  /go/src/github.com/openshift-kni/cnf-features-deploy/functests/dpdk/dpdk.go:169
STEP: Parsing output from the DPDK application

• [SLOW TEST:55.908 seconds]
dpdk
/go/src/github.com/openshift-kni/cnf-features-deploy/functests/dpdk/dpdk.go:87
  Validate a DPDK workload running inside a pod
  /go/src/github.com/openshift-kni/cnf-features-deploy/functests/dpdk/dpdk.go:168
    Should forward and receive packets
    /go/src/github.com/openshift-kni/cnf-features-deploy/functests/dpdk/dpdk.go:169

Comment 5 Carlos Goncalves 2022-08-24 12:52:49 UTC
Bulk closing of all "CNF Platform Validation" component BZs assigned to CNF Network team members and in VERIFIED status for longer than 1 month.

