Bug 1494424
| Summary: | The pvp_tput throughput test measured 0 throughput when SR-IOV was enabled on an i40e NIC and the VF was bound to DPDK | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Fast Datapath | Reporter: | liting <tli> |
| Component: | openvswitch2.11 | Assignee: | Aaron Conole <aconole> |
| Status: | CLOSED WORKSFORME | QA Contact: | Jean-Tsung Hsiao <jhsiao> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | ||
| Version: | FDP 19.C | CC: | aconole, ailan, atragler, ctrautma, fbaudin, fhallal, hewang, jhsiao, marjones, qding, ralongi, tli |
| Target Milestone: | --- | ||
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2020-03-17 17:31:07 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Description
liting
2017-09-22 08:54:32 UTC
Could this be related to bug 1489263? I tested OVS Vanilla with i40evf host loopback with no issues. If I try i40evf with OVS/DPDK host loopback (openvswitch-2.7.2-10.git20170914.el7fdp.x86_64) binding to vfio-pci, I get error messages when adding the devices to the bridge:

```
2017-10-03T16:31:50Z|00072|dpdk|ERR|PMD: i40evf_dev_configure(): VF can't disable HW CRC Strip
2017-10-03T16:31:50Z|00073|netdev_dpdk|WARN|Interface dpdk0 eth_dev setup error Invalid argument
2017-10-03T16:31:50Z|00074|netdev_dpdk|ERR|Interface dpdk0(rxq:1 txq:3) configure error: Invalid argument
2017-10-03T16:31:50Z|00075|dpif_netdev|ERR|Failed to set interface dpdk0 new configuration
2017-10-03T16:31:50Z|00076|bridge|WARN|could not add network device dpdk0 to ofproto (No such device)
2017-10-03T16:31:50Z|00077|dpdk|ERR|PMD: i40evf_dev_configure(): VF can't disable HW CRC Strip
2017-10-03T16:31:50Z|00078|netdev_dpdk|WARN|Interface dpdk1 eth_dev setup error Invalid argument
2017-10-03T16:31:50Z|00079|netdev_dpdk|ERR|Interface dpdk1(rxq:1 txq:3) configure error: Invalid argument
2017-10-03T16:31:50Z|00080|dpif_netdev|ERR|Failed to set interface dpdk1 new configuration
2017-10-03T16:31:50Z|00081|bridge|WARN|could not add network device dpdk1 to ofproto (No such device)
2017-10-03T16:31:50Z|00082|dpdk|ERR|PMD: i40evf_dev_configure(): VF can't disable HW CRC Strip
2017-10-03T16:31:50Z|00083|netdev_dpdk|WARN|Interface dpdk0 eth_dev setup error Invalid argument
2017-10-03T16:31:50Z|00084|netdev_dpdk|ERR|Interface dpdk0(rxq:1 txq:3) configure error: Invalid argument
2017-10-03T16:31:50Z|00085|dpif_netdev|ERR|Failed to set interface dpdk0 new configuration
2017-10-03T16:31:50Z|00086|bridge|WARN|could not add network device dpdk0 to ofproto (No such device)
```

I tried testpmd host loopback with DPDK 16.11.2 with no success: 0 traffic passed. I tried testpmd host loopback with 17.08 and noticed the following errors...
```
[DEBUG] 2017-10-03 13:07:38,313 : (src.dpdk.testpmd_proc) - cmd : /bin/bash -c "sudo -E /usr/bin/testpmd -l 5,7,9 -n 4 --socket-mem 1024,1024 -- -i"
EAL: Detected 48 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:04:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:04:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:82:02.0 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: using IOMMU type 1 (Type 1)
EAL: PCI device 0000:82:0a.0 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176, socket=0
USER1: create a new mbuf pool <mbuf_pool_socket_1>: n=163456, size=2176, socket=1
Configuring Port 0 (socket 1)
i40evf_execute_vf_cmd(): No response for 27
i40evf_enable_vlan_strip(): Failed to execute command of VIRTCHNL_OP_ENABLE_VLAN_STRIPPING
Port 0: 32:EA:BD:50:0A:6F
Configuring Port 1 (socket 1)
i40evf_execute_vf_cmd(): No response for 27
i40evf_enable_vlan_strip(): Failed to execute command of VIRTCHNL_OP_ENABLE_VLAN_STRIPPING
Port 1: 3E:B1:DF:69:09:56
Checking link statuses...
Done
```

Let us know if you need more info. This was tested on the latest 7.4 kernel.

Possibly related to: https://mail.openvswitch.org/pipermail/ovs-dev/2017-October/339555.html

Will provide a test RPM today/tomorrow.

I updated the XXV NIC firmware to version 6.80 and ran the same case (both host and guest use RHEL 7.6; DPDK is dpdk-18.11.2-1.el7_6); testpmd failed to start inside the guest.
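As background for the runs above and below: they all assume SR-IOV VFs created on the i40e PF and bound to vfio-pci before OVS-DPDK or testpmd can use them. A minimal sketch of that host-side setup follows; the PF name, PCI addresses, and VF count are hypothetical examples, not values taken from this report:

```shell
# Hypothetical host-side SR-IOV setup for this kind of PVP test:
#
#   echo 2 > /sys/bus/pci/devices/0000:07:00.0/sriov_numvfs   # create 2 VFs on the PF
#   modprobe vfio-pci                                         # load the VFIO driver
#   dpdk-devbind.py --bind=vfio-pci 0000:07:02.0 0000:07:0a.0 # bind both VFs to DPDK
#
# The sriov_numvfs control file is derived from the PF's PCI address:
pf=0000:07:00.0
echo "/sys/bus/pci/devices/${pf}/sriov_numvfs"
# prints: /sys/bus/pci/devices/0000:07:00.0/sriov_numvfs
```

`dpdk-devbind.py --status` can then be used to confirm the VFs show up under "Network devices using DPDK-compatible driver" before starting ovs-vswitchd or testpmd.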
```
[root@dell-per730-52 ~]# ethtool -i p5p1
driver: i40e
version: 2.3.2-k
firmware-version: 6.80 0x80003d05 1.2007.0
expansion-rom-version:
bus-info: 0000:07:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
[root@dell-per730-52 ~]# lspci -s 0000:07:00.0
07:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
```

testpmd failed to start inside the guest as follows:

```
[root@localhost bin]# [DEBUG] 2019-08-13 00:10:23,026 : (qemu_pci_passthrough) - vnf_0_cmd : ./testpmd -l 0,1,2 -n 4 --socket-mem 1024 --legacy-mem -- --burst=64 -i --rxd=2048 --txd=2048 --nb-cores=2 --rxq=1 --txq=1 --disable-rss
./testpmd -l 0,1,2 -n 4 --socket-mem 1024 --legacy-mem -- --burst=64 -i --rxd=2048 --txd=2048 --nb-cores=2 --rxq=1 --txq=1 --disable-rss
EAL: Detected 3 lcore(s)
EAL: Detected 1 NUMA nodes
net_mlx5: cannot load glue library: /lib64/libmlx5.so.1: version `MLX5_1.6' not found (required by /usr/lib64/dpdk-pmds-glue/librte_pmd_mlx5_glue.so.18.11.0)
net_mlx5: cannot initialize PMD due to missing run-time dependency on rdma-core libraries (libibverbs, libmlx5)
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
i40evf_check_api_version(): fail to execute command OP_VERSION
i40evf_init_vf(): check_api version failed
i40evf_dev_init(): Init vf failed
EAL: Releasing pci mapped resource for 0000:02:00.0
EAL: Calling pci_unmap_resource for 0000:02:00.0 at 0x940000000
EAL: Calling pci_unmap_resource for 0000:02:00.0 at 0x940010000
EAL: Requested device 0000:02:00.0 cannot be used
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
i40evf_check_api_version(): fail to execute command OP_VERSION
i40evf_init_vf(): check_api version failed
i40evf_dev_init(): Init vf failed
EAL: Releasing pci mapped resource for 0000:03:00.0
EAL: Calling pci_unmap_resource for 0000:03:00.0 at 0x940014000
EAL: Calling pci_unmap_resource for 0000:03:00.0 at 0x940024000
EAL: Requested device 0000:03:00.0 cannot be used
testpmd: No probed ethernet devices
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
```

I have an XXV710 with 6.01 firmware. The PvP 64-byte 0-loss SR-IOV test using the Xena GUI was successful: I got 36.6 Mpps with a 600-second search/validation and 3 iterations. The key was setting spoofchk off for both VFs. No one has mentioned this setting for this bug.

I tested the XXV710 with firmware version 6.01 and Xena as the packet generator, and the result is PASS: 36 Mpps. Moreover, it failed when the XXV710's firmware was not 6.01; I tried 6.80 and 7.10 and both failed.
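A sketch of the spoofchk workaround reported above: with MAC spoof checking enabled on the PF, frames a VF transmits with a source MAC the PF did not assign can be silently dropped, which shows up as 0 throughput. The PF name below is an illustrative example, not a value from this report:

```shell
# Disable MAC spoof checking on both VFs (run on the host, against the PF):
#
#   ip link set p5p1 vf 0 spoofchk off
#   ip link set p5p1 vf 1 spoofchk off
#
# `ip link show p5p1` should then list each VF with "spoof checking off".
# A quick check against a sample line of that output:
sample='vf 0     link/ether 32:ea:bd:50:0a:6f brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto'
case "$sample" in
  *'spoof checking off'*) echo 'spoofchk disabled' ;;
  *)                      echo 'spoofchk still on' ;;
esac
# prints: spoofchk disabled
```

The setting does not survive a PF driver reload, so it has to be reapplied after recreating the VFs.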