Bug 2223892 - BCM57504 card: there is always packet loss and the final throughput cannot be reached when running the performance case on rhel8.6
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux Fast Datapath
Classification: Red Hat
Component: openvswitch2.17
Version: FDP 23.C
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: OVS Triage
QA Contact: liting
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-07-19 08:13 UTC by liting
Modified: 2023-08-01 02:38 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-01 02:38:41 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker FD-3038 0 None None None 2023-07-19 08:13:42 UTC

Description liting 2023-07-19 08:13:29 UTC
Description of problem:
BCM57504 card: there is always packet loss and the final throughput cannot be reached when running the performance case on rhel8.6

Version-Release number of selected component (if applicable):
kernel-4.18.0-372.64.1.el8_6
openvswitch2.17-2.17.0-88.el8fdp.x86_64
dpdk-21.11-3.el8.x86_64

How reproducible:


Steps to Reproduce:
Run the ovs dpdk pvp performance case, for example the 1queue 2pmd case:
    Bridge ovsbr0
        datapath_type: netdev
        Port dpdk0
            Interface dpdk0
                type: dpdk
                options: {dpdk-devargs="0000:86:00.0", n_rxq="1", n_rxq_desc="1024", n_txq_desc="1024"}
        Port ovsbr0
            Interface ovsbr0
                type: internal
        Port vhost1
            Interface vhost1
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser/vhost1"}
        Port vhost0
            Interface vhost0
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser/vhost0"}
        Port dpdk1
            Interface dpdk1
                type: dpdk
                options: {dpdk-devargs="0000:86:00.1", n_rxq="1", n_rxq_desc="1024", n_txq_desc="1024"}
    ovs_version: "2.17.6"
ovs config:
{dpdk-init="true", dpdk-lcore-mask="0x1", dpdk-socket-mem="0,4096", pmd-cpu-mask=a000000000a000000000, userspace-tso-enable="false", vhost-iommu-support="true"}
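The bridge layout and other_config shown above can be reproduced with ovs-vsctl along these lines (a sketch assuming the same PCI addresses and vhost socket paths as in this report; not taken verbatim from the test harness):

```shell
# Global DPDK settings (other_config values from this report)
ovs-vsctl set Open_vSwitch . \
    other_config:dpdk-init=true \
    other_config:dpdk-lcore-mask=0x1 \
    other_config:dpdk-socket-mem=0,4096 \
    other_config:pmd-cpu-mask=a000000000a000000000 \
    other_config:userspace-tso-enable=false \
    other_config:vhost-iommu-support=true

# Userspace datapath bridge
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev

# Physical DPDK ports (one rx queue, 1024 descriptors, per the case)
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:86:00.0 options:n_rxq=1 \
    options:n_rxq_desc=1024 options:n_txq_desc=1024
ovs-vsctl add-port ovsbr0 dpdk1 -- set Interface dpdk1 type=dpdk \
    options:dpdk-devargs=0000:86:00.1 options:n_rxq=1 \
    options:n_rxq_desc=1024 options:n_txq_desc=1024

# vhost-user client ports toward the guest
ovs-vsctl add-port ovsbr0 vhost0 -- set Interface vhost0 \
    type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser/vhost0
ovs-vsctl add-port ovsbr0 vhost1 -- set Interface vhost1 \
    type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser/vhost1
```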

And start testpmd inside guest:
dpdk-testpmd -l 0-2 -n 1 --socket-mem 1024 -- -i --forward-mode=io --burst=32 --rxd=8192 --txd=8192 --max-pkt-len=9600 --mbuf-size=9728 --nb-cores=2 --rxq=1 --txq=1 --mbcache=512  --auto-start

Use trex send traffic:
./binary-search.py --traffic-generator=trex-txrx --frame-size=64 --num-flows=1024 --max-loss-pct=0 --search-runtime=10 --validation-runtime=60 --rate-tolerance=10 --runtime-tolerance=10 --rate=25 --rate-unit=% --duplicate-packet-failure=retry-to-fail --negative-packet-loss=retry-to-fail --warmup-trial --warmup-trial-runtime=10 --rate=25 --rate-unit=% --one-shot=0 --use-src-ip-flows=1 --use-dst-ip-flows=1 --use-src-mac-flows=1 --use-dst-mac-flows=1 --send-teaching-measurement --send-teaching-warmup --teaching-warmup-packet-type=generic --teaching-warmup-packet-rate=1000 --use-device-stats
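The binary-search.py harness above hunts for the highest offered rate whose measured loss stays at or below --max-loss-pct, retrying failed trials. The core search idea can be sketched as follows (a simplified illustration, not the actual script; `measure_loss_pct` stands in for a trex trial):

```python
def find_max_rate(measure_loss_pct, max_loss_pct=0.0, lo=0.0, hi=100.0,
                  precision=0.1):
    """Binary-search the highest offered rate (% of line rate) whose
    measured loss stays at or below max_loss_pct.

    measure_loss_pct(rate) -> observed loss percentage at that rate.
    Returns the best passing rate found, or None if every trial failed.
    """
    best = None
    while hi - lo > precision:
        rate = (lo + hi) / 2.0
        if measure_loss_pct(rate) <= max_loss_pct:
            best = rate   # trial passed: search higher
            lo = rate
        else:
            hi = rate     # trial failed: back off
    return best


# Toy device model: lossless up to 42% of line rate, lossy above it.
toy_loss = lambda rate: 0.0 if rate <= 42.0 else 0.5
```

With a device that never stops losing packets (as reported here), every trial fails, `best` stays None, and the harness keeps retrying without ever converging on a final throughput.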

Actual results:
The final throughput cannot be reached due to the packet loss.
For the following 1queue 2pmd job, with the loss rate configured to 0.002, no result was obtained after 128 traffic trial runs.
https://beaker.engineering.redhat.com/jobs/8083597
For the sriov pvp case, with the loss rate configured to 0.002, no result was obtained after 62 traffic trial runs.
https://beaker.engineering.redhat.com/jobs/8083321
For another sriov pvp run, with the loss rate configured to 0.002, no result was obtained after 97 traffic trial runs.
https://beaker.engineering.redhat.com/jobs/8079520

Expected results:
Normal throughput can be reached when running the ovs dpdk performance and sriov pvp cases.

Additional info:

Comment 5 liting 2023-07-31 02:13:05 UTC
After connecting anl154 and anl151 directly to bypass the netscout, and turning off LLDP, the ovs dpdk vhostuser pvp case was rerun and produced a result.
https://beaker.engineering.redhat.com/recipes/14307463
https://beaker-archive.hosts.prod.psi.bos.redhat.com/beaker-logs/2023/07/81164/8116480/14307463/163670133/bnxt_25.html
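The exact LLDP shutdown command used in this run is not recorded in the bug. One common host-side approach, sketched here with hypothetical interface names, is to disable the lldpad agent on the test ports via lldptool:

```shell
# Hypothetical interface names; substitute the BCM57504 test ports.
for ifc in ens2f0 ens2f1; do
    lldptool set-lldp -i "$ifc" adminStatus=disabled
done
```

Note that Broadcom NICs can also emit LLDP frames from firmware, which host-side lldpad settings do not control; if firmware LLDP is enabled it must be disabled with the vendor tooling instead.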

Comment 6 liting 2023-08-01 02:38:41 UTC
A normal performance result was obtained after turning off LLDP, so this bug is closed.

