Bug 2223303
| Summary: | BCM57504 card: there is always packet loss and the final throughput cannot be reached when running the OVS DPDK PVP performance case on RHEL 9.2 | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Fast Datapath | Reporter: | liting <tli> |
| Component: | openvswitch2.17 | Assignee: | OVS Triage <ovs-triage> |
| Status: | CLOSED NOTABUG | QA Contact: | liting <tli> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | FDP 23.C | CC: | ctrautma, jhsiao, ralongi |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2023-07-31 02:08:50 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
The testpmd-as-switch case also shows this issue: https://beaker.engineering.redhat.com/jobs/8079517

Running the testpmd-as-switch case more times, it sometimes got about 10 Mpps with a loss rate of 0.002:
https://beaker.engineering.redhat.com/jobs/8083290
https://beaker-archive.hosts.prod.psi.bos.redhat.com/beaker-logs/2023/07/80832/8083290/14250948/163037522/bnxt_25.html

After directly connecting anl154 and anl151 (bypassing the Netscout) and turning off LLDP with the commands below, the OVS DPDK vhost-user PVP case was rerun and produced a result (see the job links after the commands):
mstconfig -y -d 0000:b1:00.0 s LLDP_NB_TX_MODE_P1=0
mstconfig -y -d 0000:b1:00.0 s LLDP_NB_TX_MODE_P2=0
mstconfig -y -d 0000:b1:00.0 s LLDP_NB_DCBX_P1=0
mstconfig -y -d 0000:b1:00.0 s LLDP_NB_RX_MODE_P1=0
mstconfig -y -d 0000:b1:00.1 s LLDP_NB_TX_MODE_P1=0
mstconfig -y -d 0000:b1:00.1 s LLDP_NB_TX_MODE_P2=0
mstconfig -y -d 0000:b1:00.1 s LLDP_NB_DCBX_P1=0
mstfwreset -y -d 0000:b1:00.0 reset
mstfwreset -y -d 0000:b1:00.1 reset
https://beaker.engineering.redhat.com/jobs/8111917
https://beaker-archive.hosts.prod.psi.bos.redhat.com/beaker-logs/2023/07/81119/8111917/14300823/163619324/bnxt_25.html
Because a normal performance result was obtained after turning off LLDP, this bug is closed.
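For convenience, the LLDP-disable sequence above could be scripted roughly as follows. This is a sketch only: it applies the same four NVconfig settings to both ports (the original run did not set LLDP_NB_RX_MODE_P1 on 0000:b1:00.1), and the PCI addresses match this particular test host.

# Sketch: disable the LLDP/DCBX NVconfig options on both PFs, then reset
# the firmware so the new configuration takes effect.
# PCI addresses are host-specific; adjust as needed.
for dev in 0000:b1:00.0 0000:b1:00.1; do
    mstconfig -y -d "$dev" set LLDP_NB_TX_MODE_P1=0 LLDP_NB_TX_MODE_P2=0 \
        LLDP_NB_DCBX_P1=0 LLDP_NB_RX_MODE_P1=0
    mstfwreset -y -d "$dev" reset
done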
Description of problem:
BCM57504 card: there is always packet loss and the final throughput cannot be reached when running the OVS DPDK PVP performance case.

Version-Release number of selected component (if applicable):
openvswitch2.17-2.17.0-70.el9fdp.x86_64.rpm
kernel-5.14.0-284.18.1.el9_2

How reproducible:

Steps to Reproduce:
Run the OVS DPDK PVP performance case, for example the 1-queue 2-PMD case:

    Bridge ovsbr0
        datapath_type: netdev
        Port dpdk0
            Interface dpdk0
                type: dpdk
                options: {dpdk-devargs="0000:86:00.0", n_rxq="1", n_rxq_desc="1024", n_txq_desc="1024"}
        Port ovsbr0
            Interface ovsbr0
                type: internal
        Port vhost1
            Interface vhost1
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser/vhost1"}
        Port vhost0
            Interface vhost0
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser/vhost0"}
        Port dpdk1
            Interface dpdk1
                type: dpdk
                options: {dpdk-devargs="0000:86:00.1", n_rxq="1", n_rxq_desc="1024", n_txq_desc="1024"}
    ovs_version: "2.17.6"

ovs config:
{dpdk-init="true", dpdk-lcore-mask="0x1", dpdk-socket-mem="0,4096", pmd-cpu-mask=a000000000a000000000, userspace-tso-enable="false", vhost-iommu-support="true"}

Then start testpmd inside the guest:
dpdk-testpmd -l 0-2 -n 1 --socket-mem 1024 -- -i --forward-mode=io --burst=32 --rxd=8192 --txd=8192 --max-pkt-len=9600 --mbuf-size=9728 --nb-cores=2 --rxq=1 --txq=1 --mbcache=512 --auto-start

Use TRex to send traffic:
./binary-search.py --traffic-generator=trex-txrx --frame-size=64 --num-flows=1024 --max-loss-pct=0 --search-runtime=10 --validation-runtime=60 --rate-tolerance=10 --runtime-tolerance=10 --rate=25 --rate-unit=% --duplicate-packet-failure=retry-to-fail --negative-packet-loss=retry-to-fail --warmup-trial --warmup-trial-runtime=10 --rate=25 --rate-unit=% --one-shot=0 --use-src-ip-flows=1 --use-dst-ip-flows=1 --use-src-mac-flows=1 --use-dst-mac-flows=1 --send-teaching-measurement --send-teaching-warmup --teaching-warmup-packet-type=generic --teaching-warmup-packet-rate=1000 --use-device-stats

Actual results:
The final throughput cannot be obtained because of the packet loss.

The 1-queue 2-PMD 64-byte case with loss rate 0 was run twice; neither run produced a result. The two jobs are as follows:
1-queue 2-PMD: no result after 197 traffic trials: https://beaker.engineering.redhat.com/jobs/8065043
1-queue 2-PMD: no result after 307 traffic trials: https://beaker.engineering.redhat.com/jobs/8070974

After changing the loss rate to 0.002, the 1-queue 4-PMD viommu 64-byte case got 6.3 Mpps:
https://beaker.engineering.redhat.com/jobs/8071321
https://beaker-archive.hosts.prod.psi.bos.redhat.com/beaker-logs/2023/07/80713/8071321/14233010/162918551/bnxt_25.html

With loss rate 0.002, the 1-queue 2-PMD viommu 64-byte case still got no result after 163 traffic trials:
https://beaker.engineering.redhat.com/jobs/8073352

With loss rate 0.002, the 1-queue 4-PMD 128-byte case got 5.4 Mpps and the 256-byte case got 4.8 Mpps, while the 1500-byte case got no result after 340 traffic trials:
https://beaker.engineering.redhat.com/jobs/8073498

Expected results:
Normal throughput when running the OVS DPDK performance case.

Additional info:
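The bridge and other_config dump in the steps above would typically be produced by ovs-vsctl commands along the following lines. This is a sketch only; the exact sequence used by the test harness is not shown in this report, and the PCI addresses, CPU masks, and vhost socket paths are simply copied from the dump above.

# Sketch: global DPDK settings matching the "ovs config" dump above.
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true \
    other_config:dpdk-lcore-mask=0x1 other_config:dpdk-socket-mem=0,4096 \
    other_config:pmd-cpu-mask=a000000000a000000000 \
    other_config:userspace-tso-enable=false other_config:vhost-iommu-support=true
# Userspace (netdev) bridge with two physical DPDK ports and two vhost-user client ports.
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:86:00.0 options:n_rxq=1 options:n_rxq_desc=1024 options:n_txq_desc=1024
ovs-vsctl add-port ovsbr0 dpdk1 -- set Interface dpdk1 type=dpdk \
    options:dpdk-devargs=0000:86:00.1 options:n_rxq=1 options:n_rxq_desc=1024 options:n_txq_desc=1024
ovs-vsctl add-port ovsbr0 vhost0 -- set Interface vhost0 type=dpdkvhostuserclient \
    options:vhost-server-path=/tmp/vhostuser/vhost0
ovs-vsctl add-port ovsbr0 vhost1 -- set Interface vhost1 type=dpdkvhostuserclient \
    options:vhost-server-path=/tmp/vhostuser/vhost1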