Bug 1832709
| Summary: | perftest commands fail with 8.3 builds | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Selvin Xavier (Broadcom) <sxavier> |
| Component: | perftest | Assignee: | Honggang LI <honli> |
| Status: | CLOSED ERRATA | QA Contact: | Afom T. Michael <tmichael> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 8.3 | CC: | bchae, dledford, hwkernel-mgr, rdma-dev-team, tmichael, zguo |
| Target Milestone: | rc | Keywords: | Regression |
| Target Release: | 8.3 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | perftest-4.4-2.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-11-04 01:38:09 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Selvin Xavier (Broadcom)
2020-05-07 07:11:42 UTC
(In reply to Selvin Xavier (Broadcom) from comment #0)
> ib commands fail with the following error.
>
> [root@rdma-dev-26 perftest (master)]$ ib_send_bw -d bnxt_re0 -x 3

In-box perftest-4.4-1.el8 was based on upstream release 4.4-0.23.g89e176a, which is bad on branch rdma-core-dc-support.

> Cloned the perftest from upstream git and compiled it, and the test
> passes with the compiled perftest. Here is the output

[705ed2e7b981f3b5830efd63f9f2b5c4ea643b80] Made ibv_wr_api default during the build

This commit on branch rdma-core-dc-support introduces the bug.

Hi, Selvin

It seems bnxt does not support the ibv_wr API. Is that right?

[test@rdma-dev-26 ~]$ rpm -qf $(which ib_send_bw)
perftest-4.4-1.el8.x86_64

[test@rdma-dev-26 ~]$ ib_send_bw -d bnxt_re0 -x 3 --use_old_post_send 172.31.40.125
---------------------------------------------------------------------------------------
                    Send BW Test
 Dual-port       : OFF          Device         : bnxt_re0
 Number of qps   : 1            Transport type : IB
 Connection type : RC           Using SRQ      : OFF
 PCIe relax order: ON
 TX depth        : 128
 CQ Moderation   : 1
 Mtu             : 4096[B]
 Link type       : Ethernet
 GID index       : 3
 Max inline data : 0[B]
 rdma_cm QPs     : OFF
 Data ex. method : Ethernet
---------------------------------------------------------------------------------------
 local address: LID 0000 QPN 0x1010 PSN 0xcd6ac9
 GID: 254:128:00:00:00:00:00:00:02:10:247:255:254:234:205:144
 remote address: LID 0000 QPN 0x1010 PSN 0x233aec
 GID: 254:128:00:00:00:00:00:00:02:10:247:255:254:234:206:160
---------------------------------------------------------------------------------------
 #bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]   MsgRate[Mpps]
 65536      1000           11579.92           11579.73             0.185276
---------------------------------------------------------------------------------------

(In reply to Honggang LI from comment #2)
> Hi, Selvin
> It seems bnxt does not support the ibv_wr API. Is that right?

Yes, bnxt_re doesn't support this.
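Since bnxt_re lacks ibv_wr_* support, one hedged workaround is to retry with perftest's legacy post-send path when the default path fails. This wrapper script is hypothetical (not from the report); the device name, GID index, and peer IP are copied from the runs above, and the script simply skips when perftest is not installed:

```shell
# Hypothetical wrapper: try the default (ibv_wr_*) post-send path first,
# then fall back to --use_old_post_send for providers such as bnxt_re
# that do not implement the ibv_wr_* API. DEV, GID index (-x 3), and
# PEER are taken from the runs quoted in this report.
DEV=bnxt_re0
PEER=172.31.40.125

if ! command -v ib_send_bw >/dev/null 2>&1; then
    # perftest is not installed on this host; nothing to run.
    echo "ib_send_bw not installed; skipping"
    status=0
else
    # Retry with the legacy post-send path only if the default path fails.
    ib_send_bw -d "$DEV" -x 3 "$PEER" \
        || ib_send_bw -d "$DEV" -x 3 --use_old_post_send "$PEER"
    status=$?
fi
echo "wrapper exit status: $status"
```

With perftest-4.4-2.el8 (which makes the legacy path available again by default) the fallback should no longer be needed; the sketch documents the interim workaround only.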
> [...snip: output quoted from comment #2...]

*** Bug 1835013 has been marked as a duplicate of this bug. ***

https://github.com/linux-rdma/perftest/pull/88

Upstream opened this PR to address the issue.

Tests with a recent RHEL-8.3 build, which includes perftest-4.4-2, passed as shown below.
[root@rdma-dev-26 perftest]$ cat /etc/redhat-release Red Hat Enterprise Linux release 8.3 Beta (Ootpa) [root@rdma-dev-26 perftest]$ uname -r 4.18.0-205.el8.x86_64 [root@rdma-dev-26 perftest]$ rpm -qa | grep -Ei "rdma|perftest|verb|infiniband" | grep -v kernel rdma-core-devel-29.0-2.el8.x86_64 librdmacm-utils-29.0-2.el8.x86_64 libibverbs-29.0-2.el8.x86_64 librdmacm-29.0-2.el8.x86_64 perftest-4.4-2.el8.x86_64 libibverbs-utils-29.0-2.el8.x86_64 rdma-core-29.0-2.el8.x86_64 infiniband-diags-29.0-2.el8.x86_64 [root@rdma-dev-26 perftest]$ ip a s bnxt_roce 3: bnxt_roce: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 00:0a:f7:ea:cd:90 brd ff:ff:ff:ff:ff:ff inet 172.31.40.126/24 brd 172.31.40.255 scope global dynamic noprefixroute bnxt_roce valid_lft 3286sec preferred_lft 3286sec inet6 fe80::20a:f7ff:feea:cd90/64 scope link noprefixroute valid_lft forever preferred_lft forever [root@rdma-dev-26 perftest]$ ssh rdma-dev-25 "ip a s bnxt_roce" 2: bnxt_roce: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000 link/ether 00:0a:f7:ea:ce:a0 brd ff:ff:ff:ff:ff:ff inet 172.31.40.125/24 brd 172.31.40.255 scope global dynamic noprefixroute bnxt_roce valid_lft 3285sec preferred_lft 3285sec inet6 fe80::20a:f7ff:feea:cea0/64 scope link noprefixroute valid_lft forever preferred_lft forever [root@rdma-dev-26 perftest]$ ib_send_bw -d bnxt_re0 -x 3 172.31.40.125 --------------------------------------------------------------------------------------- Send BW Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: ON ibv_wr* API : OFF TX depth : 128 CQ Moderation : 1 Mtu : 4096[B] Link type : Ethernet GID index : 3 Max inline data : 0[B] rdma_cm QPs : OFF Data ex. 
method : Ethernet --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x1023 PSN 0x8a8494 GID: 254:128:00:00:00:00:00:00:02:10:247:255:254:234:205:144 remote address: LID 0000 QPN 0x1023 PSN 0xdf084d GID: 254:128:00:00:00:00:00:00:02:10:247:255:254:234:206:160 --------------------------------------------------------------------------------------- #bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps] 65536 1000 11580.09 11579.90 0.185278 --------------------------------------------------------------------------------------- [root@rdma-dev-26 perftest]$ echo $? 0 [root@rdma-dev-26 perftest]$ ib_send_bw -a -c RC -F -d bnxt_re0 -p 1 -F 172.31.40.125 --------------------------------------------------------------------------------------- Send BW Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: ON ibv_wr* API : OFF TX depth : 128 CQ Moderation : 100 Mtu : 4096[B] Link type : Ethernet GID index : 7 Max inline data : 0[B] rdma_cm QPs : OFF Data ex. 
method : Ethernet --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x1024 PSN 0x605c7b GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:45:126 remote address: LID 0000 QPN 0x1024 PSN 0x350a22 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:45:125 --------------------------------------------------------------------------------------- #bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps] 2 1000 11.18 10.75 5.634525 4 1000 22.59 22.48 5.893671 8 1000 45.27 45.14 5.916270 16 1000 90.90 90.40 5.924762 32 1000 182.53 181.49 5.947070 64 1000 363.61 362.88 5.945405 128 1000 722.91 721.44 5.910014 256 1000 1417.79 1412.13 5.784068 512 1000 2617.45 2609.68 5.344623 1024 1000 4852.93 4779.83 4.894547 2048 1000 7561.44 7469.92 3.824598 4096 1000 10959.98 10829.60 2.772377 8192 1000 11403.00 11399.69 1.459160 16384 1000 11529.43 11527.47 0.737758 32768 1000 11605.21 11604.07 0.371330 65536 1000 11638.31 11638.18 0.186211 131072 1000 11657.84 11657.70 0.093262 262144 1000 11667.09 11666.91 0.046668 524288 1000 11671.27 11671.23 0.023342 1048576 1000 11673.01 11672.99 0.011673 2097152 1000 11674.45 11674.45 0.005837 4194304 1000 11675.06 11675.05 0.002919 8388608 1000 11675.33 11675.33 0.001459 --------------------------------------------------------------------------------------- [root@rdma-dev-26 perftest]$ echo $? 0 [root@rdma-dev-26 perftest]$ ib_send_bw -a -c RC -F -d bnxt_re0 -p 1 -F -R 172.31.40.125 --------------------------------------------------------------------------------------- Send BW Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: ON ibv_wr* API : OFF TX depth : 128 CQ Moderation : 100 Mtu : 4096[B] Link type : Ethernet GID index : 7 Max inline data : 0[B] rdma_cm QPs : ON Data ex. 
method : rdma_cm --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x1029 PSN 0x2d8404 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:45:126 remote address: LID 0000 QPN 0x1029 PSN 0xef3f8c GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:45:125 --------------------------------------------------------------------------------------- #bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps] 2 1000 9.04 8.55 4.482290 4 1000 18.68 18.67 4.894838 8 1000 37.60 37.58 4.926336 16 1000 75.08 75.01 4.915560 32 1000 148.21 145.75 4.776044 64 1000 297.39 297.19 4.869209 128 1000 599.66 598.78 4.905215 256 1000 1176.22 1175.90 4.816468 512 1000 2275.52 1347.42 2.759525 1024 1000 4413.74 4412.56 4.518459 2048 1000 7261.30 7207.98 3.690487 4096 1000 11021.94 10885.39 2.786659 8192 1000 11414.21 11412.15 1.460755 16384 1000 11543.54 11542.46 0.738717 32768 1000 11613.73 11613.03 0.371617 65536 1000 11649.20 11649.04 0.186385 131072 1000 11669.10 11668.97 0.093352 262144 1000 11676.77 11676.73 0.046707 524288 1000 11682.45 11682.39 0.023365 1048576 1000 11684.21 11684.21 0.011684 2097152 1000 11685.61 11685.59 0.005843 4194304 1000 11686.19 11686.17 0.002922 8388608 1000 11686.48 11686.48 0.001461 --------------------------------------------------------------------------------------- [root@rdma-dev-26 perftest]$ echo $? 0 [root@rdma-dev-26 perftest]$ [root@rdma-dev-26 perftest]$ ib_read_bw -a -c RC -F -d bnxt_re0 -p 1 -F 172.31.40.125 --------------------------------------------------------------------------------------- RDMA_Read BW Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: ON ibv_wr* API : OFF TX depth : 128 CQ Moderation : 100 Mtu : 4096[B] Link type : Ethernet GID index : 7 Outstand reads : 126 rdma_cm QPs : OFF Data ex. 
method : Ethernet --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x1025 PSN 0xc2696e OUT 0x7e RKey 0x02a901 VAddr 0x0014f29ab1c000 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:45:126 remote address: LID 0000 QPN 0x1025 PSN 0x9fd36e OUT 0x7e RKey 0x007f01 VAddr 0x00145fe6430000 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:45:125 --------------------------------------------------------------------------------------- #bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps] 2 1000 0.00 0.00 0.000282 4 1000 20.97 20.94 5.488103 8 1000 41.87 41.73 5.469547 16 1000 84.83 84.63 5.546095 32 1000 168.41 168.13 5.509375 64 1000 339.32 338.75 5.550012 128 1000 671.17 669.91 5.487942 256 1000 1299.43 1297.78 5.315709 512 1000 2535.81 2533.86 5.189353 1024 1000 4727.55 4719.39 4.832654 2048 1000 7813.94 7803.74 3.995513 4096 1000 10680.02 10673.81 2.732495 8192 1000 11481.29 11480.49 1.469503 16384 1000 11580.72 11579.20 0.741069 32768 1000 11625.32 11625.32 0.372010 65536 1000 11651.33 11650.90 0.186414 131072 1000 11663.37 11663.29 0.093306 262144 1000 11670.24 11670.17 0.046681 524288 1000 11672.89 11672.86 0.023346 1048576 1000 11674.13 11674.09 0.011674 2097152 1000 11674.84 11674.82 0.005837 4194304 1000 11675.24 11675.23 0.002919 8388608 1000 11675.39 11675.39 0.001459 --------------------------------------------------------------------------------------- [root@rdma-dev-26 perftest]$ echo $? 0 [root@rdma-dev-26 perftest]$ ib_read_bw -a -c RC -F -d bnxt_re0 -p 1 -F -R 172.31.40.125 --------------------------------------------------------------------------------------- RDMA_Read BW Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: ON ibv_wr* API : OFF TX depth : 128 CQ Moderation : 100 Mtu : 4096[B] Link type : Ethernet GID index : 7 Outstand reads : 126 rdma_cm QPs : ON Data ex. 
method : rdma_cm --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x1027 PSN 0x20f694 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:45:126 remote address: LID 0000 QPN 0x1027 PSN 0x903e77 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:45:125 --------------------------------------------------------------------------------------- #bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps] 2 1000 10.28 10.25 5.371893 4 1000 20.90 20.83 5.459619 8 1000 42.10 41.98 5.502374 16 1000 84.36 84.29 5.524239 32 1000 168.41 168.12 5.509034 64 1000 339.32 339.08 5.555449 128 1000 672.41 671.53 5.501143 256 1000 1301.75 1300.19 5.325592 512 1000 2544.60 2538.95 5.199772 1024 1000 4727.47 4722.99 4.836340 2048 1000 7845.29 7841.05 4.014619 4096 1000 10719.03 10714.85 2.743002 8192 1000 11492.58 11490.19 1.470745 16384 1000 11589.36 11588.63 0.741672 32768 1000 11636.96 11636.29 0.372361 65536 1000 11662.27 11662.03 0.186592 131072 1000 11674.58 11674.54 0.093396 262144 1000 11681.55 11681.39 0.046726 524288 1000 11684.08 11684.03 0.023368 1048576 1000 11685.37 11685.35 0.011685 2097152 1000 11686.07 11686.05 0.005843 4194304 1000 11686.40 11686.39 0.002922 8388608 1000 11686.59 11686.59 0.001461 --------------------------------------------------------------------------------------- [root@rdma-dev-26 perftest]$ echo $? 0 [root@rdma-dev-26 perftest]$ Test results for perftest on rdma-dev-26: 4.18.0-205.el8.x86_64, rdma-core-29.0-2.el8, bnxt, roce, & bnxt_re0 Result | Status | Test ---------+--------+------------------------------------ PASS | 0 | ib_read_bw RC PASS | 0 | ib_read_lat RC PASS | 0 | ib_send_bw RC PASS | 0 | ib_send_lat RC PASS | 0 | ib_write_bw RC PASS | 0 | ib_write_lat RC With DISTRO=RHEL-8.3.0-20200616.0, some perftest tests are failing as shown below. 
A beaker run (https://beaker.engineering.redhat.com/jobs/4391831)

Test results for perftest on rdma-dev-26: 4.18.0-214.el8.x86_64, rdma-core-29.0-3.el8, bnxt, roce, & bnxt_re0
Result | Status | Test
---------+--------+------------------------------------
PASS | 0 | ib_read_bw RC
FAIL | 17 | ib_read_lat RC
PASS | 0 | ib_send_bw RC
FAIL | 124 | ib_send_lat RC
PASS | 0 | ib_write_bw RC
PASS | 0 | ib_write_lat RC

A manual run:

Test results for perftest on rdma-dev-26: 4.18.0-214.el8.x86_64, rdma-core-29.0-3.el8, bnxt, roce, & bnxt_re0
Result | Status | Test
---------+--------+------------------------------------
FAIL | 17 | ib_read_bw RC
PASS | 0 | ib_read_lat RC
PASS | 0 | ib_send_bw RC
PASS | 0 | ib_send_lat RC
PASS | 0 | ib_write_bw RC
PASS | 0 | ib_write_lat RC

+ [20-06-30 19:44:13] timeout 3m ib_read_bw -a -c RC -F -d bnxt_re0 -p 1 -F -R 172.31.40.125
---------------------------------------------------------------------------------------
                    RDMA_Read BW Test
 Dual-port       : OFF          Device         : bnxt_re0
 Number of qps   : 1            Transport type : IB
 Connection type : RC           Using SRQ      : OFF
 PCIe relax order: ON
 ibv_wr* API     : OFF
 TX depth        : 128
 CQ Moderation   : 100
 Mtu             : 4096[B]
 Link type       : Ethernet
 GID index       : 7
 Outstand reads  : 126
 rdma_cm QPs     : ON
 Data ex. method : rdma_cm
---------------------------------------------------------------------------------------
 local address: LID 0000 QPN 0x101b PSN 0x68923b
 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:126
 remote address: LID 0000 QPN 0x101b PSN 0x3102a6
 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:125
---------------------------------------------------------------------------------------
 #bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]   MsgRate[Mpps]
 2          1000           8.24               8.14                 4.269417
 4          1000           16.74              16.61                4.353876
 8          1000           33.13              33.00                4.325730
 16         1000           68.55              68.27                4.474184
 32         1000           136.08             133.33               4.369001
 64         1000           273.79             271.72               4.451858
 128        1000           548.41             540.10               4.424469
 256        1000           1120.33            1106.85              4.533662
 512        1000           2154.88            2138.97              4.380601
 1024       1000           4150.81            4135.36              4.234611
 2048       1000           7361.73            7343.88              3.760067
 4096       1000           10488.51           10485.38             2.684257
 8192       1000           11481.32           11476.67             1.469014
 16384      1000           11586.38           11585.62             0.741480
 32768      1000           11635.37           11634.35             0.372299
 65536      1000           11660.82           11660.78             0.186573
 131072     1000           11674.23           11674.22             0.093394
 262144     1000           11681.03           11681.00             0.046724
 524288     1000           11683.80           11683.79             0.023368
 1048576    1000           650.50             639.22               0.000639
 2097152    1000           282.49             65.09                0.000033
 Completion with error at client
 Failed status 12: wr_id 0 syndrom 0x4700cc50
 scnt=228, ccnt=100
+ [20-06-30 19:45:07] RQA_check_result -r 17 -t 'ib_read_bw RC'

(In reply to Afom T. Michael from comment #9)
> 2097152    1000    282.49    65.09    0.000033
> Completion with error at client
> Failed status 12: wr_id 0 syndrom 0x4700cc50
> scnt=228, ccnt=100

It is likely a bnxt_re-specific issue. Please file a new bug.

(In reply to Honggang LI from comment #10)
> (In reply to Afom T. Michael from comment #9)
>
> > 2097152    1000    282.49    65.09    0.000033
> > Completion with error at client
> > Failed status 12: wr_id 0 syndrom 0x4700cc50
> > scnt=228, ccnt=100
>
> It is likely a bnxt_re-specific issue. Please file a new bug.

It seems like the issue is due to packet drops. Status 12 is a retry-exceeded error.
So some of the stats under /sys/class/infiniband/bnxt_re0/ports/1/hw_counters would indicate these drops. Using the -u 16 argument to ib_read_bw should make the test pass.

Afom, I agree that this is a different issue. You can create another BZ and assign it to me. We will review the host hw_counters and see the actual cause of the failure. We might also have to configure PFC on the host side to avoid the packet drops.

Moving to verified since the tests executed on a RHEL-8.3 Beta host with bnxt_re pass.

[root@rdma-dev-26 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 Beta (Ootpa)

[root@rdma-dev-26 ~]$ uname -r
4.18.0-221.el8.x86_64

[root@rdma-dev-26 ~]$ rpm -qa | grep -E "rdma|ibverbs|perftest|infiniband-diags"
perftest-4.4-2.el8.x86_64
infiniband-diags-29.0-3.el8.x86_64
rdma-core-devel-29.0-3.el8.x86_64
librdmacm-utils-29.0-3.el8.x86_64
rdma-core-29.0-3.el8.x86_64
libibverbs-29.0-3.el8.x86_64
kernel-kernel-infiniband-libibverbs-utils-0.1-38.noarch
librdmacm-29.0-3.el8.x86_64
kernel-kernel-infiniband-perftest-1.1-57.noarch
libibverbs-utils-29.0-3.el8.x86_64

[root@rdma-dev-26 ~]$ ibstatus
Infiniband device 'bnxt_re0' port 1 status:
        default gid:     fe80:0000:0000:0000:020a:f7ff:feea:cd90
        base lid:        0x0
        sm lid:          0x0
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            100 Gb/sec (4X EDR)
        link_layer:      Ethernet

[root@rdma-dev-26 ~]$ ibv_devinfo
hca_id: bnxt_re0
        transport:                      InfiniBand (0)
        fw_ver:                         214.0.194.0
        node_guid:                      020a:f7ff:feea:cd90
        sys_image_guid:                 020a:f7ff:feea:cd90
        vendor_id:                      0x14e4
        vendor_part_id:                 5652
        hw_ver:                         0x4540
        phys_port_cnt:                  1
                port:   1
                        state:          PORT_ACTIVE (4)
                        max_mtu:        4096 (5)
                        active_mtu:     4096 (5)
                        sm_lid:         0
                        port_lid:       0
                        port_lmc:       0x00
                        link_layer:     Ethernet

[root@rdma-dev-26 ~]$ lspci | grep Broadcom
[...snip...]
04:00.0 Ethernet controller: Broadcom Inc.
and subsidiaries BCM57454 NetXtreme-E 10Gb/25Gb/40Gb/50Gb/100Gb Ethernet (rev 01) [root@rdma-dev-26 ~]$ Server: [root@rdma-dev-25 perftest]$ timeout 3m ib_send_bw -a -c RC -F -d bnxt_re0 -p 1 -F -R ************************************ * Waiting for client to connect... * ************************************ --------------------------------------------------------------------------------------- Send BW Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: ON ibv_wr* API : OFF RX depth : 512 CQ Moderation : 100 Mtu : 4096[B] Link type : Ethernet GID index : 7 Max inline data : 0[B] rdma_cm QPs : ON Data ex. method : rdma_cm --------------------------------------------------------------------------------------- Waiting for client rdma_cm QP to connect Please run the same command with the IB/RoCE interface IP --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x1005 PSN 0xbfb47 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:125 remote address: LID 0000 QPN 0x1023 PSN 0x5d46ff GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:126 --------------------------------------------------------------------------------------- #bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps] 2 1000 0.00 9.08 4.761617 4 1000 0.00 19.66 5.153042 [...snip...] 4194304 1000 0.00 11687.24 0.002922 8388608 1000 0.00 11687.31 0.001461 --------------------------------------------------------------------------------------- [root@rdma-dev-25 perftest]$ echo $? 
0 [root@rdma-dev-25 perftest]$ Client: [root@rdma-dev-26 perftest]$ ib_send_bw -a -c RC -F -d bnxt_re0 -p 1 -F -R 172.31.45.125 --------------------------------------------------------------------------------------- Send BW Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: ON ibv_wr* API : OFF TX depth : 128 CQ Moderation : 100 Mtu : 4096[B] Link type : Ethernet GID index : 7 Max inline data : 0[B] rdma_cm QPs : ON Data ex. method : rdma_cm --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x1023 PSN 0x5d46ff GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:126 remote address: LID 0000 QPN 0x1005 PSN 0xbfb47 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:125 --------------------------------------------------------------------------------------- #bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps] 2 1000 9.04 8.54 4.476192 4 1000 18.86 18.84 4.937861 [...snip...] 4194304 1000 11675.03 11675.03 0.002919 8388608 1000 11675.36 11675.36 0.001459 --------------------------------------------------------------------------------------- [root@rdma-dev-26 perftest]$ echo $? 0 [root@rdma-dev-26 perftest]$ Issue mentioned on comment #11 will be tracked on https://bugzilla.redhat.com/show_bug.cgi?id=1866984 Full output of test: [root@rdma-dev-26 perftest]$ ib_read_bw -a -c RC -F -d bnxt_re0 -p 1 -R 172.31.45.125 --------------------------------------------------------------------------------------- RDMA_Read BW Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: ON ibv_wr* API : OFF TX depth : 128 CQ Moderation : 100 Mtu : 4096[B] Link type : Ethernet GID index : 7 Outstand reads : 126 rdma_cm QPs : ON Data ex. 
method : rdma_cm --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x104b PSN 0x7595c4 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:126 remote address: LID 0000 QPN 0x102d PSN 0xc822b6 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:125 --------------------------------------------------------------------------------------- #bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps] 2 1000 0.00 0.00 0.000023 4 1000 20.75 20.67 5.419447 [...snip...] 4194304 1000 11675.27 11675.26 0.002919 8388608 1000 11675.45 11675.45 0.001459 --------------------------------------------------------------------------------------- [root@rdma-dev-26 perftest]$ echo $? 0 [root@rdma-dev-26 perftest]$ ib_read_lat -a -c RC -F -d bnxt_re0 -p 1 -R 172.31.45.125 --------------------------------------------------------------------------------------- RDMA_Read Latency Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: ON ibv_wr* API : OFF TX depth : 1 Mtu : 4096[B] Link type : Ethernet GID index : 7 Outstand reads : 126 rdma_cm QPs : ON Data ex. method : rdma_cm --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x104d PSN 0xce782b GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:126 remote address: LID 0000 QPN 0x102f PSN 0x2b88ba GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:125 --------------------------------------------------------------------------------------- #bytes #iterations t_min[usec] t_max[usec] t_typical[usec] t_avg[usec] t_stdev[usec] 99% percentile[usec] 99.9% percentile[usec] 2 1000 7.11 2105440.03 7.29 44343.40 302254.01 2105215.04 2105440.03 4 1000 7.31 8.40 7.44 7.43 0.09 7.68 8.40 [...snip...] 
4194304 1000 350.74 352.07 351.02 351.00 0.17 351.42 352.07 8388608 1000 693.32 694.63 693.59 693.56 0.16 693.95 694.63 --------------------------------------------------------------------------------------- [root@rdma-dev-26 perftest]$ echo $? 0 [root@rdma-dev-26 perftest]$ ib_send_bw -a -c RC -F -d bnxt_re0 -p 1 -R 172.31.45.125 --------------------------------------------------------------------------------------- Send BW Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: ON ibv_wr* API : OFF TX depth : 128 CQ Moderation : 100 Mtu : 4096[B] Link type : Ethernet GID index : 7 Max inline data : 0[B] rdma_cm QPs : ON Data ex. method : rdma_cm --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x104f PSN 0xb547a2 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:126 remote address: LID 0000 QPN 0x1031 PSN 0xed8726 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:125 --------------------------------------------------------------------------------------- #bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps] 2 1000 10.51 10.09 5.291853 4 1000 21.45 21.14 5.541919 [...snip...] 4194304 1000 1036.45 1034.69 0.000259 8388608 1000 1046.28 1030.09 0.000129 --------------------------------------------------------------------------------------- [root@rdma-dev-26 perftest]$ echo $? 0 [root@rdma-dev-26 perftest]$ ib_send_lat -a -c RC -F -d bnxt_re0 -p 1 -R 172.31.45.125 --------------------------------------------------------------------------------------- Send Latency Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: ON ibv_wr* API : OFF TX depth : 1 Mtu : 4096[B] Link type : Ethernet GID index : 7 Max inline data : 96[B] rdma_cm QPs : ON Data ex. 
method : rdma_cm --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x1051 PSN 0x253a85 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:126 remote address: LID 0000 QPN 0x1033 PSN 0x17594d GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:125 --------------------------------------------------------------------------------------- #bytes #iterations t_min[usec] t_max[usec] t_typical[usec] t_avg[usec] t_stdev[usec] 99% percentile[usec] 99.9% percentile[usec] 2 1000 3.71 4.66 3.78 3.78 0.07 4.06 4.66 4 1000 3.70 4.85 3.78 3.77 0.08 3.96 4.85 [...snip...] 4194304 1000 347.88 349.51 348.05 348.06 0.10 348.30 349.51 8388608 1000 690.40 697.40 690.52 690.55 0.22 691.20 697.40 --------------------------------------------------------------------------------------- [root@rdma-dev-26 perftest]$ echo $? 0 [root@rdma-dev-26 perftest]$ ib_write_bw -a -c RC -F -d bnxt_re0 -p 1 -R 172.31.45.125 --------------------------------------------------------------------------------------- RDMA_Write BW Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: ON ibv_wr* API : OFF TX depth : 128 CQ Moderation : 100 Mtu : 4096[B] Link type : Ethernet GID index : 7 Max inline data : 0[B] rdma_cm QPs : ON Data ex. method : rdma_cm --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x1053 PSN 0x9f5156 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:126 remote address: LID 0000 QPN 0x1035 PSN 0x98c39e GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:125 --------------------------------------------------------------------------------------- #bytes #iterations BW peak[MB/sec] BW average[MB/sec] MsgRate[Mpps] 2 5000 9.65 9.47 4.964491 4 5000 19.57 19.49 5.110027 [...snip...] 
4194304 5000 11675.49 11675.48 0.002919 8388608 5000 11675.55 11675.54 0.001459 --------------------------------------------------------------------------------------- [root@rdma-dev-26 perftest]$ echo $? 0 [root@rdma-dev-26 perftest]$ ib_write_lat -a -c RC -F -d bnxt_re0 -p 1 -R 172.31.45.125 --------------------------------------------------------------------------------------- RDMA_Write Latency Test Dual-port : OFF Device : bnxt_re0 Number of qps : 1 Transport type : IB Connection type : RC Using SRQ : OFF PCIe relax order: OFF ibv_wr* API : OFF TX depth : 1 Mtu : 4096[B] Link type : Ethernet GID index : 7 Max inline data : 96[B] rdma_cm QPs : ON Data ex. method : rdma_cm --------------------------------------------------------------------------------------- local address: LID 0000 QPN 0x1057 PSN 0x3f971a GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:126 remote address: LID 0000 QPN 0x1039 PSN 0x702810 GID: 00:00:00:00:00:00:00:00:00:00:255:255:172:31:43:125 --------------------------------------------------------------------------------------- #bytes #iterations t_min[usec] t_max[usec] t_typical[usec] t_avg[usec] t_stdev[usec] 99% percentile[usec] 99.9% percentile[usec] 2 1000 3.52 5.34 3.53 3.54 0.06 3.88 5.34 4 1000 3.51 3.93 3.53 3.53 0.03 3.71 3.93 [...snip...] 4194304 1000 347.80 348.77 347.95 347.96 0.07 348.16 348.77 8388608 1000 690.27 691.51 690.36 690.37 0.07 690.71 691.51 --------------------------------------------------------------------------------------- [root@rdma-dev-26 perftest]$ echo $? 0 [root@rdma-dev-26 perftest]$ Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (rdma-core bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2020:4456 |
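For reference, the hw_counters review suggested in the comments could be scripted roughly as follows. This is a sketch, not part of the original report: the sysfs path is taken from the discussion above, but the drop/discard/retry name pattern is an assumption, since counter names vary by provider. On a host without a bnxt_re device the loop finds nothing and reports 0.

```shell
# Sketch: sum drop/discard/retry-related hardware counters for bnxt_re0
# port 1, to check whether packet loss explains a "Failed status 12"
# (retry exceeded) completion. Counter names are provider-specific, so
# the case patterns below are assumptions, not a fixed interface.
dir=/sys/class/infiniband/bnxt_re0/ports/1/hw_counters
total=0
for f in "$dir"/*; do
    [ -f "$f" ] || continue            # glob matched nothing, or not a file
    case "${f##*/}" in
        *drop*|*discard*|*retry*)
            v=$(cat "$f" 2>/dev/null) || v=0
            total=$((total + v))
            ;;
    esac
done
echo "drop/discard/retry counter sum: $total"
```

A nonzero and growing sum during a failing run would support the packet-drop theory and the suggestion to enable PFC or raise the QP timeout with -u 16.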