Bug 1885898 - [RHEL8.3] ib_send_lat RC & ib_write_lat RC fails on QEDR IW
Summary: [RHEL8.3] ib_send_lat RC & ib_write_lat RC fails on QEDR IW
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: perftest
Version: 8.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.4
Assignee: Honggang LI
QA Contact: Brian Chae
URL:
Whiteboard:
Depends On:
Blocks: 1842946
 
Reported: 2020-10-07 09:25 UTC by Brian Chae
Modified: 2021-05-18 14:44 UTC
CC List: 6 users

Fixed In Version: perftest-4.4-7.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-18 14:44:44 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
client log for perftest result showing successful results on RHEL8.3 eSNAP runs for QEDR IW (51.37 KB, text/plain)
2020-10-07 09:25 UTC, Brian Chae
no flags Details
client log showing perftest failures with the same RHEL8.3 eSNAP3 build (48.21 KB, text/plain)
2020-10-07 09:30 UTC, Brian Chae
no flags Details

Description Brian Chae 2020-10-07 09:25:30 UTC
Created attachment 1719667 [details]
client log for perftest result showing successful results on RHEL8.3 eSNAP runs for QEDR IW

Description of problem:

The two perftests run with the "-c RC" option fail with the following error:

ERRNO: Cannot allocate memory.

This error occurs consistently, and only on the QEDR IW device. The behavior was first noticed on the RHEL 8.3 RC build RHEL-8.3.0-20200929.2; the tests used to pass on the eSNAP#3 (RHEL-8.3.0-20200909.1) and iSNAP (RHEL-8.3.0-20200701.2) runs. However, the same failures are now observed on those two builds as well.


Version-Release number of selected component (if applicable):

DISTRO=RHEL-8.3.0-20200929.2

+ [20-10-01 07:26:08] cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 (Ootpa)

+ [20-10-01 07:26:08] uname -a
Linux rdma-dev-03.lab.bos.redhat.com 4.18.0-240.el8.x86_64 #1 SMP Wed Sep 23 05:13:10 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
+ [20-10-01 07:26:08] cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-240.el8.x86_64 root=UUID=18b21d2b-587e-4557-9f99-6e775ff7846a ro console=tty0 rd_NO_PLYMOUTH intel_iommu=on iommu=on crashkernel=auto resume=UUID=8f1a1816-cfa1-4d8c-a60d-b3e37600f835 console=ttyS1,115200

+ [20-10-01 07:26:08] rpm -q rdma-core linux-firmware
rdma-core-29.0-3.el8.x86_64
linux-firmware-20200619-99.git3890db36.el8.noarch
+ [20-10-01 07:26:08] tail /sys/class/infiniband/qedr0/fw_ver /sys/class/infiniband/qedr1/fw_ver
==> /sys/class/infiniband/qedr0/fw_ver <==
8. 42. 2. 0

==> /sys/class/infiniband/qedr1/fw_ver <==
8. 42. 2. 0

08:00.0 Ethernet controller: QLogic Corp. FastLinQ QL45000 Series 25GbE Controller (rev 10)
08:00.1 Ethernet controller: QLogic Corp. FastLinQ QL45000 Series 25GbE Controller (rev 10)

How reproducible:

100%

Steps to Reproduce:
1. setup RDMA server and client hosts with above software and QEDR IW devices
+ [20-10-01 07:26:08] ibstat
CA 'qedr0'
	CA type: FastLinQ QL1656 RoCE
	Number of ports: 1
	Firmware version: 8. 42. 2. 0
	Hardware version: 0x1
	Node GUID: 0x020e1efffed41dfe
	System image GUID: 0x020e1efffed41dfe
	Port 1:
		State: Down
		Physical state: Disabled
		Rate: 2.5
		Base lid: 0
		LMC: 0
		SM lid: 0
		Capability mask: 0x00000000
		Port GUID: 0x020e1efffed41dfe
		Link layer: Ethernet
+ [20-10-01 07:26:08] ibstatus
Infiniband device 'qedr0' port 1 status:
	default gid:	 fe80:0000:0000:0000:020e:1eff:fed4:1dfe
	base lid:	 0x0
	sm lid:		 0x0
	state:		 1: DOWN
	phys state:	 3: Disabled
	rate:		 2.5 Gb/sec (1X SDR)
	link_layer:	 Ethernet

Infiniband device 'qedr1' port 1 status:
	default gid:	 000e:1ed4:1dff:0000:0000:0000:0000:0000
	base lid:	 0x0
	sm lid:		 0x0
	state:		 4: ACTIVE
	phys state:	 5: LinkUp
	rate:		 25 Gb/sec (1X EDR)
	link_layer:	 Ethernet

+ [20-10-01 07:26:08] ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: lom_1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 14:18:77:32:69:34 brd ff:ff:ff:ff:ff:ff
    inet 10.16.45.170/21 brd 10.16.47.255 scope global dynamic noprefixroute lom_1
       valid_lft 84877sec preferred_lft 84877sec
    inet6 2620:52:0:102f:1618:77ff:fe32:6934/64 scope global dynamic noprefixroute 
       valid_lft 2591913sec preferred_lft 604713sec
    inet6 fe80::1618:77ff:fe32:6934/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: lom_2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 14:18:77:32:69:35 brd ff:ff:ff:ff:ff:ff
4: qede_off: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 00:0e:1e:d4:1d:fe brd ff:ff:ff:ff:ff:ff
5: qede_iw: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP group default qlen 1000
    link/ether 00:0e:1e:d4:1d:ff brd ff:ff:ff:ff:ff:ff
    inet 172.31.50.103/24 brd 172.31.50.255 scope global dynamic noprefixroute qede_iw
       valid_lft 2071sec preferred_lft 2071sec
    inet6 fe80::20e:1eff:fed4:1dff/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
6: qede_iw.52@qede_iw: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0e:1e:d4:1d:ff brd ff:ff:ff:ff:ff:ff
    inet 172.31.52.103/24 brd 172.31.52.255 scope global dynamic noprefixroute qede_iw.52
       valid_lft 2074sec preferred_lft 2074sec
    inet6 fe80::20e:1eff:fed4:1dff/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
7: qede_iw.51@qede_iw: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000
    link/ether 00:0e:1e:d4:1d:ff brd ff:ff:ff:ff:ff:ff
    inet 172.31.51.103/24 brd 172.31.51.255 scope global dynamic noprefixroute qede_iw.51
       valid_lft 2075sec preferred_lft 2075sec
    inet6 fe80::20e:1eff:fed4:1dff/64 scope link noprefixroute 

2. On the server side, issue the following perftest command

timeout 3m ib_send_lat -a -c RC -F -d qedr1 -p 1 -F -R

3.On the client side, issue the following command

timeout 3m ib_send_lat -a -c RC -F -d qedr1 -p 1 -F -R 172.31.50.102



Actual results:

client side:


timeout 3m ib_send_lat -a -c RC -F -d qedr1 -p 1 -F -R 172.31.50.102
[qelr_create_qp:747]create qp: failed on ibv_cmd_create_qp with 22
Couldn't create rdma old QP - Cannot allocate memory
Requested QP size might be too big. Try reducing TX depth and/or inline size.
Current TX depth is 1 and inline size is 236 .
Unable to create QP.
Failed to create QP.
ERRNO: Cannot allocate memory.
Failed to handle RDMA CM event.
ERRNO: Cannot allocate memory.
Failed to connect RDMA CM events.
ERRNO: Cannot allocate memory.
timeout: the monitored command dumped core



Expected results:

timeout 3m ib_send_lat -a -c RC -F -d qedr1 -p 1 -F -R -I 128 172.31.50.102
---------------------------------------------------------------------------------------
                    Send Latency Test
 Dual-port       : OFF		Device         : qedr1
 Number of qps   : 1		Transport type : IW
 Connection type : RC		Using SRQ      : OFF
 PCIe relax order: ON
 ibv_wr* API     : OFF
 TX depth        : 1
 Mtu             : 4096[B]
 Link type       : Ethernet
 GID index       : 0
 Max inline data : 128[B]
 rdma_cm QPs	 : ON
 Data ex. method : rdma_cm
---------------------------------------------------------------------------------------
 local address: LID 0000 QPN 0x0291 PSN 0x17953e
 GID: 00:14:30:212:29:255:00:00:00:00:00:00:00:00:00:00
 remote address: LID 0000 QPN 0x0291 PSN 0x2f4a6
 GID: 00:14:30:212:34:127:00:00:00:00:00:00:00:00:00:00
---------------------------------------------------------------------------------------
 #bytes #iterations    t_min[usec]    t_max[usec]  t_typical[usec]    t_avg[usec]    t_stdev[usec]   99% percentile[usec]   99.9% percentile[usec] 
 2       1000          6.79           11.30        6.94     	       7.00        	0.38   		9.57    		11.30  
 4       1000          6.77           12.85        6.98     	       7.04        	0.59   		11.20   		12.85  
 8       1000          6.78           12.80        6.94     	       6.98        	0.39   		9.41    		12.80  
 16      1000          6.77           13.21        6.93     	       7.00        	0.48   		10.07   		13.21  
 32      1000          6.80           13.12        6.96     	       7.02        	0.44   		9.71    		13.12  
 64      1000          6.90           13.52        7.04     	       7.09        	0.42   		9.27    		13.52  
 128     1000          6.97           12.89        7.09     	       7.15        	0.34   		8.81    		12.89  
 256     1000          6.10           12.29        6.29     	       6.34        	0.68   		10.69   		12.29  
 512     1000          6.20           12.47        6.38     	       6.38        	0.43   		9.48    		12.47  
 1024    1000          6.46           12.55        6.59     	       6.65        	0.48   		10.48   		12.55  
 2048    1000          7.23           11.94        7.41     	       7.39        	0.29   		8.73    		11.94  
 4096    1000          7.73           13.74        7.87     	       7.90        	0.33   		9.49    		13.74  
 8192    1000          9.29           15.55        9.46     	       9.49        	0.56   		13.45   		15.55  
 16384   1000          13.76          19.44        13.94    	       14.02       	0.62   		18.48   		19.44  
 32768   1000          19.80          25.57        19.97    	       20.00       	0.38   		22.10   		25.57  
 65536   1000          31.41          37.52        31.63    	       31.72       	0.66   		35.73   		37.52  
 131072  1000          54.66          60.88        54.87    	       54.94       	0.64   		59.59   		60.88  
 262144  1000          101.05         106.20       101.30   	       101.34      	0.44   		103.84  		106.20 
 524288  1000          193.84         199.73       194.13   	       194.24      	0.59   		197.90  		199.73 
 1048576 1000          379.42         386.05       379.89   	       379.94      	0.49   		383.21  		386.05 
 2097152 1000          750.90         757.39       751.41   	       751.47      	0.51   		754.87  		757.39 
 4194304 1000          1495.11        1501.08      1495.80  	       1495.91     	0.64   		1499.68 		1501.08
 8388608 1000          2981.55        2987.35      2982.55  	       2982.62     	0.58   		2986.25 		2987.35
---------------------------------------------------------------------------------------


Additional info:

These two commands used to run fine on RHEL-8.3.0-20200909.1 and prior builds, but the same failures are now observed on them as well. Refer to the attached test logs from the RHEL-8.3.0-20200909.1 run.

Comment 1 Brian Chae 2020-10-07 09:30:36 UTC
Created attachment 1719668 [details]
client log showing perftest failures with the same RHEL8.3 eSNAP3 build

Comment 2 Honggang LI 2020-10-09 08:03:58 UTC
https://beaker-archive.host.prod.eng.bos.redhat.com/beaker-logs/2020/10/46033/4603393/8867554/116191934/541668587/dmesg.log

[  307.409450] iwpm_register_pid: Unable to send a nlmsg (client = 2)
[  350.785441] [qedr_check_qp_attrs:1202(qedr1)]create qp: unsupported inline data size=0xec requested (max_inline=0x80)
[  350.898268] traps: ib_send_lat[30527] trap stack segment ip:7f374b039651 sp:7ffcf1ae0130 error:0 in librdmacm.so.1.2.29.0[7f374b030000+18000]
[  599.459433] [qedr_check_qp_attrs:1202(qedr1)]create qp: unsupported inline data size=0xdc requested (max_inline=0x80)
[  599.572268] traps: ib_write_lat[30751] trap stack segment ip:7ff66d1a6651 sp:7ffd27bf1ed0 error:0 in librdmacm.so.1.2.29.0[7ff66d19d000+18000]

Comment 3 Honggang LI 2020-10-09 08:12:24 UTC
It seems the perftest beaker case had been updated. The flag '-I 128' had been removed.


 657 + [20-09-09 19:34:32] timeout 3m ib_send_lat -a -c RC -F -d qedr1 -p 1 -F -R -I 128 172.31.50.102                                            
 658 ---------------------------------------------------------------------------------------                                                      
 659                     Send Latency Test                                                                                                        
 660  Dual-port       : OFF          Device         : qedr1                                                                                       
 661  Number of qps   : 1            Transport type : IW                                                                                          
 662  Connection type : RC           Using SRQ      : OFF                                                                                         
 663  PCIe relax order: ON                                                                                                                        
 664  ibv_wr* API     : OFF                                                                                                                       
 665  TX depth        : 1                                                                                                                         
 666  Mtu             : 4096[B]                                                                                                                   
 667  Link type       : Ethernet                                                                                                                  
 668  GID index       : 0                                                                                                                         
 669  Max inline data : 128[B]                                                                                                                    
 670  rdma_cm QPs     : ON                                                                                                                        
 671  Data ex. method : rdma_cm                                                                                                                   
 672 ---------------------------------------------------------------------------------------                                                      
 673  local address: LID 0000 QPN 0x0291 PSN 0x17953e                                                                                             
 674  GID: 00:14:30:212:29:255:00:00:00:00:00:00:00:00:00:00                                                                                      
 675  remote address: LID 0000 QPN 0x0291 PSN 0x2f4a6                                                                                             
 676  GID: 00:14:30:212:34:127:00:00:00:00:00:00:00:00:00:00                                                                                      
 677 ---------------------------------------------------------------------------------------                                                      
678  #bytes #iterations    t_min[usec]    t_max[usec]  t_typical[usec]    t_avg[usec]    t_stdev[usec]   99% percentile[usec]   99.9% percentile[usec]
 679  2       1000          6.79           11.30        6.94                7.00             0.38            9.57                    11.30        
 680  4       1000          6.77           12.85        6.98                7.04             0.59            11.20                   12.85 



655 + [20-10-06 14:54:52] timeout 3m ib_send_lat -a -c RC -F -d qedr1 -p 1 -F -R 172.31.50.102                                                    
656 [qelr_create_qp:747]create qp: failed on ibv_cmd_create_qp with 22                                                                            
657 Couldn't create rdma old QP - Cannot allocate memory                                                                                          
658 Requested QP size might be too big. Try reducing TX depth and/or inline size.                                                                 
659 Current TX depth is 1 and inline size is 236 .
                                             ^^^                                                                                               
660 Unable to create QP.                                                                                                                          
661 Failed to create QP.                                                                                                                          
662 ERRNO: Cannot allocate memory.                                                                                                                
663 Failed to handle RDMA CM event.                                                                                                               
664 ERRNO: Cannot allocate memory.                                                                                                                
665 Failed to connect RDMA CM events.                                                                                                             
666 ERRNO: Cannot allocate memory.              

[  350.785441] [qedr_check_qp_attrs:1202(qedr1)]create qp: unsupported inline data size=0xec requested (max_inline=0x80)
                                                                       ^^^^^^^^^^^^^^^^^^^^^

[  350.898268] traps: ib_send_lat[30527] trap stack segment ip:7f374b039651 sp:7ffcf1ae0130 error:0 in librdmacm.so.1.2.29.0[7f374b030000+18000]                                                                                                  
667 timeout: the monitored command dumped core 

0xec == 236. Invalid max inline parameter was used.
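
For illustration only (not part of the original report): a minimal C snippet that restates the mismatch above. The values are taken from the logs in this bug, and the comparison mirrors the check that qedr_check_qp_attrs() reports in dmesg.

/* Illustration only: default inline size requested by ib_send_lat vs. the
 * qedr limit reported in the dmesg output above. */
#include <stdio.h>

int main(void)
{
	unsigned requested  = 0xec; /* 236 bytes, DEF_INLINE_SEND_RC_UC */
	unsigned max_inline = 0x80; /* 128 bytes, from qedr_check_qp_attrs */

	printf("requested=%u max_inline=%u -> %s\n", requested, max_inline,
	       requested > max_inline ? "QP creation rejected" : "ok");
	return 0;
}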

Comment 4 Honggang LI 2020-10-09 09:20:50 UTC
Because the '-I 128' flag had been removed from the beaker case, the default optimal inline values were used.
Unfortunately, those defaults are not suitable for the qedr1 device on rdma-dev-02/03.

132 /* Optimal Values for Inline */  
133 #define DEF_INLINE_WRITE (220)
134 #define DEF_INLINE_SEND_RC_UC (236)  <====
135 #define DEF_INLINE_SEND_XRC (236)    <====
136 #define DEF_INLINE_SEND_UD (188)
137 #define DEF_INLINE_DC (150)

1762│         if (user_param->inline_size == DEF_INLINE) {
1763│
1764│                 if (user_param->tst ==LAT) {
1765│
1766│                         switch(user_param->verb) {
1767│
1768│                                 case WRITE: user_param->inline_size = (user_param->connection_type == DC)? DEF_INLINE_DC : DEF_INLINE_WRITE; break;
1769│                                 case SEND : user_param->inline_size = (user_param->connection_type == DC)? DEF_INLINE_DC : (user_param->connection_type == UD)? DEF_INLINE_SEND_UD :
1770├>                                            ((user_param->connection_type == XRC) ? DEF_INLINE_SEND_XRC : DEF_INLINE_SEND_RC_UC) ; break;
1771│                                 default   : user_param->inline_size = 0;
1772│                         }
1773│                         if (current_dev == NETXTREME)
1774│                                 user_param->inline_size = 96;
1775│                         else if (current_dev == EFA)
1776│                                 user_param->inline_size = 32;
1777│
1778│                 } else {
/usr/src/debug/perftest-4.4-3.el8.x86_64/src/perftest_parameters.c                                                                                


Hardware watchpoint 2: user_param.inline_size

Old value = -1
New value = 236
ctx_set_max_inline (context=context@entry=0x55555578c410, user_param=user_param@entry=0x7fffffffd160) at src/perftest_parameters.c:1770
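
One generic way to avoid a hard-coded default exceeding a device limit is to probe the largest inline size the device actually accepts and clamp to it. The sketch below is an assumption, not the change merged in the upstream PR referenced in comment 6; probe_max_inline is a hypothetical helper that retries QP creation with progressively smaller cap.max_inline_data values until libibverbs accepts one.

/* Hypothetical sketch (not the actual upstream fix): find the largest
 * cap.max_inline_data a device accepts by retrying QP creation with
 * progressively smaller values. */
#include <infiniband/verbs.h>

static unsigned probe_max_inline(struct ibv_pd *pd, struct ibv_cq *cq,
                                 unsigned wanted)
{
	struct ibv_qp_init_attr attr = {
		.send_cq = cq,
		.recv_cq = cq,
		.qp_type = IBV_QPT_RC,
		.cap = {
			.max_send_wr  = 1,
			.max_recv_wr  = 1,
			.max_send_sge = 1,
			.max_recv_sge = 1,
		},
	};
	unsigned size;

	for (size = wanted; size > 0; size /= 2) {
		struct ibv_qp *qp;

		attr.cap.max_inline_data = size;
		qp = ibv_create_qp(pd, &attr);
		if (qp) {
			ibv_destroy_qp(qp); /* throw-away probe QP */
			return size;        /* largest accepted value */
		}
	}
	return 0; /* fall back to no inline data */
}

With a probe like this, the 236-byte default would be cut down until the device accepts it (118 after one halving in this sketch) instead of aborting QP creation.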

Comment 5 Honggang LI 2020-10-09 09:49:54 UTC
(In reply to Honggang LI from comment #2)
> https://beaker-archive.host.prod.eng.bos.redhat.com/beaker-logs/2020/10/
> 46033/4603393/8867554/116191934/541668587/dmesg.log

1788                 if (user_param->tst ==LAT) {
1789 
1790                         switch(user_param->verb) {
1791 
1792                                 case WRITE: user_param->inline_size = (user_param->connection_type == DC)? DEF_INLINE_DC : DEF_INLINE_WRITE; break;
1793                                 case SEND : user_param->inline_size = (user_param->connection_type == DC)? DEF_INLINE_DC : (user_param->connection_type == UD)? DEF_INLINE_SEND_UD :


> [  350.785441] [qedr_check_qp_attrs:1202(qedr1)]create qp: unsupported
> inline data size=0xec requested (max_inline=0x80)
> [  350.898268] traps: ib_send_lat[30527] trap stack segment ip:7f374b039651
> sp:7ffcf1ae0130 error:0 in librdmacm.so.1.2.29.0[7f374b030000+18000]

For ib_send_lat, inline_size had been set to 0xec in line 1793.

> [  599.459433] [qedr_check_qp_attrs:1202(qedr1)]create qp: unsupported
> inline data size=0xdc requested (max_inline=0x80)
> [  599.572268] traps: ib_write_lat[30751] trap stack segment ip:7ff66d1a6651
> sp:7ffd27bf1ed0 error:0 in librdmacm.so.1.2.29.0[7ff66d19d000+18000]

For ib_write_lat, inline_size had been set to DEF_INLINE_WRITE (0xdc) in line 1792.
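
For reference, 0xec = 236 (DEF_INLINE_SEND_RC_UC) and 0xdc = 220 (DEF_INLINE_WRITE); both exceed the qedr max_inline of 0x80 = 128, which is why both ib_send_lat and ib_write_lat fail to create their QPs.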

Comment 6 Honggang LI 2020-10-09 10:16:14 UTC
https://github.com/linux-rdma/perftest/pull/110

Opened this PR in upstream repo to address this issue.

Comment 7 Honggang LI 2020-10-12 00:19:24 UTC
(In reply to Honggang LI from comment #6)
> https://github.com/linux-rdma/perftest/pull/110
> 
> Opened this PR in upstream repo to address this issue.

Upstream merged this PR. Set devel+ flag.

Comment 8 Honggang LI 2020-10-14 08:25:30 UTC
perftest will be updated to the latest upstream release, v4.4-0.32.

https://github.com/linux-rdma/perftest/releases/tag/v4.4-0.32

Comment 13 Brian Chae 2020-12-01 21:57:16 UTC
The perftests ib_send_lat RC and ib_write_lat RC now run successfully, as shown below.

BUILD : RHEL-8.4.0-20201117.n.0
+ [20-11-19 13:18:22] rpm -q perftest
perftest-4.4-7.el8.x86_64

+ [20-11-19 13:19:09] timeout 3m ib_send_lat -a -c RC -d qedr1 -i 1 -F -R 172.31.50.102
---------------------------------------------------------------------------------------
                    Send Latency Test
 Dual-port       : OFF		Device         : qedr1
 Number of qps   : 1		Transport type : IW
 Connection type : RC		Using SRQ      : OFF
 PCIe relax order: ON
 ibv_wr* API     : OFF
 TX depth        : 1
 Mtu             : 4096[B]
 Link type       : Ethernet
 GID index       : 0
 Max inline data : 128[B]
 rdma_cm QPs	 : ON
 Data ex. method : rdma_cm
---------------------------------------------------------------------------------------
 local address: LID 0000 QPN 0x0291 PSN 0x1dcb33
 GID: 00:14:30:212:29:255:00:00:00:00:00:00:00:00:00:00
 remote address: LID 0000 QPN 0x0291 PSN 0x47a464
 GID: 00:14:30:212:34:127:00:00:00:00:00:00:00:00:00:00
---------------------------------------------------------------------------------------
 #bytes #iterations    t_min[usec]    t_max[usec]  t_typical[usec]    t_avg[usec]    t_stdev[usec]   99% percentile[usec]   99.9% percentile[usec] 
 2       1000          6.78           12.40        6.96     	       7.00        	0.37   		8.93    		12.40  
 4       1000          6.78           12.58        6.95     	       7.01        	0.45   		9.95    		12.58  
 8       1000          6.78           12.92        6.99     	       7.09        	0.71   		11.70   		12.92  
 16      1000          6.78           12.28        6.96     	       7.04        	0.59   		11.60   		12.28  
 32      1000          6.81           12.41        6.99     	       7.10        	0.66   		11.21   		12.41  
 64      1000          6.89           12.44        7.03     	       7.09        	0.40   		9.37    		12.44  
 128     1000          6.97           13.03        7.15     	       7.19        	0.50   		10.12   		13.03  
 256     1000          6.10           11.22        6.28     	       6.28        	0.45   		9.60    		11.22  
 512     1000          6.19           11.94        6.37     	       6.38        	0.46   		9.23    		11.94  
 1024    1000          6.46           12.21        6.64     	       6.66        	0.52   		11.11   		12.21  
 2048    1000          7.23           13.32        7.41     	       7.46        	0.60   		11.69   		13.32  
 4096    1000          7.74           12.80        7.92     	       7.96        	0.55   		11.81   		12.80  
 8192    1000          9.28           15.39        9.47     	       9.55        	0.68   		13.93   		15.39  
 16384   1000          13.75          21.61        13.94    	       14.09       	0.80   		18.84   		21.61  
 32768   1000          19.80          24.85        20.00    	       20.05       	0.51   		23.48   		24.85  
 65536   1000          31.41          37.31        31.62    	       31.66       	0.45   		35.07   		37.31  
 131072  1000          54.64          60.64        54.88    	       54.96       	0.62   		59.16   		60.64  
 262144  1000          101.08         106.87       101.30   	       101.35      	0.44   		103.99  		106.87 
 524288  1000          193.84         199.97       194.14   	       194.25      	0.61   		198.46  		199.97 
 1048576 1000          379.45         385.24       379.90   	       379.95      	0.48   		382.75  		385.24 
 2097152 1000          750.87         756.66       751.41   	       751.44      	0.40   		752.63  		756.66 
 4194304 1000          1495.11        1502.81      1495.79  	       1495.91     	0.74   		1500.16 		1502.81
 8388608 1000          2981.63        2988.71      2982.57  	       2982.66     	0.68   		2986.76 		2988.71
---------------------------------------------------------------------------------------
+ [20-11-19 13:19:32] RQA_check_result -r 0 -t 'ib_send_lat RC'



+ [20-11-19 13:19:33] timeout 3m ib_write_bw -a -c RC -d qedr1 -i 1 -F -R 172.31.50.102
---------------------------------------------------------------------------------------
                    RDMA_Write BW Test
 Dual-port       : OFF		Device         : qedr1
 Number of qps   : 1		Transport type : IW
 Connection type : RC		Using SRQ      : OFF
 PCIe relax order: ON
 ibv_wr* API     : OFF
 TX depth        : 128
 CQ Moderation   : 100
 Mtu             : 4096[B]
 Link type       : Ethernet
 GID index       : 0
 Max inline data : 0[B]
 rdma_cm QPs	 : ON
 Data ex. method : rdma_cm
---------------------------------------------------------------------------------------
 local address: LID 0000 QPN 0x0291 PSN 0xd64872
 GID: 00:14:30:212:29:255:00:00:00:00:00:00:00:00:00:00
 remote address: LID 0000 QPN 0x0291 PSN 0xefd942
 GID: 00:14:30:212:34:127:00:00:00:00:00:00:00:00:00:00
---------------------------------------------------------------------------------------
 #bytes     #iterations    BW peak[MB/sec]    BW average[MB/sec]   MsgRate[Mpps]
 2          5000             2.17               2.17   		   1.138841
 4          5000             4.32               4.30   		   1.127106
 8          5000             8.62               8.59   		   1.126137
 16         5000             17.26              17.22  		   1.128691
 32         5000             34.18              33.97  		   1.113094
 64         5000             69.42              68.98  		   1.130109
 128        5000             139.22             138.93 		   1.138081
 256        5000             274.07             273.73 		   1.121205
 512        5000             551.86             551.56 		   1.129586
 1024       5000             1082.19            1081.88		   1.107849
 2048       5000             1958.63            1957.06		   1.002013
 4096       5000             2437.75            2435.73		   0.623547
 8192       5000             2416.06            2416.03		   0.309252
 16384      5000             2624.63            2624.56		   0.167972
 32768      5000             2625.36            2625.31		   0.084010
 65536      5000             2681.40            2681.37		   0.042902
 131072     5000             2710.40            2710.37		   0.021683
 262144     5000             2725.14            2725.12		   0.010900
 524288     5000             2732.58            2732.57		   0.005465
 1048576    5000             2736.28            2736.28		   0.002736
 2097152    5000             2738.13            2738.13		   0.001369
 4194304    5000             2738.17            2738.17		   0.000685
 8388608    5000             2738.64            2738.64		   0.000342
---------------------------------------------------------------------------------------
+ [20-11-19 13:20:09] RQA_check_result -r 0 -t 'ib_write_bw RC'


Also, all QEDR IW perftests passed:

Test results for perftest on rdma-dev-03:
4.18.0-249.el8.x86_64, rdma-core-32.0-1.el8, qede, iw, & qedr1
    Result | Status | Test
  ---------+--------+------------------------------------
      PASS |      0 | ib_read_bw RC
      PASS |      0 | ib_read_lat RC
      PASS |      0 | ib_send_bw RC
      PASS |      0 | ib_send_lat RC
      PASS |      0 | ib_write_bw RC
      PASS |      0 | ib_write_lat RC

Checking for failures and known issues:
  no test failures

Comment 15 errata-xmlrpc 2021-05-18 14:44:44 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (RDMA stack bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1594

