Bug 2095307

Summary: [RHEL9.1] all mvapich2 benchmarks fail with "create qp: failed on ibv_cmd_create_qp with 22" error on QEDR IW / ROCE device
Product: Red Hat Enterprise Linux 9
Component: mvapich2
Version: 9.1
Status: ASSIGNED
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Target Release: ---
Type: Bug
Regression: ---
Reporter: Brian Chae <bchae>
Assignee: Kamal Heib <kheib>
QA Contact: Infiniband QE <infiniband-qe>
CC: hwkernel-mgr, kheib, rdma-dev-team

Description Brian Chae 2022-06-09 13:56:29 UTC
Description of problem:

When tested on a QEDR iWARP or RoCE device, all mvapich2 benchmarks fail with the following error message:

"[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22"


Version-Release number of selected component (if applicable):

Clients: rdma-dev-03
Servers: rdma-dev-02

DISTRO=RHEL-9.1.0-20220609.0

+ [22-06-09 06:31:40] cat /etc/redhat-release
Red Hat Enterprise Linux release 9.1 Beta (Plow)

+ [22-06-09 06:31:40] uname -a
Linux rdma-dev-03.rdma.lab.eng.rdu2.redhat.com 5.14.0-106.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Tue Jun 7 07:22:29 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux

+ [22-06-09 06:31:40] cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-106.el9.x86_64 root=/dev/mapper/rhel_rdma--dev--03-root ro console=tty0 rd_NO_PLYMOUTH intel_iommu=on iommu=on crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel_rdma--dev--03-swap rd.lvm.lv=rhel_rdma-dev-03/root rd.lvm.lv=rhel_rdma-dev-03/swap console=ttyS1,115200

+ [22-06-09 06:31:40] rpm -q rdma-core linux-firmware
rdma-core-37.2-1.el9.x86_64
linux-firmware-20220509-126.el9.noarch

+ [22-06-09 06:31:40] tail /sys/class/infiniband/qedr0/fw_ver /sys/class/infiniband/qedr1/fw_ver
==> /sys/class/infiniband/qedr0/fw_ver <==
8.59.1.0

==> /sys/class/infiniband/qedr1/fw_ver <==
8.59.1.0

+ [22-06-09 06:31:40] lspci
+ [22-06-09 06:31:40] grep -i -e ethernet -e infiniband -e omni -e ConnectX
02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
08:00.0 Ethernet controller: QLogic Corp. FastLinQ QL45000 Series 25GbE Controller (rev 10)
08:00.1 Ethernet controller: QLogic Corp. FastLinQ QL45000 Series 25GbE Controller (rev 10)



Installed:
  mpitests-mvapich2-5.8-1.el9.x86_64         mvapich2-2.3.6-3.el9.x86_64        



How reproducible:

100%

Steps to Reproduce:
1. Bring up the RDMA hosts mentioned above with the RHEL-9.1 build.
2. Set up the RDMA hosts for the mvapich2 benchmark tests.
3. Run one of the mvapich2 benchmarks with the "mpirun" command, as follows (a sketch of the assumed hostfile layout appears after the command):

timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 mpitests-IMB-MPI1 PingPong -time 1.5
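
The contents of /root/hfile_one_core were not captured in this report; the layout below is an assumed example of the one-host-per-line format that mpirun's -hostfile option accepts, using the two test hosts named above.

rdma-dev-02.rdma.lab.eng.rdu2.redhat.com
rdma-dev-03.rdma.lab.eng.rdu2.redhat.com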

Actual results:

[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][rdma_param_handle_heterogeneity] All nodes involved in the job were detected to be homogeneous in terms of processors and interconnects. Setting MV2_HOMOGENEOUS_CLUSTER=1 can improve job startup performance on such systems. The following link has more details on enhancing job startup performance. http://mvapich.cse.ohio-state.edu/performance/job-startup/.
[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][rdma_param_handle_heterogeneity] To suppress this warning, please set MV2_SUPPRESS_JOB_STARTUP_PERFORMANCE_WARNING to 1
[create_qp:2752]create qp: failed on ibv_cmd_create_qp with 22 <<<=======
[cli_0]: aborting job:
Fatal error in PMPI_Init_thread:
Other MPI error, error stack:
MPIR_Init_thread(493)....: 
MPID_Init(419)...........: channel initialization failed
MPIDI_CH3_Init(550)......: 
MPIDI_CH3I_RDMA_init(446): 
rdma_iba_hca_init(1746)..: Failed to create qp for rank 0

+ [22-06-09 06:31:52] __MPI_check_result 143 mpitests-mvapich2 IMB-MPI1 PingPong mpirun /root/hfile_one_core

Expected results:

Normal execution of the benchmark with stats output

Additional info:

Comment 1 Brian Chae 2023-07-26 10:50:00 UTC
With a RHEL-9.3.0 build, the same return code was observed, but with a different failure cause, as shown below.

+ [23-07-25 15:34:49] timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 mpitests-IMB-MPI1 PingPong -time 1.5
[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][rdma_param_handle_heterogeneity] All nodes involved in the job were detected to be homogeneous in terms of processors and interconnects. Setting MV2_HOMOGENEOUS_CLUSTER=1 can improve job startup performance on such systems. The following link has more details on enhancing job startup performance. http://mvapich.cse.ohio-state.edu/performance/job-startup/.
[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][rdma_param_handle_heterogeneity] To suppress this warning, please set MV2_SUPPRESS_JOB_STARTUP_PERFORMANCE_WARNING to 1
Fatal error in PMPI_Init_thread:
Other MPI error, error stack:
MPIR_Init_thread(493)....: 
MPID_Init(419)...........: channel initialization failed
MPIDI_CH3_Init(601)......: 
MPIDI_CH3I_RDMA_init(446): 
rdma_iba_hca_init(1775)..: Failed to retrieve gid on rank 1

[cli_1]: aborting job:
Fatal error in PMPI_Init_thread:
Other MPI error, error stack:
MPIR_Init_thread(493)....: 
MPID_Init(419)...........: channel initialization failed
MPIDI_CH3_Init(601)......: 
MPIDI_CH3I_RDMA_init(446): 
rdma_iba_hca_init(1775)..: Failed to retrieve gid on rank 1

+ [23-07-25 15:34:49] __MPI_check_result 143 mpitests-mvapich2 IMB-MPI1 PingPong mpirun /root/hfile_one_core
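
The RHEL-9.3.0 failure mode above ("Failed to retrieve gid") points at the GID query rather than QP creation. Below is a minimal standalone sketch (C against libibverbs, not the mvapich2 code path) of the verb involved; the port number and GID index are illustrative assumptions.

/* Query GID index 0 on port 1 of the first RDMA device.
 * A failure (or an all-zero GID) here would match the
 * "Failed to retrieve gid" symptom seen with RHEL-9.3.0.
 * Port number and GID index are illustrative assumptions. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    union ibv_gid gid;
    int rc = ibv_query_gid(ctx, 1, 0, &gid);
    if (rc) {
        fprintf(stderr, "ibv_query_gid failed: %d\n", rc);
    } else {
        printf("gid[0] on port 1:");
        for (int i = 0; i < 16; i++)
            printf("%s%02x", (i && i % 2 == 0) ? ":" : " ", gid.raw[i]);
        printf("\n");
    }

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return rc ? 1 : 0;
}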