Bug 2064298

Summary: [RHEL8.6] various mvapich2 benchmarks fail on mlx4 IB/RoCE devices
Product: Red Hat Enterprise Linux 8
Component: mvapich2
Version: 8.6
Reporter: Brian Chae <bchae>
Assignee: Kamal Heib <kheib>
QA Contact: Infiniband QE <infiniband-qe>
CC: hwkernel-mgr, kheib, rdma-dev-team
Status: ASSIGNED
Severity: unspecified
Priority: unspecified
Target Milestone: rc
Hardware: Unspecified
OS: Unspecified
Type: Bug
Bug Blocks: 2026666    

Description Brian Chae 2022-03-15 14:06:06 UTC
Description of problem:

Various mvapich2 benchmarks fail with return codes 255, 252, or 1 on mlx4 IB devices - sometimes all of them fail and sometimes only some.


Version-Release number of selected component (if applicable):

DISTRO=RHEL-8.6.0-20220308.2

+ [22-03-10 22:58:48] cat /etc/redhat-release
Red Hat Enterprise Linux release 8.6 Beta (Ootpa)

+ [22-03-10 22:58:48] uname -a
Linux rdma-perf-01.rdma.lab.eng.rdu2.redhat.com 4.18.0-369.el8.x86_64 #1 SMP Mon Feb 21 10:56:06 EST 2022 x86_64 x86_64 x86_64 GNU/Linux

+ [22-03-10 22:58:48] cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-369.el8.x86_64 root=/dev/mapper/rhel_rdma--perf--01-root ro intel_idle.max_cstate=0 intremap=no_x2apic_optout processor.max_cstate=0 intel_iommu=on iommu=on console=tty0 rd_NO_PLYMOUTH intel_idle.max_cstate=0 intremap=no_x2apic_optout processor.max_cstate=0 crashkernel=auto resume=/dev/mapper/rhel_rdma--perf--01-swap rd.lvm.lv=rhel_rdma-perf-01/root rd.lvm.lv=rhel_rdma-perf-01/swap console=ttyS1,115200n81

+ [22-03-10 22:58:48] rpm -q rdma-core linux-firmware
rdma-core-37.2-1.el8.x86_64
linux-firmware-20220210-106.git6342082c.el8.noarch

+ [22-03-10 22:58:48] tail /sys/class/infiniband/mlx4_0/fw_ver
2.42.5000

How reproducible:
100%

Steps to Reproduce:
1. bring up the RDMA hosts with the RHEL 8.6 build
2. set up the RDMA hosts for the mvapich2 benchmark tests
3. run one of the mvapich2 benchmarks with the "mpirun" command, as follows (a sketch of the assumed hostfile is shown after the command):

 timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 mpitests-IMB-MPI1 PingPong -time 1.5
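
A minimal sketch of the reproduction, assuming /root/hfile_one_core simply lists the two lab hosts seen in the output, one per line (its actual contents are not captured in this report), and using PingPong and Sendrecv as the two benchmarks shown failing in this bug:

# assumed hostfile contents - one entry per host, per the "one_core" naming
cat > /root/hfile_one_core <<'EOF'
rdma-perf-00.rdma.lab.eng.rdu2.redhat.com
rdma-perf-01.rdma.lab.eng.rdu2.redhat.com
EOF

# run a couple of the affected benchmarks and record their exit codes
for bm in PingPong Sendrecv; do
    timeout --preserve-status --kill-after=5m 3m \
        mpirun -hostfile /root/hfile_one_core -np 2 mpitests-IMB-MPI1 "$bm" -time 1.5
    echo "$bm exit code: $?"
done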


Actual results:

+ [22-03-10 22:59:06] timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 mpitests-IMB-MPI1 PingPong -time 1.5
[rdma-perf-00.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][rdma_param_handle_heterogeneity] All nodes involved in the job were detected to be homogeneous in terms of processors and interconnects. Setting MV2_HOMOGENEOUS_CLUSTER=1 can improve job startup performance on such systems. The following link has more details on enhancing job startup performance. http://mvapich.cse.ohio-state.edu/performance/job-startup/.
[rdma-perf-00.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][rdma_param_handle_heterogeneity] To suppress this warning, please set MV2_SUPPRESS_JOB_STARTUP_PERFORMANCE_WARNING to 1
#----------------------------------------------------------------
#    Intel(R) MPI Benchmarks 2021.3, MPI-1 part
#----------------------------------------------------------------
# Date                  : Thu Mar 10 22:59:07 2022
# Machine               : x86_64
# System                : Linux
# Release               : 4.18.0-369.el8.x86_64
# Version               : #1 SMP Mon Feb 21 10:56:06 EST 2022
# MPI Version           : 3.1
# MPI Thread Environment: 


# Calling sequence was: 

# mpitests-IMB-MPI1 PingPong -time 1.5 

# Minimum message length in bytes:   0
# Maximum message length in bytes:   4194304
#
# MPI_Datatype                   :   MPI_BYTE 
# MPI_Datatype for reductions    :   MPI_FLOAT 
# MPI_Op                         :   MPI_SUM  
# 
# 

# List of Benchmarks to run:

# PingPong

#---------------------------------------------------
# Benchmarking PingPong 
# #processes = 2 
#---------------------------------------------------
       #bytes #repetitions      t[usec]   Mbytes/sec
            0         1000         1.75         0.00
            1         1000         2.07         0.48
            2         1000         1.89         1.06
            4         1000         2.06         1.94
            8         1000         1.89         4.23
           16         1000         1.94         8.25
           32         1000         1.92        16.70
           64         1000         1.96        32.58
          128         1000         2.08        61.44
          256         1000         2.84        90.15
          512         1000         3.00       170.93
         1024         1000         3.38       303.03
         2048         1000         4.30       476.41
[rdma-perf-00.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] Send desc error in msg to 1, wc_opcode=0
[rdma-perf-00.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] Msg from 1: wc.status=12 (transport retry counter exceeded), wc.wr_id=0x55a96d566ae0, wc.opcode=0, vbuf->phead->type=0 = MPIDI_CH3_PKT_EAGER_SEND
[rdma-perf-00.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][mv2_print_wc_status_error] IBV_WC_RETRY_EXC_ERR: This event is generated when a sender is unable to receive feedback from the receiver. This means that either the receiver just never ACKs sender messages in a specified time period, or it has been disconnected or it is in a bad state which prevents it from responding. If this happens when sending the first message, usually it means that the QP connection attributes are wrong or the remote side is not in a state that it can respond to messages. If this happens after sending the first message, usually it means that the remote QP is not available anymore or that there is congestion in the network preventing the packets from reaching on time. Relevant to: RC or DC QPs.
[rdma-perf-00.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] src/mpid/ch3/channels/mrail/src/gen2/ibv_channel_manager.c:499: [] Got completion with error 12, vendor code=0x81, dest rank=1
: No data available (61)
[mpiexec.lab.eng.rdu2.redhat.com] HYDU_sock_write (utils/sock/sock.c:294): write error (Bad file descriptor)
[mpiexec.lab.eng.rdu2.redhat.com] HYD_pmcd_pmiserv_send_signal (pm/pmiserv/pmiserv_cb.c:177): unable to write data to proxy
[mpiexec.lab.eng.rdu2.redhat.com] ui_cmd_cb (pm/pmiserv/pmiserv_pmci.c:79): unable to send signal downstream
[mpiexec.lab.eng.rdu2.redhat.com] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
[mpiexec.lab.eng.rdu2.redhat.com] HYD_pmci_wait_for_completion (pm/pmiserv/pmiserv_pmci.c:198): error waiting for event
[mpiexec.lab.eng.rdu2.redhat.com] main (ui/mpich/mpiexec.c:340): process manager error waiting for completion
+ [22-03-10 23:02:06] __MPI_check_result 255 mpitests-mvapich2 IMB-MPI1 PingPong mpirun /root/hfile_one_core


Expected results:
Normal completion of the benchmark tests with the expected statistics output.

Additional info:
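
The failing completion in the output above is IBV_WC_RETRY_EXC_ERR (wc.status=12, vendor code 0x81): the RC QP exhausted its transport retries without receiving an ACK from the peer. A suggested way to narrow down whether this is specific to the mvapich2 RC path or reproducible on a bare RC QP between the same two hosts is to use the standard libibverbs-utils/perftest tools (assuming those packages are installed; device name mlx4_0 as reported above):

# verify the port state on both hosts
ibstat mlx4_0

# bare RC ping-pong outside MPI
# on rdma-perf-01: ibv_rc_pingpong -d mlx4_0
# on rdma-perf-00: ibv_rc_pingpong -d mlx4_0 rdma-perf-01.rdma.lab.eng.rdu2.redhat.com

# RC send bandwidth via perftest
# on rdma-perf-01: ib_send_bw -d mlx4_0
# on rdma-perf-00: ib_send_bw -d mlx4_0 rdma-perf-01.rdma.lab.eng.rdu2.redhat.com

If the retry-exceeded error also appears on these bare RC transfers, the problem sits below MPI (mlx4 driver, firmware 2.42.5000, or the fabric); if they pass, it points at the mvapich2 gen2/mrail channel.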

Comment 1 Brian Chae 2023-07-25 18:12:45 UTC
On RHEL 9.3, similar mvapich2 benchmark failures were observed on mlx4 RoCE devices, specifically the MT27500/MT27520 family (ConnectX-3/ConnectX-3 Pro).

An example failure:

+ [23-07-21 10:05:22] timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 mpitests-IMB-MPI1 Sendrecv -time 1.5
[rdma-perf-00.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][rdma_param_handle_heterogeneity] All nodes involved in the job were detected to be homogeneous in terms of processors and interconnects. Setting MV2_HOMOGENEOUS_CLUSTER=1 can improve job startup performance on such systems. The following link has more details on enhancing job startup performance. http://mvapich.cse.ohio-state.edu/performance/job-startup/.
[rdma-perf-00.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][rdma_param_handle_heterogeneity] To suppress this warning, please set MV2_SUPPRESS_JOB_STARTUP_PERFORMANCE_WARNING to 1
#----------------------------------------------------------------
#    Intel(R) MPI Benchmarks 2021.3, MPI-1 part
#----------------------------------------------------------------
# Date                  : Fri Jul 21 10:05:23 2023
# Machine               : x86_64
# System                : Linux
# Release               : 5.14.0-339.el9.x86_64
# Version               : #1 SMP PREEMPT_DYNAMIC Thu Jul 13 07:33:32 EDT 2023
# MPI Version           : 3.1
# MPI Thread Environment: 


# Calling sequence was: 

# mpitests-IMB-MPI1 Sendrecv -time 1.5 

# Minimum message length in bytes:   0
# Maximum message length in bytes:   4194304
#
# MPI_Datatype                   :   MPI_BYTE 
# MPI_Datatype for reductions    :   MPI_FLOAT 
# MPI_Op                         :   MPI_SUM  
# 
# 

# List of Benchmarks to run:

# Sendrecv

#-----------------------------------------------------------------------------
# Benchmarking Sendrecv 
# #processes = 2 
#-----------------------------------------------------------------------------
       #bytes #repetitions  t_min[usec]  t_max[usec]  t_avg[usec]   Mbytes/sec
            0         1000         1.97         1.98         1.97         0.00
            1         1000         2.05         2.05         2.05         0.97
            2         1000         2.07         2.07         2.07         1.93
            4         1000         2.05         2.05         2.05         3.91
            8         1000         2.05         2.05         2.05         7.79
           16         1000         2.07         2.07         2.07        15.43
           32         1000         2.10         2.10         2.10        30.52
           64         1000         2.12         2.13         2.12        60.09
          128         1000         2.23         2.23         2.23       114.58
          256         1000         3.29         3.29         3.29       155.74
          512         1000         3.39         3.39         3.39       301.85
         1024         1000         3.83         3.83         3.83       535.03
         2048         1000         4.76         4.77         4.76       859.51
[rdma-perf-00.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] Send desc error in msg to 1, wc_opcode=0
[rdma-perf-00.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] Msg from 1: wc.status=12 (transport retry counter exceeded), wc.wr_id=0x5627244dbae0, wc.opcode=0, vbuf->phead->type=0 = MPIDI_CH3_PKT_EAGER_SEND
[rdma-perf-00.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][mv2_print_wc_status_error] IBV_WC_RETRY_EXC_ERR: This event is generated when a sender is unable to receive feedback from the receiver. This means that either the receiver just never ACKs sender messages in a specified time period, or it has been disconnected or it is in a bad state which prevents it from responding. If this happens when sending the first message, usually it means that the QP connection attributes are wrong or the remote side is not in a state that it can respond to messages. If this happens after sending the first message, usually it means that the remote QP is not available anymore or that there is congestion in the network preventing the packets from reaching on time. Relevant to: RC or DC QPs.
[rdma-perf-00.rdma.lab.eng.rdu2.redhat.com:mpi_rank_0][handle_cqe] src/mpid/ch3/channels/mrail/src/gen2/ibv_channel_manager.c:497: [] Got completion with error 12, vendor code=0x81, dest rank=1
: Cannot allocate memory (12)
[rdma-perf-01.rdma.lab.eng.rdu2.redhat.com:mpi_rank_1][handle_cqe] Send desc error in msg to 0, wc_opcode=0
[rdma-perf-01.rdma.lab.eng.rdu2.redhat.com:mpi_rank_1][handle_cqe] Msg from 0: wc.status=12 (transport retry counter exceeded), wc.wr_id=0x558f8e40c050, wc.opcode=0, vbuf->phead->type=0 = MPIDI_CH3_PKT_EAGER_SEND
[rdma-perf-01.rdma.lab.eng.rdu2.redhat.com:mpi_rank_1][mv2_print_wc_status_error] IBV_WC_RETRY_EXC_ERR: This event is generated when a sender is unable to receive feedback from the receiver. This means that either the receiver just never ACKs sender messages in a specified time period, or it has been disconnected or it is in a bad state which prevents it from responding. If this happens when sending the first message, usually it means that the QP connection attributes are wrong or the remote side is not in a state that it can respond to messages. If this happens after sending the first message, usually it means that the remote QP is not available anymore or that there is congestion in the network preventing the packets from reaching on time. Relevant to: RC or DC QPs.
[rdma-perf-01.rdma.lab.eng.rdu2.redhat.com:mpi_rank_1][handle_cqe] src/mpid/ch3/channels/mrail/src/gen2/ibv_channel_manager.c:497: [] Got completion with error 12, vendor code=0x81, dest rank=0
: Cannot allocate memory (12)
+ [23-07-21 10:06:22] __MPI_check_result 252 mpitests-mvapich2 IMB-MPI1 Sendrecv mpirun /root/hfile_one_core
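
For the RoCE case in this comment the same isolation approach applies, except that the RC QP must use a RoCE GID. The device name (mlx4_0) and GID index (0) below are assumptions; the actual values should be taken from the host's GID table:

# inspect the GID table and the RoCE type of each entry (port 1 shown)
grep . /sys/class/infiniband/mlx4_0/ports/1/gids/* 2>/dev/null
grep . /sys/class/infiniband/mlx4_0/ports/1/gid_attrs/types/* 2>/dev/null

# bare RC ping-pong over RoCE with an explicit GID index
# on rdma-perf-01: ibv_rc_pingpong -d mlx4_0 -g 0
# on rdma-perf-00: ibv_rc_pingpong -d mlx4_0 -g 0 rdma-perf-01.rdma.lab.eng.rdu2.redhat.com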