Bug 2064275 - [RHEL9.0] 1 openmpi benchmark fails on iRDMA device
Summary: [RHEL9.0] 1 openmpi benchmark fails on iRDMA device
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: openmpi
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Kamal Heib
QA Contact: Infiniband QE
URL:
Whiteboard:
Depends On:
Blocks: 2064310
 
Reported: 2022-03-15 13:23 UTC by Brian Chae
Modified: 2023-08-16 07:28 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2064310
Environment:
Last Closed:
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links:
System: Red Hat Issue Tracker | ID: RHELPLAN-115607 | Private: 0 | Priority: None | Status: None | Summary: None | Last Updated: 2022-03-15 13:40:01 UTC

Description Brian Chae 2022-03-15 13:23:54 UTC
Description of problem:

The following OpenMPI benchmark fails with a timeout when OpenMPI is tested on an iRDMA device:

      FAIL |      1 | openmpi OSU get_acc_latency mpirun one_core


Version-Release number of selected component (if applicable):

DISTRO=RHEL-9.0.0-20220313.2

+ [22-03-14 23:02:13] cat /etc/redhat-release
Red Hat Enterprise Linux release 9.0 Beta (Plow)

+ [22-03-14 23:02:13] uname -a
Linux rdma-dev-31.rdma.lab.eng.rdu2.redhat.com 5.14.0-70.1.1.el9.x86_64 #1 SMP PREEMPT Tue Mar 8 22:22:02 EST 2022 x86_64 x86_64 x86_64 GNU/Linux

+ [22-03-14 23:02:13] cat /proc/cmdline
BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.14.0-70.1.1.el9.x86_64 root=UUID=d6795fa3-ceba-43e0-9ad1-fd9a60321130 ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=UUID=51c154cc-a9a3-41d3-9cf1-50e91c549757 console=ttyS0,115200n81

+ [22-03-14 23:02:13] rpm -q rdma-core linux-firmware
rdma-core-37.2-1.el9.x86_64
linux-firmware-20220209-125.el9.noarch

+ [22-03-14 23:02:13] tail /sys/class/infiniband/irdma0/fw_ver /sys/class/infiniband/irdma1/fw_ver
==> /sys/class/infiniband/irdma0/fw_ver <==
1.52

==> /sys/class/infiniband/irdma1/fw_ver <==
1.52
+ [22-03-14 23:02:13] lspci
+ [22-03-14 23:02:13] grep -i -e ethernet -e infiniband -e omni -e ConnectX
04:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
04:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
04:00.2 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
04:00.3 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
44:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
44:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)


How reproducible:
100%

Steps to Reproduce:
1. Bring up the RDMA hosts mentioned above with the RHEL-9.0 build.
2. Set up the RDMA hosts for openmpi benchmark tests.
3. Run the following openmpi benchmark command on the client host; a wrapper sketch for checking how the run ends is included after the command:

timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 --allow-run-as-root --map-by node -mca btl_openib_warn_nonexistent_if 0 -mca btl_openib_if_include irdma0:1 -mca mtl '^psm2,psm,ofi' -mca btl '^openib' --mca mtl_base_verbose 100 --mca btl_openib_verbose 100 -mca pml ucx -mca osc ucx -x UCX_NET_DEVICES=i810_roce.45 --mca osc_ucx_verbose 100 --mca pml_ucx_verbose 100 /usr/lib64/openmpi/bin/mpitests-osu_get_acc_latency
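
Below is a minimal wrapper sketch for checking how the run ends. It reuses the hostfile, UCX device name, and benchmark path from the command above; the MPIRUN_ARGS variable and the echo messages are introduced here only for readability, and the verbose/btl MCA options from the original command are dropped for brevity. With --preserve-status, a SIGTERM kill from timeout is reported as exit status 143, which distinguishes a timeout from a benchmark error:

  #!/bin/bash
  # Sketch: rerun the failing benchmark and report whether it was killed by
  # the 3-minute timeout (SIGTERM -> exit 143 with --preserve-status) or
  # failed on its own.
  MPIRUN_ARGS="-hostfile /root/hfile_one_core -np 2 --allow-run-as-root --map-by node \
    -mca pml ucx -mca osc ucx -x UCX_NET_DEVICES=i810_roce.45"
  timeout --preserve-status --kill-after=5m 3m \
    mpirun $MPIRUN_ARGS /usr/lib64/openmpi/bin/mpitests-osu_get_acc_latency
  rc=$?
  if [ $rc -eq 143 ]; then
      echo "benchmark killed by timeout (SIGTERM)"
  elif [ $rc -ne 0 ]; then
      echo "benchmark failed with exit status $rc"
  else
      echo "benchmark completed within 3 minutes"
  fi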


Actual results:

+ [22-03-14 23:28:43] timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 --allow-run-as-root --map-by node -mca btl_openib_warn_nonexistent_if 0 -mca btl_openib_if_include irdma0:1 -mca mtl '^psm2,psm,ofi' -mca btl '^openib' --mca mtl_base_verbose 100 --mca btl_openib_verbose 100 -mca pml ucx -mca osc ucx -x UCX_NET_DEVICES=i810_roce.45 --mca osc_ucx_verbose 100 --mca pml_ucx_verbose 100 /usr/lib64/openmpi/bin/mpitests-osu_get_acc_latency
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:197 mca_pml_ucx_open: UCX version 1.11.2
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:289 mca_pml_ucx_init
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:114 Pack remote worker address, size 38
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:114 Pack local worker address, size 141
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:351 created ucp context 0x55add3790af0, worker 0x55add38b3480
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:197 mca_pml_ucx_open: UCX version 1.11.2
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:289 mca_pml_ucx_init
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:114 Pack remote worker address, size 38
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:114 Pack local worker address, size 141
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:351 created ucp context 0x5635b9edfda0, worker 0x5635ba059400
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:182 Got proc 0 address, size 141
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:411 connecting to proc. 0
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:182 Got proc 1 address, size 141
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:411 connecting to proc. 1
# OSU MPI_Get_accumulate latency Test v5.8
# Window creation: MPI_Win_create
# Synchronization: MPI_Win_lock/unlock
# Size          Latency (us)
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:182 Got proc 0 address, size 38
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:411 connecting to proc. 0
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:182 Got proc 1 address, size 38
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:411 connecting to proc. 1
1                    1215.44
2                    1056.51
4                     981.62
8                     975.97
16                    979.40
32                   1325.45
64                   1051.62
128                  1103.67
256                  1098.76
512                  1108.47
1024                  897.03
2048                 1248.53
4096                 1275.95
8192                 1458.40
16384                1584.87
32768                1609.64
65536                1744.95
131072               1733.73
262144               1942.46
524288               2341.88
1048576              3303.85
mpirun: Forwarding signal 18 to job
2097152              5394.73
+ [22-03-14 23:31:47] __MPI_check_result 1 mpitests-openmpi OSU /usr/lib64/openmpi/bin/mpitests-osu_get_acc_latency mpirun /root/hfile_one_core
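
The last size printed before the kill is 2097152 bytes, so the run appears to be exceeding the 3-minute budget on the largest messages rather than hanging outright. A hedged follow-up sketch, assuming a 10-minute budget is enough to finish and that this OSU build accepts the usual -m [MIN:]MAX option for capping the message size (both are assumptions, not verified here):

  # Sketch: rerun with a longer budget to see whether the test finishes at all.
  timeout --preserve-status --kill-after=5m 10m \
    mpirun -hostfile /root/hfile_one_core -np 2 --allow-run-as-root --map-by node \
    -mca pml ucx -mca osc ucx -x UCX_NET_DEVICES=i810_roce.45 \
    /usr/lib64/openmpi/bin/mpitests-osu_get_acc_latency

  # Sketch: cap the maximum message size (assumes the -m option is available)
  # to confirm that only the largest sizes push the runtime past 3 minutes.
  timeout --preserve-status --kill-after=5m 3m \
    mpirun -hostfile /root/hfile_one_core -np 2 --allow-run-as-root --map-by node \
    -mca pml ucx -mca osc ucx -x UCX_NET_DEVICES=i810_roce.45 \
    /usr/lib64/openmpi/bin/mpitests-osu_get_acc_latency -m 1048576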


Expected results:

Normal benchmark completion, with latency statistics reported for all message sizes.

Additional info:

