Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
+++ This bug was initially created as a clone of Bug #2064275 +++
Description of problem:
The following OpenMPI benchmark fails with a timeout when OpenMPI is tested on an iRDMA device:
FAIL | 1 | openmpi OSU get_acc_latency mpirun one_core
Version-Release number of selected component (if applicable):
DISTRO=RHEL-9.0.0-20220313.2
+ [22-03-14 23:02:13] cat /etc/redhat-release
Red Hat Enterprise Linux release 9.0 Beta (Plow)
+ [22-03-14 23:02:13] uname -a
Linux rdma-dev-31.rdma.lab.eng.rdu2.redhat.com 5.14.0-70.1.1.el9.x86_64 #1 SMP PREEMPT Tue Mar 8 22:22:02 EST 2022 x86_64 x86_64 x86_64 GNU/Linux
+ [22-03-14 23:02:13] cat /proc/cmdline
BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.14.0-70.1.1.el9.x86_64 root=UUID=d6795fa3-ceba-43e0-9ad1-fd9a60321130 ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=UUID=51c154cc-a9a3-41d3-9cf1-50e91c549757 console=ttyS0,115200n81
+ [22-03-14 23:02:13] rpm -q rdma-core linux-firmware
rdma-core-37.2-1.el9.x86_64
linux-firmware-20220209-125.el9.noarch
+ [22-03-14 23:02:13] tail /sys/class/infiniband/irdma0/fw_ver /sys/class/infiniband/irdma1/fw_ver
==> /sys/class/infiniband/irdma0/fw_ver <==
1.52
==> /sys/class/infiniband/irdma1/fw_ver <==
1.52
+ [22-03-14 23:02:13] lspci
+ [22-03-14 23:02:13] grep -i -e ethernet -e infiniband -e omni -e ConnectX
04:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
04:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
04:00.2 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
04:00.3 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
44:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
44:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
How reproducible:
100%
Steps to Reproduce:
1. Bring up the RDMA hosts mentioned above with the RHEL 9.0 build
2. Set up the RDMA hosts for mvapich2 benchmark tests
3. Run the following openmpi benchmark command on the client host:
timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 --allow-run-as-root --map-by node -mca btl_openib_warn_nonexistent_if 0 -mca btl_openib_if_include irdma0:1 -mca mtl '^psm2,psm,ofi' -mca btl '^openib' --mca mtl_base_verbose 100 --mca btl_openib_verbose 100 -mca pml ucx -mca osc ucx -x UCX_NET_DEVICES=i810_roce.45 --mca osc_ucx_verbose 100 --mca pml_ucx_verbose 100 /usr/lib64/openmpi/bin/mpitests-osu_get_acc_latency
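The failure mode hinges on the `timeout` wrapper in the command above: with `--preserve-status`, GNU coreutils `timeout` exits with the wrapped command's own status (128 + 15 = 143 when the 3-minute SIGTERM fires), and `--kill-after=5m` escalates to SIGKILL if the job ignores SIGTERM. A minimal sketch of that behavior, with the durations shortened for illustration (the 1s/5s limits and the `sleep 10` stand-in are assumptions, not the real benchmark):

```shell
#!/bin/sh
# Sketch of the timeout wrapper used in step 3, assuming GNU coreutils
# `timeout`. Durations shortened from 3m/5m to 1s/5s for the demo.
run_with_timeout() {
    # --preserve-status: exit with the command's own status, not 124
    # --kill-after=5s:   send SIGKILL if SIGTERM is ignored
    timeout --preserve-status --kill-after=5s 1s "$@"
}

# `sleep 10` stands in for a hung mpirun job: it outlives the 1s limit,
# receives SIGTERM, and the wrapper propagates status 128 + 15 = 143.
run_with_timeout sleep 10
status=$?
echo "exit status: $status"
```

A status of 143 (SIGTERM) is what the test harness interprets as the benchmark hanging, which matches the FAIL reported for osu_get_acc_latency below.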
Actual results:
+ [22-03-14 23:28:43] timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 --allow-run-as-root --map-by node -mca btl_openib_warn_nonexistent_if 0 -mca btl_openib_if_include irdma0:1 -mca mtl '^psm2,psm,ofi' -mca btl '^openib' --mca mtl_base_verbose 100 --mca btl_openib_verbose 100 -mca pml ucx -mca osc ucx -x UCX_NET_DEVICES=i810_roce.45 --mca osc_ucx_verbose 100 --mca pml_ucx_verbose 100 /usr/lib64/openmpi/bin/mpitests-osu_get_acc_latency
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:197 mca_pml_ucx_open: UCX version 1.11.2
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:289 mca_pml_ucx_init
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:114 Pack remote worker address, size 38
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:114 Pack local worker address, size 141
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:351 created ucp context 0x55add3790af0, worker 0x55add38b3480
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:197 mca_pml_ucx_open: UCX version 1.11.2
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:289 mca_pml_ucx_init
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:114 Pack remote worker address, size 38
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:114 Pack local worker address, size 141
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:351 created ucp context 0x5635b9edfda0, worker 0x5635ba059400
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:182 Got proc 0 address, size 141
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:411 connecting to proc. 0
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:182 Got proc 1 address, size 141
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:411 connecting to proc. 1
# OSU MPI_Get_accumulate latency Test v5.8
# Window creation: MPI_Win_create
# Synchronization: MPI_Win_lock/unlock
# Size Latency (us)
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:182 Got proc 0 address, size 38
[rdma-dev-30.rdma.lab.eng.rdu2.redhat.com:55166] pml_ucx.c:411 connecting to proc. 0
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:182 Got proc 1 address, size 38
[rdma-dev-31.rdma.lab.eng.rdu2.redhat.com:59288] pml_ucx.c:411 connecting to proc. 1
1 1215.44
2 1056.51
4 981.62
8 975.97
16 979.40
32 1325.45
64 1051.62
128 1103.67
256 1098.76
512 1108.47
1024 897.03
2048 1248.53
4096 1275.95
8192 1458.40
16384 1584.87
32768 1609.64
65536 1744.95
131072 1733.73
262144 1942.46
524288 2341.88
1048576 3303.85
mpirun: Forwarding signal 18 to job
2097152 5394.73
+ [22-03-14 23:31:47] __MPI_check_result 1 mpitests-openmpi OSU /usr/lib64/openmpi/bin/mpitests-osu_get_acc_latency mpirun /root/hfile_one_core
Expected results:
The benchmark completes normally, reporting the expected latency statistics for all message sizes, without hitting the 3-minute timeout.
Additional info:
Comment 2, RHEL Program Management, 2023-09-15 07:28:50 UTC:
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.