This bug has been migrated to another issue-tracking site. It has been closed here and may no longer be monitored.

If you would like to receive updates on this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) will be migrated in late September, per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are migrated only if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user-management inquiry. The e-mail creates a ServiceNow ticket with Red Hat.

Individual Bugzilla bugs that are migrated will be moved to status "CLOSED" with resolution "MIGRATED", and "MigratedToJIRA" will be added to "Keywords". The link to the successor Jira issue will be found under "Links", will have a small "two-footprint" icon next to it, and will direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will also be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2090726 - [RHEL8.7] OSU acc_latency fails when openmpi benchmarks run on QEDR ROCE device
Summary: [RHEL8.7] OSU acc_latency fails when openmpi benchmarks run on QEDR ROCE device
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: openmpi
Version: 8.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Kamal Heib
QA Contact: Infiniband QE
URL:
Whiteboard:
Depends On:
Blocks: 2089955
 
Reported: 2022-05-26 12:41 UTC by Brian Chae
Modified: 2023-09-21 14:44 UTC (History)
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-21 14:44:17 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker   RHEL-6187 0 None Migrated None 2023-09-21 14:44:13 UTC
Red Hat Issue Tracker RHELPLAN-123465 0 None None None 2022-05-26 12:46:16 UTC

Description Brian Chae 2022-05-26 12:41:32 UTC
Description of problem:

The OSU acc_latency benchmark fails with the following error messages:

rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:rank0.mpitests-osu_acc_latency: Unable to create UD QP on qedr0
rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:rank0.mpitests-osu_acc_latency: Unable to initialize verbs
rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:rank0: PSM3 can't open nic unit: 0 (err=23)


Version-Release number of selected component (if applicable):


Clients: rdma-dev-02
Servers: rdma-perf-06

DISTRO=RHEL-8.7.0-20220524.0

+ [22-05-26 02:08:38] cat /etc/redhat-release
Red Hat Enterprise Linux release 8.7 Beta (Ootpa)

+ [22-05-26 02:08:38] uname -a
Linux rdma-dev-02.rdma.lab.eng.rdu2.redhat.com 4.18.0-393.el8.x86_64 #1 SMP Wed May 18 12:44:50 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux

+ [22-05-26 02:08:38] cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-393.el8.x86_64 root=UUID=fd7a6a9d-cd42-4b62-9933-1f5f3d4c927b ro console=tty0 rd_NO_PLYMOUTH intel_iommu=on iommu=on crashkernel=auto resume=UUID=9ea769dc-0bb3-455f-a1b3-d99cd5d33215 console=ttyS1,115200

+ [22-05-26 02:08:38] rpm -q rdma-core linux-firmware
rdma-core-37.2-1.el8.x86_64
linux-firmware-20220210-107.git6342082c.el8.noarch

+ [22-05-26 02:08:38] tail /sys/class/infiniband/qedr0/fw_ver /sys/class/infiniband/qedr1/fw_ver
==> /sys/class/infiniband/qedr0/fw_ver <==
8. 59. 1. 0

==> /sys/class/infiniband/qedr1/fw_ver <==
8. 59. 1. 0
+ [22-05-26 02:08:38] lspci
+ [22-05-26 02:08:38] grep -i -e ethernet -e infiniband -e omni -e ConnectX
02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
08:00.0 Ethernet controller: QLogic Corp. FastLinQ QL45000 Series 25GbE Controller (rev 10)
08:00.1 Ethernet controller: QLogic Corp. FastLinQ QL45000 Series 25GbE Controller (rev 10)

Installed:
  mpitests-openmpi-5.8-1.el8.x86_64         openmpi-1:4.1.1-3.el8.x86_64       
  openmpi-devel-1:4.1.1-3.el8.x86_64       




How reproducible:
100%


Steps to Reproduce:
1. Install the above build on a host with a qedr RoCE device.
2. Set up both the RDMA server and the client for openmpi.
3. On the client side, run the following benchmark command:

timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 --allow-run-as-root --map-by node -mca btl_openib_warn_nonexistent_if 0 -mca btl_openib_if_include qedr0:1 -mca mtl '^psm2,psm,ofi' -mca btl '^openib' --mca mtl_base_verbose 100 --mca btl_openib_verbose 100 -mca pml ucx -mca osc ucx -x UCX_NET_DEVICES=qede_roce.45 --mca osc_ucx_verbose 100 --mca pml_ucx_verbose 100 /usr/lib64/openmpi/bin/mpitests-osu_acc_latency


Actual results:

[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 95
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 95
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 95
rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:rank0.mpitests-osu_acc_latency: Unable to create UD QP on qedr0
rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:rank0.mpitests-osu_acc_latency: Unable to initialize verbs
rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:rank0: PSM3 can't open nic unit: 0 (err=23)
rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:rank0.mpitests-osu_acc_latency: Unable to create UD QP on qedr0
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 95
rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:rank0.mpitests-osu_acc_latency: Unable to initialize verbs
rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:rank0: PSM3 can't open nic unit: 0 (err=23)
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 95
rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:rank0.mpitests-osu_acc_latency: Unable to create UD QP on qedr0
rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:rank0: PSM3 can't open nic unit: 0 (err=23)
rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:rank0.mpitests-osu_acc_latency: Unable to initialize verbs
[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:71426] pml_ucx.c:197 mca_pml_ucx_open: UCX version 1.11.2
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:71426] pml_ucx.c:289 mca_pml_ucx_init
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 95
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 95
[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:71426] pml_ucx.c:114 Pack remote worker address, size 38
[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:71426] pml_ucx.c:114 Pack local worker address, size 141
[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:71426] pml_ucx.c:351 created ucp context 0x56170ef84000, worker 0x56170efd7e50
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 22
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 95
[create_qp:2753]create qp: failed on ibv_cmd_create_qp with 95
[rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:85543] pml_ucx.c:197 mca_pml_ucx_open: UCX version 1.11.2
[rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:85543] pml_ucx.c:289 mca_pml_ucx_init
[rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:85543] pml_ucx.c:114 Pack remote worker address, size 38
[rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:85543] pml_ucx.c:114 Pack local worker address, size 141
[rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:85543] pml_ucx.c:351 created ucp context 0x55e45dfd7160, worker 0x55e45e524ca0
[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:71426] pml_ucx.c:182 Got proc 0 address, size 141
[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:71426] pml_ucx.c:411 connecting to proc. 0
[rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:85543] pml_ucx.c:182 Got proc 1 address, size 141
[rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:85543] pml_ucx.c:411 connecting to proc. 1
# OSU MPI_Accumulate latency Test v5.8
# Window creation: MPI_Win_allocate
# Synchronization: MPI_Win_flush
# Size          Latency (us)
[rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:85543] pml_ucx.c:182 Got proc 0 address, size 38
[rdma-perf-06.rdma.lab.eng.rdu2.redhat.com:85543] pml_ucx.c:411 connecting to proc. 0
[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:71426] pml_ucx.c:182 Got proc 1 address, size 38
[rdma-dev-02.rdma.lab.eng.rdu2.redhat.com:71426] pml_ucx.c:411 connecting to proc. 1
1                    2570.11
2                    2570.11
4                    2570.11
8                    2570.11
16                   2570.18
32                   2570.10
+ [22-05-26 02:41:36] __MPI_check_result 1 mpitests-openmpi OSU /usr/lib64/openmpi/bin/mpitests-osu_acc_latency mpirun /root/hfile_one_core
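For reference, the raw error codes in the `ibv_cmd_create_qp` failures above are standard Linux errno values. A minimal sketch decoding them (assuming a Linux host; the interpretation in the comments is an inference from the "Unable to create UD QP on qedr0" messages, not a confirmed root cause):

```python
import os

# Decode the errno values that ibv_cmd_create_qp reports in the log above.
# 22 = EINVAL     ("Invalid argument")        - the kernel rejected the QP attributes
# 95 = EOPNOTSUPP ("Operation not supported") - the requested QP type appears unsupported
for err in (22, 95):
    print(err, os.strerror(err))
```

Both codes are consistent with the PSM3 provider repeatedly attempting to create a UD QP that the qedr device or driver rejects, which then cascades into the "Unable to initialize verbs" and "PSM3 can't open nic unit" failures.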


Expected results:

The benchmark completes normally and reports latency statistics for all message sizes.

Additional info:

Comment 1 RHEL Program Management 2023-09-21 14:43:56 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 2 RHEL Program Management 2023-09-21 14:44:17 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of two footprints next to it and will begin with "RHEL-" followed by an integer. You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.

