Bug 2152217

Summary: [RHEL9.2] most of IMB-IO benchmarks fail due to NFS failure on iRDMA and QEDR ROCE
Product: Red Hat Enterprise Linux 9
Component: openmpi
Version: 9.2
Hardware: Unspecified
OS: Unspecified
Status: CLOSED MIGRATED
Severity: unspecified
Priority: unspecified
Reporter: Brian Chae <bchae>
Assignee: Kamal Heib <kheib>
QA Contact: Infiniband QE <infiniband-qe>
Docs Contact:
CC: kheib, rdma-dev-team
Keywords: MigratedToJIRA, Regression
Target Milestone: rc
Target Release: ---
Flags: pm-rhel: mirror+
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-09-21 14:32:03 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Brian Chae 2022-12-09 19:33:19 UTC
Description of problem:

When run on E810 iRDMA ROCE, the following IMB-IO benchmarks failed:

    Result | Status | Test
  ---------+--------+------------------------------------
      FAIL |      1 | NFS mount cannot be set: IMB-IO benchmarks may fail
      PASS |      0 | openmpi IMB-IO S_Write_indv mpirun one_core
      PASS |      0 | openmpi IMB-IO S_Read_indv mpirun one_core
      PASS |      0 | openmpi IMB-IO S_Write_expl mpirun one_core
      PASS |      0 | openmpi IMB-IO S_Read_expl mpirun one_core
      FAIL |      1 | openmpi IMB-IO P_Write_indv mpirun one_core <<<===========
      FAIL |      1 | openmpi IMB-IO P_Read_indv mpirun one_core <<<===========
      FAIL |      1 | openmpi IMB-IO P_Write_expl mpirun one_core <<<===========
      FAIL |      1 | openmpi IMB-IO P_Read_expl mpirun one_core <<<===========
      FAIL |      1 | openmpi IMB-IO P_Write_shared mpirun one_core <<<===========
      FAIL |      1 | openmpi IMB-IO P_Read_shared mpirun one_core <<<===========
      PASS |      0 | openmpi IMB-IO P_Write_priv mpirun one_core
      PASS |      0 | openmpi IMB-IO P_Read_priv mpirun one_core
      PASS |      0 | openmpi IMB-IO C_Write_indv mpirun one_core
      FAIL |      1 | openmpi IMB-IO C_Read_indv mpirun one_core  <<<===========
      FAIL |      1 | openmpi IMB-IO C_Write_expl mpirun one_core <<<===========
      FAIL |      1 | openmpi IMB-IO C_Read_expl mpirun one_core <<<===========
      FAIL |      1 | openmpi IMB-IO C_Write_shared mpirun one_core <<<===========
      FAIL |      1 | openmpi IMB-IO C_Read_shared mpirun one_core <<<===========

This is a regression compared with the RHEL9.1 Beta compose test results for openmpi, where no such issues were found.
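
The top-line setup failure ("NFS mount cannot be set") means the shared working directory that the file-based P_* and C_* benchmarks need was never mounted. A minimal sketch of how such an NFS-over-RDMA mount is typically established on the client is below; the export path and mount point are hypothetical, not the test harness's actual values.

# Client-side sketch of the NFS-over-RDMA mount the failing benchmarks
# depend on; /srv/mpi and /mnt/mpi are hypothetical paths.
modprobe xprtrdma                                   # NFS/RDMA client transport
mkdir -p /mnt/mpi
mount -t nfs -o proto=rdma,port=20049 rdma-dev-30:/srv/mpi /mnt/mpi
mount | grep /mnt/mpi                               # confirm proto=rdma is in effect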


Version-Release number of selected component (if applicable):

Clients: rdma-dev-31
Servers: rdma-dev-30

DISTRO=RHEL-9.2.0-20221122.2

+ [22-12-06 01:03:14] cat /etc/redhat-release
Red Hat Enterprise Linux release 9.2 Beta (Plow)

+ [22-12-06 01:03:14] uname -a
Linux rdma-dev-31.rdma.lab.eng.rdu2.redhat.com 5.14.0-197.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Nov 16 14:31:27 EST 2022 x86_64 x86_64 x86_64 GNU/Linux

+ [22-12-06 01:03:14] cat /proc/cmdline
BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.14.0-197.el9.x86_64 root=UUID=d3cd314b-8715-4183-afb9-daafd8d9ad53 ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=UUID=2a841cae-212e-4b91-ad1d-097144a800dc console=ttyS0,115200n81

+ [22-12-06 01:03:14] rpm -q rdma-core linux-firmware
rdma-core-41.0-3.el9.x86_64
linux-firmware-20221012-128.el9.noarch

+ [22-12-06 01:03:14] tail /sys/class/infiniband/irdma0/fw_ver /sys/class/infiniband/irdma1/fw_ver
==> /sys/class/infiniband/irdma0/fw_ver <==
1.52

==> /sys/class/infiniband/irdma1/fw_ver <==
1.52
+ [22-12-06 01:03:14] lspci
+ [22-12-06 01:03:14] grep -i -e ethernet -e infiniband -e omni -e ConnectX
04:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
04:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
04:00.2 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
04:00.3 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
44:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
44:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)

Installed:
  mpitests-openmpi-5.8-1.el9.x86_64         openmpi-1:4.1.1-5.el9.x86_64       
  openmpi-devel-1:4.1.1-5.el9.x86_64

How reproducible:


Steps to Reproduce:
1. Please refer to the following Beaker job URL:

https://beaker.engineering.redhat.com/jobs/7269089

Take a look at recipe set "RS:10921438" for the openmpi testing. A hedged manual reproduction is sketched below.

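For a manual reproduction outside Beaker, something along these lines should exercise the same benchmarks. The binary name mpitests-IMB-IO follows the mpitests-openmpi package layout but is an assumption, as is the shared directory; the hostnames match the test pair above.

# Hedged manual reproduction; run from the NFS-shared directory so the
# file-based P_*/C_* benchmarks operate on a common file.
module load mpi/openmpi-x86_64                # RHEL environment-modules setup
cd /mnt/mpi                                   # must be shared between both nodes
printf 'rdma-dev-30\nrdma-dev-31\n' > hfile
mpirun -np 2 --map-by node --hostfile hfile mpitests-IMB-IO P_Write_indv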
 
Actual results:

Most of the file-based P_* and C_* IMB-IO benchmarks fail (see the table above), preceded by the "NFS mount cannot be set" setup failure.

Expected results:

All IMB-IO benchmarks pass, as they did with the RHEL9.1 Beta compose.

Additional info:

Comment 1 Brian Chae 2022-12-09 19:54:13 UTC
Similar openmpi failures are seen with E810 iRDMA in iWARP mode as well.

mpi/openmpi test results on rdma-dev-30/rdma-dev-31 & Beaker job J:7269091:
5.14.0-197.el9.x86_64, rdma-core-41.0-3.el9, i40e, iw, E810-C & irdma1
    Result | Status | Test
  ---------+--------+------------------------------------
      FAIL |      1 | NFS mount cannot be set: IMB-IO benchmarks may fail
      PASS |      0 | openmpi IMB-IO S_Write_indv mpirun one_core
      PASS |      0 | openmpi IMB-IO S_Read_indv mpirun one_core
      PASS |      0 | openmpi IMB-IO S_Write_expl mpirun one_core
      PASS |      0 | openmpi IMB-IO S_Read_expl mpirun one_core
      FAIL |      1 | openmpi IMB-IO P_Write_indv mpirun one_core
      FAIL |      1 | openmpi IMB-IO P_Read_indv mpirun one_core
      PASS |      0 | openmpi IMB-IO P_Write_expl mpirun one_core
      FAIL |      1 | openmpi IMB-IO P_Read_expl mpirun one_core
      FAIL |      1 | openmpi IMB-IO P_Write_shared mpirun one_core
      FAIL |      1 | openmpi IMB-IO P_Read_shared mpirun one_core
      PASS |      0 | openmpi IMB-IO P_Write_priv mpirun one_core
      PASS |      0 | openmpi IMB-IO P_Read_priv mpirun one_core
      PASS |      0 | openmpi IMB-IO C_Write_indv mpirun one_core
      FAIL |      1 | openmpi IMB-IO C_Read_indv mpirun one_core
      PASS |      0 | openmpi IMB-IO C_Write_expl mpirun one_core
      FAIL |      1 | openmpi IMB-IO C_Read_expl mpirun one_core
      FAIL |      1 | openmpi IMB-IO C_Write_shared mpirun one_core
      FAIL |      1 | openmpi IMB-IO C_Read_shared mpirun one_core

Please refer to:
https://beaker.engineering.redhat.com/recipes/12990759#task153124745 

https://beaker-archive.host.prod.eng.bos.redhat.com/beaker-logs/2022/11/72690/7269091/12990759/153124745/716409115/resultoutputfile.log
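
Since the same "NFS mount cannot be set" setup failure precedes the benchmark failures on both RoCE and iWARP, the server-side NFS/RDMA listener is worth checking first. A sketch using the standard kernel nfsd interface follows; whether the test harness configures the server this way is an assumption.

# Server-side sketch (rdma-dev-30): make nfsd listen on the RDMA transport.
modprobe svcrdma                              # NFS/RDMA server transport
echo "rdma 20049" > /proc/fs/nfsd/portlist    # add an RDMA listener
cat /proc/fs/nfsd/portlist                    # should now show an rdma entry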

Comment 2 Brian Chae 2023-03-05 14:12:14 UTC
The same benchmarks failed on a BXNT ROCE device, specifically on QL41000.

mpi/openmpi test results on rdma-perf-05/rdma-perf-04 & Beaker job J:7586497:
5.14.0-283.el9.x86_64, rdma-core-44.0-2.el9, qede, roce.45, QL41000 & qedr0
    Result | Status | Test
  ---------+--------+------------------------------------
      FAIL |      1 | NFS mount cannot be set: IMB-IO benchmarks may fail
      FAIL |      1 | openmpi IMB-IO P_Write_indv mpirun one_core
      FAIL |      1 | openmpi IMB-IO P_Read_indv mpirun one_core
      FAIL |      1 | openmpi IMB-IO P_Read_expl mpirun one_core
      FAIL |      1 | openmpi IMB-IO P_Write_shared mpirun one_core
      FAIL |      1 | openmpi IMB-IO P_Read_shared mpirun one_core
      FAIL |      1 | openmpi IMB-IO C_Write_indv mpirun one_core
      FAIL |      1 | openmpi IMB-IO C_Read_indv mpirun one_core
      FAIL |      1 | openmpi IMB-IO C_Write_expl mpirun one_core
      FAIL |      1 | openmpi IMB-IO C_Read_expl mpirun one_core
      FAIL |      1 | openmpi IMB-IO C_Write_shared mpirun one_core
      FAIL |      1 | openmpi IMB-IO C_Read_shared mpirun one_core
      FAIL |      1 | openmpi OSU get_acc_latency mpirun one_core

Please refer to the following test log:

https://beaker.engineering.redhat.com/recipes/13484454#task157019276
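
Given that the failure reproduces across the irdma and qedr providers, a quick pre-flight check that the RDMA link itself is up can rule the fabric in or out before blaming the NFS setup. The device name qedr0 follows the log line above; the exact naming on other systems is an assumption.

# Pre-flight link check before attempting the NFS mount; qedr0 is the
# device named in the test configuration above.
rdma link show                                # link layer and state per port
ibv_devinfo -d qedr0 | grep -E 'state|link_layer'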

Comment 3 Brian Chae 2023-03-05 14:13:23 UTC
Correction:

The same benchmarks failed on a QEDR ROCE device, specifically on QL41000.

Comment 4 RHEL Program Management 2023-09-21 14:28:33 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 5 RHEL Program Management 2023-09-21 14:32:03 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.