This bug has been migrated to another issue-tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
Bug 2120820 - RHEL-8.7 ucx test 'openmpi ucx osu_bw' fail
Summary: RHEL-8.7 ucx test 'openmpi ucx osu_bw' fail
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: ucx
Version: 8.7
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Michal Schmidt
QA Contact: Afom T. Michael
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-08-23 19:22 UTC by Afom T. Michael
Modified: 2023-09-21 14:45 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-09-21 14:45:04 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments: None


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker   RHEL-6168 0 None Migrated None 2023-09-21 14:40:15 UTC
Red Hat Issue Tracker RHELPLAN-132070 0 None None None 2022-08-23 19:25:59 UTC

Description Afom T. Michael 2022-08-23 19:22:41 UTC
Description of problem:
On RHEL-8.7.0, the 'openmpi ucx osu_bw' case of our ucx test failed on hosts with a Mellanox mlx5 MT27710 ConnectX-4 Lx LAG (mlx5_bond_0), as shown in the Actual results section. The failure occurred when running the test over the RoCE fabric.


Version-Release number of selected component (if applicable):
DISTRO=RHEL-8.7.0-20220817.0
Red Hat Enterprise Linux release 8.7 Beta (Ootpa)
4.18.0-418.el8.x86_64 
rdma-core-41.0-1.el8.x86_64
linux-firmware-20220726-110.git150864a4.el8.noarch
+ [22-08-18 10:00:53] tail /sys/class/infiniband/mlx5_0/fw_ver /sys/class/infiniband/mlx5_1/fw_ver /sys/class/infiniband/mlx5_bond_0/fw_ver
==> /sys/class/infiniband/mlx5_0/fw_ver <==
12.28.2006

==> /sys/class/infiniband/mlx5_1/fw_ver <==
12.28.2006

==> /sys/class/infiniband/mlx5_bond_0/fw_ver <==
14.32.1010
+ [22-08-18 10:00:53] lspci
+ [22-08-18 10:00:53] grep -i -e ethernet -e infiniband -e omni -e ConnectX
02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
03:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
04:00.0 Infiniband controller: Mellanox Technologies MT27700 Family [ConnectX-4]
04:00.1 Infiniband controller: Mellanox Technologies MT27700 Family [ConnectX-4]
05:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
05:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]

How reproducible:
Seen it only once so far.

Steps to Reproduce:
1. Install RHEL-8.7.0-20220817.0 on rdma-virt-02/03
2. Install & execute the kernel-kernel-infiniband-ucx test script
3. Watch the ucx result on the client side (the failing step is shown in the manual reproduction sketch below)
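
For reference, the failing step reduces to a single mpirun invocation run by the test script (captured verbatim in the Actual results below). A minimal manual reproduction sketch; the hostfile contents are an assumption, since the test harness generates /root/hfile_one_core itself:

# Assumed hostfile layout (one slot per host):
cat > /root/hfile_one_core <<'EOF'
rdma-virt-02 slots=1
rdma-virt-03 slots=1
EOF

# mpirun options copied from the command captured in the Actual results section:
timeout --preserve-status --kill-after=5m 3m \
    mpirun -hostfile /root/hfile_one_core -np 2 --allow-run-as-root --map-by node \
        -mca btl '^vader,tcp,openib' \
        -mca btl_openib_cpc_include rdmacm \
        -mca btl_openib_receive_queues P,65536,256,192,128 \
        -mca pml ucx -mca osc ucx \
        -x UCX_NET_DEVICES=mlx5_bond_0:1 \
        mpitests-osu_bw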


Actual results:
+ [22-08-18 10:06:14] timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 --allow-run-as-root --map-by node -mca btl '^vader,tcp,openib' -mca btl_openib_cpc_include rdmacm -mca btl_openib_receive_queues P,65536,256,192,128 -mca pml ucx -mca osc ucx -x UCX_NET_DEVICES=mlx5_bond_0:1 mpitests-osu_bw
# OSU MPI Bandwidth Test v5.8
# Size      Bandwidth (MB/s)
[rdma-virt-02:219082:0:219082] ib_mlx5_log.c:177  Transport retry count exceeded on mlx5_bond_0:1/RoCE (synd 0x15 vend 0x81 hw_synd 0/0)
[rdma-virt-02:219082:0:219082] ib_mlx5_log.c:177  RC QP 0x1379 wqe[0]: SEND --e [inl len 10] [rqpn 0x1379 dlid=0 sl=0 port=1 src_path_bits=0 dgid=::ffff:172.31.40.203 sgid_index=7 traffic_class=0]
==== backtrace (tid: 219082) ====
 0  /lib64/libucs.so.0(ucs_handle_error+0x2dc) [0x15108a68cedc]
 1  /lib64/libucs.so.0(ucs_fatal_error_message+0xb1) [0x15108a689d41]
 2  /lib64/libucs.so.0(ucs_log_default_handler+0xde4) [0x15108a68e6a4]
 3  /lib64/libucs.so.0(ucs_log_dispatch+0xe4) [0x15108a68e9c4]
 4  /lib64/ucx/libuct_ib.so.0(uct_ib_mlx5_completion_with_err+0x27a) [0x15108a40259a]
 5  /lib64/ucx/libuct_ib.so.0(+0x3c480) [0x15108a419480]
 6  /lib64/ucx/libuct_ib.so.0(uct_ib_mlx5_check_completion+0x4d) [0x15108a40403d]
 7  /lib64/ucx/libuct_ib.so.0(+0x3a48a) [0x15108a41748a]
 8  /lib64/libucp.so.0(ucp_worker_progress+0x2a) [0x15108ad53ada]
 9  /usr/lib64/openmpi/lib/libopen-pal.so.40(opal_progress+0x34) [0x1510a07f2f94]
10  /usr/lib64/openmpi/lib/libmpi.so.40(ompi_request_default_wait+0x12d) [0x1510a1e9659d]
11  /usr/lib64/openmpi/lib/libmpi.so.40(ompi_coll_base_barrier_intra_recursivedoubling+0x103) [0x1510a1f02643]
12  /usr/lib64/openmpi/lib/libmpi.so.40(MPI_Barrier+0xb0) [0x1510a1eadb70]
13  mpitests-osu_bw(+0x1fd0) [0x55d7f1079fd0]
14  /lib64/libc.so.6(__libc_start_main+0xe5) [0x1510a0f6ad85]
15  mpitests-osu_bw(+0x25de) [0x55d7f107a5de]
=================================
[rdma-virt-02:219082] *** Process received signal ***
[rdma-virt-02:219082] Signal: Aborted (6)
[rdma-virt-02:219082] Signal code:  (-6)
[rdma-virt-02:219082] [ 0] /lib64/libpthread.so.0(+0x12cf0)[0x1510a1308cf0]
[rdma-virt-02:219082] [ 1] /lib64/libc.so.6(gsignal+0x10f)[0x1510a0f7eaff]
[rdma-virt-02:219082] [ 2] /lib64/libc.so.6(abort+0x127)[0x1510a0f51ea5]
[rdma-virt-02:219082] [ 3] /lib64/libucs.so.0(+0x27d46)[0x15108a689d46]
[rdma-virt-02:219082] [ 4] /lib64/libucs.so.0(ucs_log_default_handler+0xde4)[0x15108a68e6a4]
[rdma-virt-02:219082] [ 5] /lib64/libucs.so.0(ucs_log_dispatch+0xe4)[0x15108a68e9c4]
[rdma-virt-02:219082] [ 6] /lib64/ucx/libuct_ib.so.0(uct_ib_mlx5_completion_with_err+0x27a)[0x15108a40259a]
[rdma-virt-02:219082] [ 7] /lib64/ucx/libuct_ib.so.0(+0x3c480)[0x15108a419480]
[rdma-virt-02:219082] [ 8] /lib64/ucx/libuct_ib.so.0(uct_ib_mlx5_check_completion+0x4d)[0x15108a40403d]
[rdma-virt-02:219082] [ 9] /lib64/ucx/libuct_ib.so.0(+0x3a48a)[0x15108a41748a]
[rdma-virt-02:219082] [10] /lib64/libucp.so.0(ucp_worker_progress+0x2a)[0x15108ad53ada]
[rdma-virt-02:219082] [11] /usr/lib64/openmpi/lib/libopen-pal.so.40(opal_progress+0x34)[0x1510a07f2f94]
[rdma-virt-02:219082] [12] /usr/lib64/openmpi/lib/libmpi.so.40(ompi_request_default_wait+0x12d)[0x1510a1e9659d]
[rdma-virt-02:219082] [13] /usr/lib64/openmpi/lib/libmpi.so.40(ompi_coll_base_barrier_intra_recursivedoubling+0x103)[0x1510a1f02643]
[rdma-virt-02:219082] [14] /usr/lib64/openmpi/lib/libmpi.so.40(MPI_Barrier+0xb0)[0x1510a1eadb70]
[rdma-virt-02:219082] [15] mpitests-osu_bw(+0x1fd0)[0x55d7f1079fd0]
[rdma-virt-02:219082] [16] /lib64/libc.so.6(__libc_start_main+0xe5)[0x1510a0f6ad85]
[rdma-virt-02:219082] [17] mpitests-osu_bw(+0x25de)[0x55d7f107a5de]
[rdma-virt-02:219082] *** End of error message ***
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 219082 on node 172.31.45.202 exited on signal 6 (Aborted).
--------------------------------------------------------------------------
+ [22-08-18 10:06:32] RQA_check_result -r 134 -t 'openmpi ucx osu_bw'
+ [22-08-18 10:06:32] local test_pass=0
+ [22-08-18 10:06:32] local test_skip=777
+ [22-08-18 10:06:32] test 4 -gt 0
+ [22-08-18 10:06:32] case $1 in
+ [22-08-18 10:06:32] local rc=134
+ [22-08-18 10:06:32] shift
+ [22-08-18 10:06:32] shift
+ [22-08-18 10:06:32] test 2 -gt 0
+ [22-08-18 10:06:32] case $1 in
+ [22-08-18 10:06:32] local 'msg=openmpi ucx osu_bw'
+ [22-08-18 10:06:32] shift
+ [22-08-18 10:06:32] shift
+ [22-08-18 10:06:32] test 0 -gt 0
+ [22-08-18 10:06:32] '[' -z 134 -o -z 'openmpi ucx osu_bw' ']'
+ [22-08-18 10:06:32] '[' -z /tmp/tmp.LwXAyOokgN/results_ucx-ucx-.txt ']'
+ [22-08-18 10:06:32] '[' -z /tmp/tmp.LwXAyOokgN/results_ucx-ucx-.txt ']'
+ [22-08-18 10:06:32] '[' 134 -eq 0 ']'
+ [22-08-18 10:06:32] '[' 134 -eq 777 ']'
+ [22-08-18 10:06:32] local test_result=FAIL
+ [22-08-18 10:06:32] export result=FAIL
+ [22-08-18 10:06:32] result=FAIL
+ [22-08-18 10:06:32] [[ ! -z '' ]]
+ [22-08-18 10:06:32] printf '%10s | %6s | %s\n' FAIL 134 'openmpi ucx osu_bw'
+ [22-08-18 10:06:32] set +x
---
- TEST RESULT FOR ucx
-   Test:   openmpi ucx osu_bw
-   Result: FAIL
-   Return: 134
---
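
The RQA_check_result trace above boils down to the logic below. This is a simplified reconstruction from the trace, not the actual helper from the kernel-kernel-infiniband-ucx scripts; the argument validation and results-file bookkeeping visible in the trace are omitted.

# Reconstructed sketch of RQA_check_result (assumption: only -r/-t parsing and the
# PASS/SKIP/FAIL decision are shown; writing to results_ucx-*.txt is left out).
RQA_check_result() {
    local test_pass=0
    local test_skip=777
    local rc msg
    while test $# -gt 0; do
        case $1 in
            -r) rc=$2;  shift; shift ;;
            -t) msg=$2; shift; shift ;;
            *)  shift ;;
        esac
    done
    local test_result
    if   [ "$rc" -eq "$test_pass" ]; then test_result=PASS
    elif [ "$rc" -eq "$test_skip" ]; then test_result=SKIP
    else                                  test_result=FAIL
    fi
    export result=$test_result
    printf '%10s | %6s | %s\n' "$test_result" "$rc" "$msg"
}

Here the osu_bw run returned 134 (128 + SIGABRT 6 from the UCX fatal error), so it falls through to FAIL.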


Expected results:
Test to complete successfully.

Additional info:

Comment 2 Brian Chae 2022-11-28 14:32:24 UTC
Additional info on this issue...

This issue has also been observed when RHEL-8.8 was tested for RDMA tier2 testing (CTC#1) on the rdma-dev-19/20 pair.
This pair of hosts has the same mlx5 MT27710 CX-4 Lx RoCE 25GbE x2 LAG as described in the headline.
Also, the rdma-virt-02/03 pair and the rdma-dev-19/20 pair are both configured for BONDING.

I suspect this issue is related to the BONDING in the mlx5 MT27710 CX-4 Lx RoCE 25GbE x2 LAG HCA configuration.
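
If the bonding suspicion holds, comparing the LAG/bond state and the RoCE GID table on a failing pair against a passing pair may help narrow it down. A hedged diagnostic sketch; the bond interface name (bond0) is an assumption about this lab's setup, and GID index 7 comes from the sgid_index=7 in the ib_mlx5_log error line:

# LAG / bond state on both hosts (bond interface name is an assumption):
cat /proc/net/bonding/bond0

# RoCE device and port state behind the bond:
ibv_devinfo -d mlx5_bond_0

# GID table entry the failing QP used (sgid_index=7 per the error line):
cat /sys/class/infiniband/mlx5_bond_0/ports/1/gids/7
cat /sys/class/infiniband/mlx5_bond_0/ports/1/gid_attrs/types/7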

RHEL-8.8 build test result for openmpi ucx osu_bw:


+ [22-11-27 23:28:05] [[ roce.45 == *\r\o\c\e* ]]
+ [22-11-27 23:28:05] roce_params='-mca btl_openib_cpc_include rdmacm -mca btl_openib_receive_queues P,65536,256,192,128'
+ [22-11-27 23:28:05] rhts_sync_block -s ucx_openmpi_ready_ucx-roce.45-0 rdma-dev-19
/usr/bin/rhts_sync_block -s ucx_openmpi_ready_ucx-roce.45-0 rdma-dev-19 -- Blocking state(s) =  32_ucx_openmpi_ready_ucx-roce.45-0
+ [22-11-27 23:28:05] timeout --preserve-status --kill-after=5m 3m mpirun -hostfile /root/hfile_one_core -np 2 --allow-run-as-root --map-by node -mca btl '^vader,tcp,openib' -mca btl_openib_cpc_include rdmacm -mca btl_openib_receive_queues P,65536,256,192,128 -mca pml ucx -mca osc ucx -x UCX_NET_DEVICES=mlx5_bond_0:1 mpitests-osu_bw
# OSU MPI Bandwidth Test v5.8
# Size      Bandwidth (MB/s)
[rdma-dev-19:252032:0:252032] ib_mlx5_log.c:177  Transport retry count exceeded on mlx5_bond_0:1/RoCE (synd 0x15 vend 0x81 hw_synd 0/0)
[rdma-dev-19:252032:0:252032] ib_mlx5_log.c:177  RC QP 0x1451 wqe[0]: SEND --e [inl len 10] [rqpn 0x14f1 dlid=0 sl=0 port=1 src_path_bits=0 dgid=::ffff:172.31.43.120 sgid_index=7 traffic_class=0]
==== backtrace (tid: 252032) ====
 0  /lib64/libucs.so.0(ucs_handle_error+0x2dc) [0x15542fb46edc]
 1  /lib64/libucs.so.0(ucs_fatal_error_message+0xb1) [0x15542fb43d41]
 2  /lib64/libucs.so.0(ucs_log_default_handler+0xde4) [0x15542fb486a4]
 3  /lib64/libucs.so.0(ucs_log_dispatch+0xe4) [0x15542fb489c4]
 4  /lib64/ucx/libuct_ib.so.0(uct_ib_mlx5_completion_with_err+0x27a) [0x15542f8bc59a]
 5  /lib64/ucx/libuct_ib.so.0(+0x3c480) [0x15542f8d3480]
 6  /lib64/ucx/libuct_ib.so.0(uct_ib_mlx5_check_completion+0x4d) [0x15542f8be03d]
 7  /lib64/ucx/libuct_ib.so.0(+0x3a48a) [0x15542f8d148a]
 8  /lib64/libucp.so.0(ucp_worker_progress+0x2a) [0x15543020dada]
 9  /usr/lib64/openmpi/lib/libopen-pal.so.40(opal_progress+0x34) [0x1554463e2f94]
10  /usr/lib64/openmpi/lib/libmpi.so.40(ompi_request_default_wait+0x12d) [0x155447a8559d]
11  /usr/lib64/openmpi/lib/libmpi.so.40(ompi_coll_base_barrier_intra_recursivedoubling+0x103) [0x155447af1643]
12  /usr/lib64/openmpi/lib/libmpi.so.40(MPI_Barrier+0xb0) [0x155447a9cb70]
13  mpitests-osu_bw(+0x1fd0) [0x556bf7319fd0]
14  /lib64/libc.so.6(__libc_start_main+0xe5) [0x155446b5ad85]
15  mpitests-osu_bw(+0x25de) [0x556bf731a5de]
=================================
[rdma-dev-19:252032] *** Process received signal ***
[rdma-dev-19:252032] Signal: Aborted (6)
[rdma-dev-19:252032] Signal code:  (-6)
[rdma-dev-19:252032] [ 0] /lib64/libpthread.so.0(+0x12cf0)[0x155446ef7cf0]
[rdma-dev-19:252032] [ 1] /lib64/libc.so.6(gsignal+0x10f)[0x155446b6eacf]
[rdma-dev-19:252032] [ 2] /lib64/libc.so.6(abort+0x127)[0x155446b41ea5]
[rdma-dev-19:252032] [ 3] /lib64/libucs.so.0(+0x27d46)[0x15542fb43d46]
[rdma-dev-19:252032] [ 4] /lib64/libucs.so.0(ucs_log_default_handler+0xde4)[0x15542fb486a4]
[rdma-dev-19:252032] [ 5] /lib64/libucs.so.0(ucs_log_dispatch+0xe4)[0x15542fb489c4]
[rdma-dev-19:252032] [ 6] /lib64/ucx/libuct_ib.so.0(uct_ib_mlx5_completion_with_err+0x27a)[0x15542f8bc59a]
[rdma-dev-19:252032] [ 7] /lib64/ucx/libuct_ib.so.0(+0x3c480)[0x15542f8d3480]
[rdma-dev-19:252032] [ 8] /lib64/ucx/libuct_ib.so.0(uct_ib_mlx5_check_completion+0x4d)[0x15542f8be03d]
[rdma-dev-19:252032] [ 9] /lib64/ucx/libuct_ib.so.0(+0x3a48a)[0x15542f8d148a]
[rdma-dev-19:252032] [10] /lib64/libucp.so.0(ucp_worker_progress+0x2a)[0x15543020dada]
[rdma-dev-19:252032] [11] /usr/lib64/openmpi/lib/libopen-pal.so.40(opal_progress+0x34)[0x1554463e2f94]
[rdma-dev-19:252032] [12] /usr/lib64/openmpi/lib/libmpi.so.40(ompi_request_default_wait+0x12d)[0x155447a8559d]
[rdma-dev-19:252032] [13] /usr/lib64/openmpi/lib/libmpi.so.40(ompi_coll_base_barrier_intra_recursivedoubling+0x103)[0x155447af1643]
[rdma-dev-19:252032] [14] /usr/lib64/openmpi/lib/libmpi.so.40(MPI_Barrier+0xb0)[0x155447a9cb70]
[rdma-dev-19:252032] [15] mpitests-osu_bw(+0x1fd0)[0x556bf7319fd0]
[rdma-dev-19:252032] [16] /lib64/libc.so.6(__libc_start_main+0xe5)[0x155446b5ad85]
[rdma-dev-19:252032] [17] mpitests-osu_bw(+0x25de)[0x556bf731a5de]
[rdma-dev-19:252032] *** End of error message ***
--------------------------------------------------------------------------
Primary job  terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 252032 on node 172.31.45.119 exited on signal 6 (Aborted).
--------------------------------------------------------------------------
+ [22-11-27 23:28:22] RQA_check_result -r 134 -t 'openmpi ucx osu_bw'
+ [22-11-27 23:28:22] local test_pass=0
+ [22-11-27 23:28:22] local test_skip=777
+ [22-11-27 23:28:22] test 4 -gt 0
+ [22-11-27 23:28:22] case $1 in
+ [22-11-27 23:28:22] local rc=134
+ [22-11-27 23:28:22] shift
+ [22-11-27 23:28:22] shift
+ [22-11-27 23:28:22] test 2 -gt 0
+ [22-11-27 23:28:22] case $1 in
+ [22-11-27 23:28:22] local 'msg=openmpi ucx osu_bw'
+ [22-11-27 23:28:22] shift
+ [22-11-27 23:28:22] shift
+ [22-11-27 23:28:22] test 0 -gt 0
+ [22-11-27 23:28:22] '[' -z 134 -o -z 'openmpi ucx osu_bw' ']'
+ [22-11-27 23:28:22] '[' -z /tmp/tmp.kSY0K8Jro6/results_ucx-ucx-.txt ']'
+ [22-11-27 23:28:22] '[' -z /tmp/tmp.kSY0K8Jro6/results_ucx-ucx-.txt ']'
+ [22-11-27 23:28:22] '[' 134 -eq 0 ']'
+ [22-11-27 23:28:22] '[' 134 -eq 777 ']'
+ [22-11-27 23:28:22] local test_result=FAIL
+ [22-11-27 23:28:22] export result=FAIL
+ [22-11-27 23:28:22] result=FAIL
+ [22-11-27 23:28:22] [[ ! -z '' ]]
+ [22-11-27 23:28:22] printf '%10s | %6s | %s\n' FAIL 134 'openmpi ucx osu_bw'
+ [22-11-27 23:28:22] set +x
---
- TEST RESULT FOR ucx
-   Test:   openmpi ucx osu_bw
-   Result: FAIL
-   Return: 134
---
+ [22-11-27 23:28:22] rhts_sync_set -s ucx_openmpi_done_ucx-roce.45-0


++++++++++++++++++++++++++++++

Build & HW info:

Clients: rdma-dev-20
+ [22-11-27 23:26:53] echo 'Servers: rdma-dev-19'
Servers: rdma-dev-19
+ [22-11-27 23:26:53] RQA_system_info_for_debug
+ [22-11-27 23:26:53] grep -i distro /etc/motd
+ [22-11-27 23:26:53] tr -d ' '
DISTRO=RHEL-8.8.0-20221120.2
DISTRO=RHEL-8.8.0-20221120.2
+ [22-11-27 23:26:53] cat /etc/redhat-release
Red Hat Enterprise Linux release 8.8 Beta (Ootpa)
+ [22-11-27 23:26:53] uname -a
Linux rdma-dev-20.rdma.lab.eng.rdu2.redhat.com 4.18.0-438.el8.x86_64 #1 SMP Mon Nov 14 13:08:07 EST 2022 x86_64 x86_64 x86_64 GNU/Linux
+ [22-11-27 23:26:53] cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-438.el8.x86_64 root=UUID=0eb0b9ac-1586-4a68-9aac-a731507d9489 ro intel_idle.max_cstate=0 processor.max_cstate=0 intel_iommu=on iommu=on console=tty0 rd_NO_PLYMOUTH crashkernel=auto resume=UUID=8f2cddac-ae95-40b5-b9a7-4ff569c52c21 console=ttyS1,115200n81
+ [22-11-27 23:26:53] rpm -q rdma-core linux-firmware
rdma-core-41.0-1.el8.x86_64
linux-firmware-20220726-110.git150864a4.el8.noarch
+ [22-11-27 23:26:53] tail /sys/class/infiniband/mlx5_2/fw_ver /sys/class/infiniband/mlx5_3/fw_ver /sys/class/infiniband/mlx5_bond_0/fw_ver
==> /sys/class/infiniband/mlx5_2/fw_ver <==
12.28.2006

==> /sys/class/infiniband/mlx5_3/fw_ver <==
12.28.2006

==> /sys/class/infiniband/mlx5_bond_0/fw_ver <==
14.31.1014
+ [22-11-27 23:26:53] lspci
+ [22-11-27 23:26:53] grep -i -e ethernet -e infiniband -e omni -e ConnectX
01:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
01:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
04:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
04:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
82:00.0 Infiniband controller: Mellanox Technologies MT27700 Family [ConnectX-4]
82:00.1 Infiniband controller: Mellanox Technologies MT27700 Family [ConnectX-4]
+ [22-11-27 23:26:53] lscpu

Comment 3 RHEL Program Management 2023-09-21 14:40:01 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 4 RHEL Program Management 2023-09-21 14:45:04 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated.  Be sure to add yourself to Jira issue's "Watchers" field to continue receiving updates and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.

