Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
.Additionally install the `rdma-core` base package to automatically enable kernel modules for RDMA hardware
In this RHEL 8 release, the base-package dependency was removed from all sub-packages of `rdma-core`. This change ensures that container users who have no Remote Direct Memory Access (RDMA) hardware get a minimal container image. As a consequence, after installing the latest `infiniband-diags` sub-package, there is no `/sys/bus/pci/devices/<DEVICE_ID>/net/` directory. To work around this problem, additionally install the base `rdma-core` package, which provides the `systemd udev` rules that load the RDMA kernel modules. As a result, the problem no longer manifests in the described scenario.
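The workaround can be sketched as a quick shell check. This is a minimal sketch, not part of the official release note; the PCI address is the example from this bug report, and the suggested `yum` command mirrors the workaround above:

```shell
# Check whether the netdev directory exists for a given RDMA adapter.
# If it is missing, the base rdma-core package (whose udev rules load
# the RDMA kernel modules) is the likely fix, per this bug's workaround.
dev=${1:-0003:01:00.0}   # example PCI address from this bug report
if [ -d "/sys/bus/pci/devices/$dev/net" ]; then
    echo "net directory present for $dev"
else
    echo "net directory missing for $dev - try: yum install -y rdma-core"
fi
```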
Created attachment 1849140 [details]
Detail_test_log-01062022
Description of problem:
It was found with infiniband-diags-37.1-1.el8.ppc64le that there is no 'net' subdirectory under '/sys/bus/pci/devices/0003:01:00.0/', so no InfiniBand interface ib* exists. After uninstalling the package and installing the older infiniband-diags-35.0-1.el8.ppc64le, it works.
Version-Release number of selected component (if applicable):
Host compose: RHEL-8.6.0-20220104.3
Host kernel: 4.18.0-357.el8.ppc64le
infiniband-diags-37.1-1.el8.ppc64le
How reproducible:
100%
Steps to Reproduce:
1. Make sure the host has an InfiniBand card:
# lspci|grep Infini
0003:01:00.0 Infiniband controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
0003:01:00.1 Infiniband controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
[root@ibm-p9wr-08 ~]# ls /sys/bus/pci/devices/0003:01:00.0/net
2. Install the latest infiniband-diags package:
# yum install infiniband-diags -y
# modprobe ib_umad
3. Check the InfiniBand interface:
# ls /sys/bus/pci/devices/0003:01:00.0/net
ls: cannot access '/sys/bus/pci/devices/0003:01:00.0/net': No such file or directory
# ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enP5p1s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 70:e2:84:14:28:62 brd ff:ff:ff:ff:ff:ff
inet 10.16.214.80/23 brd 10.16.215.255 scope global dynamic noprefixroute enP5p1s0f0
valid_lft 77709sec preferred_lft 77709sec
inet6 2620:52:0:10d6:72e2:84ff:fe14:2862/64 scope global dynamic noprefixroute
valid_lft 2591957sec preferred_lft 604757sec
inet6 fe80::72e2:84ff:fe14:2862/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: enP5p1s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 70:e2:84:14:28:63 brd ff:ff:ff:ff:ff:ff
4: enp1s0f0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 98:be:94:0d:f6:20 brd ff:ff:ff:ff:ff:ff
5: enp1s0f1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 98:be:94:0d:f6:21 brd ff:ff:ff:ff:ff:ff
6: enp1s0f2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 98:be:94:0d:f6:22 brd ff:ff:ff:ff:ff:ff
7: enp1s0f3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 98:be:94:0d:f6:23 brd ff:ff:ff:ff:ff:ff
Actual results:
There is no 'net' subdirectory under '/sys/bus/pci/devices/0003:01:00.0/', so no InfiniBand interface ib* exists.
Expected results:
The interface exists.
# ls /sys/bus/pci/devices/0003:01:00.0/net
ib0
# ip address show
......
8: ib0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 4092 qdisc mq state DOWN group default qlen 256
link/infiniband 00:00:11:07:fe:80:00:00:00:00:00:00:24:8a:07:03:00:a4:72:5c brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
9: ib1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 4092 qdisc mq state DOWN group default qlen 256
link/infiniband 00:00:19:07:fe:80:00:00:00:00:00:00:24:8a:07:03:00:a4:72:5d brd 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
[root@ibm-p9wr-08 ~]# rpm -qa|grep infiniband
Additional info:
Hi,
I checked the attachment Detail_test_log-01062022. It seems rdma-core was not installed.
Could you please try with infiniband-diags-37.1-1.el8.ppc64le and rdma-core-37.1-1.el8.ppc64le?
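One way to verify that both packages are in place is to scan the installed-package list. This is a hedged sketch: the listing below is a hard-coded sample matching the versions suggested in the comment above; on a real host you would pipe the output of `rpm -qa` instead:

```shell
# Hypothetical sample of `rpm -qa` output; on a real host, use `rpm -qa` itself.
pkgs='infiniband-diags-37.1-1.el8.ppc64le
rdma-core-37.1-1.el8.ppc64le'
# Count how many of the two required packages appear in the listing.
count=$(printf '%s\n' "$pkgs" | grep -c -E '^(rdma-core|infiniband-diags)-')
if [ "$count" -eq 2 ]; then
    echo "both packages installed"
else
    echo "missing package(s)"
fi
```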
Comment 15 - RHEL Program Management
2023-07-06 07:28:06 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.