Bug 1945997

Summary: [RHEL8.4] core dump due to segmentation fault while running "/usr/sbin/dump_fts" command
Product: Red Hat Enterprise Linux 8 Reporter: Brian Chae <bchae>
Component: rdma-core Assignee: Honggang LI <honli>
Status: CLOSED ERRATA QA Contact: Brian Chae <bchae>
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: 8.4 CC: rdma-dev-team
Target Milestone: pre-dev-freeze Keywords: Triaged
Target Release: 8.5   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: rdma-core-35.0-1.el8 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2021-11-09 19:41:27 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1903942    
Attachments:
compress core dump file while running "/usr/sbin/dump_fts" command (none)

Description Brian Chae 2021-04-03 11:52:17 UTC
Description of problem:

Core is dumped consistently while running "/usr/sbin/dump_fts" from infiniband-diags on RDMA hosts.


Version-Release number of selected component (if applicable):

DISTRO=RHEL-8.4.0-20210309.1

+ [21-04-02 16:29:07] cat /etc/redhat-release
Red Hat Enterprise Linux release 8.4 Beta (Ootpa)

+ [21-04-02 16:29:07] uname -a
Linux rdma-virt-02.lab.bos.redhat.com 4.18.0-293.el8.x86_64 #1 SMP Mon Mar 1 10:04:09 EST 2021 x86_64 x86_64 x86_64 GNU/Linux

+ [21-04-02 16:29:07] cat /proc/cmdline
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-293.el8.x86_64 root=UUID=34b90771-4181-41fd-88ae-42b3bbd44439 ro intel_idle.max_cstate=0 processor.max_cstate=0 intel_iommu=on iommu=on console=tty0 rd_NO_PLYMOUTH crashkernel=auto resume=UUID=d9d59ac4-2aa3-4f3c-b5e6-81b2812a7afa console=ttyS1,115200n81

+ [21-04-02 16:29:07] rpm -q rdma-core linux-firmware
rdma-core-32.0-4.el8.x86_64
linux-firmware-20201218-102.git05789708.el8.noarch

+ [21-04-02 16:29:07] tail /sys/class/infiniband/mlx5_0/fw_ver /sys/class/infiniband/mlx5_1/fw_ver /sys/class/infiniband/mlx5_bond_0/fw_ver
==> /sys/class/infiniband/mlx5_0/fw_ver <==
12.28.2006

==> /sys/class/infiniband/mlx5_1/fw_ver <==
12.28.2006

==> /sys/class/infiniband/mlx5_bond_0/fw_ver <==
14.29.2002
+ [21-04-02 16:29:07] lspci
+ [21-04-02 16:29:07] grep -i -e ethernet -e infiniband -e omni -e ConnectX
02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
03:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
04:00.0 Infiniband controller: Mellanox Technologies MT27700 Family [ConnectX-4]
04:00.1 Infiniband controller: Mellanox Technologies MT27700 Family [ConnectX-4]
05:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
05:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]

[root@rdma-virt-02 var]$ ibstatus
Infiniband device 'mlx5_0' port 1 status:
        default gid:     fe80:0000:0000:0000:e41d:2d03:00e7:0ff6
        base lid:        0x13
        sm lid:          0xd
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            56 Gb/sec (4X FDR)
        link_layer:      InfiniBand

Infiniband device 'mlx5_1' port 1 status:
        default gid:     fe80:0000:0000:0001:e41d:2d03:00e7:0ff7
        base lid:        0x26
        sm lid:          0x1
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            100 Gb/sec (4X EDR)
        link_layer:      InfiniBand

Infiniband device 'mlx5_bond_0' port 1 status:
        default gid:     fe80:0000:0000:0000:e61d:2dff:fefd:a72a
        base lid:        0x0
        sm lid:          0x0
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            25 Gb/sec (1X EDR)
        link_layer:      Ethernet





How reproducible:

100%

Steps to Reproduce:
1. Have RHEL 8.4 booted, up and running on one of the RDMA lab hosts - rdma-virt-02 in this case
2. Run the following IB diag command:

/usr/sbin/dump_fts -P 1



Actual results:
Running python3 As root: 
TIME                            PID   UID   GID SIG COREFILE  EXE
Fri 2021-04-02 16:30:50 EDT   35026     0     0  11 present   /usr/sbin/dump_fts
total 72
-rw-r-----. 1 root root 66731 Apr  2 16:30 core.dump_fts.0.e2b000c7c8fb45baa1e1f29de0cd318f.35026.1617395450000000.lz4 <<<=======

b'Red Hat Enterprise Linux release 8.4 Beta (Ootpa)\n'

Firmware Bug, please contact your hardware vendor.
Apr 02 12:17:22 rdma-virt-02.lab.bos.redhat.com kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored

Errors found during boot. Please check if it's a bug.
Apr 02 12:17:23 rdma-virt-02.lab.bos.redhat.com kernel: ERST: Error Record Serialization Table (ERST) support is initialized.

Apr 02 16:17:33 rdma-virt-02.lab.bos.redhat.com kernel: ACPI Error: No handler for Region [SYSI] (00000000547761c4) [IPMI] (20200925/evregion-132)

Apr 02 16:17:33 rdma-virt-02.lab.bos.redhat.com kernel: ACPI Error: Region IPMI (ID=7) has no handler (20200925/exfldio-265)

Apr 02 16:17:33 rdma-virt-02.lab.bos.redhat.com kernel: ACPI Error: Aborting method \_SB.PMI0._GHL due to previous error (AE_NOT_EXIST) (20200925/psparse-531)

Apr 02 16:17:33 rdma-virt-02.lab.bos.redhat.com kernel: ACPI Error: Aborting method \_SB.PMI0._PMC due to previous error (AE_NOT_EXIST) (20200925/psparse-531)

Apr 02 16:17:33 rdma-virt-02.lab.bos.redhat.com kernel: ACPI Error: AE_NOT_EXIST, Evaluating _PMC (20200925/power_meter-756)

Apr 02 16:18:51 rdma-virt-02.lab.bos.redhat.com NetworkManager[1480]: <error> [1617394731.9644] platform-linux: sysctl: failed to set 'bonding/ad_actor_system' to '00:00:00:00:00:00': (22) Invalid argument

Apr 02 16:30:50 rdma-virt-02.lab.bos.redhat.com kernel: dump_fts[35026]: segfault at 20 ip 000055692c235b7c sp 00007ffda925a4c0 error 4 in dump_fts[55692c233000+8000] <<<=============================
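The segfault line above encodes both the faulting instruction pointer (ip) and the base address at which dump_fts was mapped; their difference is the offset of the faulting instruction inside the binary. A small sketch of decoding it (the `addr2line` invocation is illustrative and requires the matching debuginfo package to be installed; with it, the offset should resolve to the faulting source line, cf. the GDB decode under Additional info):

```shell
# ip - mapping base = offset of the faulting instruction in dump_fts
printf '0x%x\n' $((0x55692c235b7c - 0x55692c233000))   # -> 0x2b7c

# With debuginfo for rdma-core installed, resolve the offset to a
# source line:
# addr2line -e /usr/sbin/dump_fts 0x2b7c
```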



Expected results:

The command should run successfully without any core dump.


Additional info:


GDB core decode
===============

[root@rdma-virt-02 coredump]$ gdb /tmp/core.dump_fts.0.e2b000c7c8fb45baa1e1f29de0cd318f.35026.1617395450000000
GNU gdb (GDB) Red Hat Enterprise Linux 8.2-15.el8
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.

For help, type "help".
Type "apropos word" to search for commands related to "word"...
[New LWP 35026]
Reading symbols from /usr/sbin/dump_fts...Reading symbols from /usr/lib/debug/usr/sbin/dump_fts-32.0-4.el8.x86_64.debug...done.
done.
Core was generated by `/usr/sbin/dump_fts -P 1'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  dump_lid (str_len=200, portguid=0x7ffda925a558, base_port_lid=<synthetic pointer>, 
    last_port_lid=<synthetic pointer>, fabric=0x55692cdbb720, valid=1, lid=33, 
    str=0x7ffda925a680 ": (Channel Adapter portguid 0x248a07030049d4f1: 'rdma-dev-21 mlx5_2')")
    at infiniband-diags/dump_fts.c:272
272             nodeguid = port->node->guid;
(gdb) where
#0  dump_lid (str_len=200, portguid=0x7ffda925a558, base_port_lid=<synthetic pointer>, 
    last_port_lid=<synthetic pointer>, fabric=0x55692cdbb720, valid=1, lid=33, 
    str=0x7ffda925a680 ": (Channel Adapter portguid 0x248a07030049d4f1: 'rdma-dev-21 mlx5_2')")
    at infiniband-diags/dump_fts.c:272
#1  dump_unicast_tables (node=0x55692cdc2b10, startl=0, endl=58, mad_port=0x55692cdc9ce0, 
    fabric=0x55692cdbb720) at infiniband-diags/dump_fts.c:366
#2  0x00007f7afb56abd0 in ibnd_iter_nodes_type (fabric=<optimized out>, 
    func=0x55692c236690 <process_switch>, node_type=<optimized out>, user_data=0x55692cdbb720)
    at libibnetdisc/ibnetdisc.c:930
#3  0x000055692c235577 in main (argc=<optimized out>, argv=<optimized out>)
    at infiniband-diags/dump_fts.c:481
(gdb)

Comment 1 Brian Chae 2021-04-03 11:56:23 UTC
Created attachment 1768795 [details]
compress core dump file while running "/usr/sbin/dump_fts" command

Comment 5 Brian Chae 2021-04-11 23:38:47 UTC
Hi, Honggang, that is fine with me. Also, this issue does not seem to be functionally critical to RDMA.

Thanks.

-Brian

Comment 11 Brian Chae 2021-06-02 19:27:23 UTC
Verification has been conducted with RDMA tier1 and tier2 tests.

1. build and packages




DISTRO=RHEL-8.5.0-20210521.n.1
DISTRO=RHEL-8.5.0-20210521.n.1
Red Hat Enterprise Linux release 8.5 Beta (Ootpa)
Linux rdma-virt-00.lab.bos.redhat.com 4.18.0-305.8.el8.x86_64 #1 SMP Mon May 17 14:15:59 EDT 2021 x86_64 x86_64 x86_64 GNU/Linux
BOOT_IMAGE=(hd0,msdos1)/vmlinuz-4.18.0-305.8.el8.x86_64 root=UUID=51f91589-a448-484f-b451-3c7bd1e32e4c ro intel_idle.max_cstate=0 processor.max_cstate=0 intel_iommu=on iommu=on console=tty0 rd_NO_PLYMOUTH crashkernel=auto resume=UUID=b6513717-4f93-43b5-886d-aac2d5dc42e5 console=ttyS1,115200n81
rdma-core-35.0-1.el8.x86_64
linux-firmware-20201218-102.git05789708.el8.noarch
==> /sys/class/infiniband/mlx4_0/fw_ver <==
2.42.5000

==> /sys/class/infiniband/mlx4_1/fw_ver <==
2.42.5000
02:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
02:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
03:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
03:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720 2-port Gigabit Ethernet PCIe
04:00.0 Network controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
04:00.1 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
04:00.2 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
04:00.3 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
04:00.4 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
04:00.5 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
04:00.6 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
04:00.7 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
06:00.0 Network controller: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]
06:00.1 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
06:00.2 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
06:00.3 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
06:00.4 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
06:00.5 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
06:00.6 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
06:00.7 Network controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]

2. HCAs tested

MLX4 IB0, MLX5 IB, MLX4 ROCE, MLX5 ROCE, BXNT ROCE, CXGB4 IW, HFI OPA

3. RDMA hosts tested

rdma-dev-10/11, rdma-dev-12/13, rdma-dev-19/20, rdma-dev-21/22, rdma-qe-06/07, rdma-perf-00/01, rdma-perf-02/03, rdma-virt-00/01, rdma-virt-02/03 pairs

4. result

No core dumps occurred while running "/usr/sbin/dump_fts" on any of the above RDMA hosts, as part of the fabtests suite.

Comment 13 errata-xmlrpc 2021-11-09 19:41:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RDMA stack bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4412