Description of problem:
=======================
I had enabled brick multiplexing and had the volumes below (all distributed-* types, with the volume names indicating the replica type):

[root@dhcp35-192 glusterfs]# gluster v list
cross3
distrep
ecvol
ecx
rep2
rep3

I was experimenting with killing brick processes when the brick process (glusterfsd) crashed and the core below was found (sorry, I don't have the exact steps to reproduce the issue).

Version-Release number of selected component (if applicable):
=============================================================
[root@dhcp35-192 glusterfs]# rpm -qa | grep gluster
glusterfs-geo-replication-3.10.0-1.el7.x86_64
glusterfs-libs-3.10.0-1.el7.x86_64
glusterfs-fuse-3.10.0-1.el7.x86_64
glusterfs-server-3.10.0-1.el7.x86_64
python2-glusterfs-api-1.1-1.el7.noarch
glusterfs-extra-xlators-3.10.0-1.el7.x86_64
python2-gluster-3.10.0-1.el7.x86_64
glusterfs-3.10.0-1.el7.x86_64
glusterfs-api-3.10.0-1.el7.x86_64
glusterfs-cli-3.10.0-1.el7.x86_64
glusterfs-rdma-3.10.0-1.el7.x86_64
glusterfs-client-xlators-3.10.0-1.el7.x86_64
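For reference, brick multiplexing was enabled cluster-wide the usual way (a config fragment; the volume names above are from my setup):

```shell
# Enable brick multiplexing for all volumes (Gluster 3.10+)
gluster volume set all cluster.brick-multiplex on

# Verify the option took effect
gluster volume get all cluster.brick-multiplex
```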
Created attachment 1265090 [details] core
[root@dhcp35-192 dir1]# file /core.1388
/core.1388: ELF 64-bit LSB core file x86-64, version 1 (SYSV), SVR4-style,
from '/usr/sbin/glusterfsd -s 10.70.35.192 --volfile-id cross3.10.70.35.192.rhs-brick',
real uid: 0, effective uid: 0, real gid: 0, effective gid: 0,
execfn: '/usr/sbin/glusterfsd', platform: 'x86_64'

[root@dhcp35-192 dir1]# gdb /usr/sbin/glusterfsd /core.1388
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-94.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/sbin/glusterfsd...Reading symbols from /usr/lib/debug/usr/sbin/glusterfsd.debug...done.
done.
[New LWP 1392]
[New LWP 1409]
[New LWP 1415]
[New LWP 1413]
[New LWP 1408]
[New LWP 1407]
[New LWP 1388]
[New LWP 1394]
[New LWP 1403]
[New LWP 1389]
[New LWP 1390]
[New LWP 1391]
[New LWP 1416]
[New LWP 1414]
[New LWP 1412]
[New LWP 1411]
[New LWP 1410]
[New LWP 1406]
[New LWP 1405]
[New LWP 1393]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/sbin/glusterfsd -s 10.70.35.192 --volfile-id cross3.10.70.35.192.rhs-brick'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007fcac6857700 in glusterfs_graph_attach (orig_graph=0x0, path=<optimized out>) at graph.c:1086
1086            glusterfs_xlator_link (orig_graph->top, graph->top);
Missing separate debuginfos, use: debuginfo-install glibc-2.17-157.el7_3.1.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.14.1-27.el7_3.x86_64 libacl-2.2.51-12.el7.x86_64 libaio-0.3.109-13.el7.x86_64 libattr-2.4.46-12.el7.x86_64 libcom_err-1.42.9-9.el7.x86_64 libgcc-4.8.5-11.el7.x86_64 libselinux-2.5-6.el7.x86_64 libuuid-2.23.2-33.el7.x86_64 openssl-libs-1.0.1e-60.el7_3.1.x86_64 pcre-8.32-15.el7_2.1.x86_64 sqlite-3.7.17-8.el7.x86_64 zlib-1.2.7-17.el7.x86_64
(gdb) bt
#0  0x00007fcac6857700 in glusterfs_graph_attach (orig_graph=0x0, path=<optimized out>) at graph.c:1086
#1  0x00007fcac6d1e5da in glusterfs_handle_attach (req=0x7fcab4003490) at glusterfsd-mgmt.c:842
#2  0x00007fcac6858620 in synctask_wrap (old_task=<optimized out>) at syncop.c:375
#3  0x00007fcac4f16cf0 in ?? () from /lib64/libc.so.6
#4  0x0000000000000000 in ?? ()
Are you sure this isn't the same bug already fixed by https://review.gluster.org/#/c/16888/?
This bug is reported against a version of Gluster that is no longer maintained (it has reached EOL). See https://www.gluster.org/release-schedule/ for the versions currently maintained. As a result, this bug is being closed. If the bug persists on a maintained version of Gluster or against the mainline Gluster repository, please request that it be reopened and mark the Version field appropriately.