Bug 1564033 - Glusterd daemon crashing while gathering profile information for a volume
Summary: Glusterd daemon crashing while gathering profile information for a volume
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Atin Mukherjee
QA Contact: Bala Konda Reddy M
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-05 07:31 UTC by nravinas
Modified: 2018-05-10 14:19 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-11 14:11:09 UTC
Target Upstream Version:


Attachments:

Description nravinas 2018-04-05 07:31:25 UTC
Description of problem:

A six-node Gluster cluster running the following version:

grep -i gluster installed-rpms

gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64               Fri Jan 19 10:29:41 2018
gluster-nagios-common-0.2.4-1.el7rhgs.noarch                Tue Jul 11 14:12:48 2017
glusterfs-3.8.4-54.el7rhgs.x86_64                           Wed Jan 24 11:35:45 2018
glusterfs-api-3.8.4-54.el7rhgs.x86_64                       Wed Jan 24 11:35:45 2018
glusterfs-cli-3.8.4-54.el7rhgs.x86_64                       Wed Jan 24 11:35:45 2018
glusterfs-client-xlators-3.8.4-54.el7rhgs.x86_64            Wed Jan 24 11:35:45 2018
glusterfs-events-3.8.4-54.el7rhgs.x86_64                    Fri Jan 26 11:29:25 2018
glusterfs-fuse-3.8.4-54.el7rhgs.x86_64                      Wed Jan 24 11:35:45 2018
glusterfs-ganesha-3.8.4-54.el7rhgs.x86_64                   Wed Jan 24 11:35:53 2018
glusterfs-geo-replication-3.8.4-54.el7rhgs.x86_64           Wed Jan 24 11:35:58 2018
glusterfs-libs-3.8.4-54.el7rhgs.x86_64                      Wed Jan 24 11:35:43 2018
glusterfs-rdma-3.8.4-54.el7rhgs.x86_64                      Wed Jan 24 11:36:00 2018
glusterfs-server-3.8.4-54.el7rhgs.x86_64                    Wed Jan 24 11:35:45 2018

Node names are:

s51l017.cmc.be
s51l018.cmc.be
s51l019.cmc.be
s51l020.cmc.be
s51l021.cmc.be
s51l022.cmc.be

On node s51l017.cmc.be, the glusterd daemon crashed on 2018-03-30 at 11:50:19 UTC.
This is the backtrace captured in glusterd.log:

...

[2018-03-30 11:50:17.950735] I [MSGID: 106494] [glusterd-handler.c:3087:__glusterd_handle_cli_profile_volume] 0-management: Received volume profile req for volume orafin_smbq
pending frames:
frame : type(0) op(0)
frame : type(0) op(0)
patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash:
2018-03-30 11:50:19
configuration details:
argp 1
backtrace 1
dlfcn 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.8.4
/lib64/libglusterfs.so.0(_gf_msg_backtrace_nomem+0xc2)[0x7f0bdcf47872]
/lib64/libglusterfs.so.0(gf_print_trace+0x324)[0x7f0bdcf513a4]
/lib64/libc.so.6(+0x35270)[0x7f0bdb5b0270]
/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x3f627)[0x7f0bd1a0b627]
/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x3ff2f)[0x7f0bd1a0bf2f]
/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x72e47)[0x7f0bd1a3ee47]
/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x7250a)[0x7f0bd1a3e50a]
/lib64/libgfrpc.so.0(rpc_clnt_handle_reply+0x90)[0x7f0bdcd10840]
/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1e7)[0x7f0bdcd10b27]
/lib64/libgfrpc.so.0(rpc_transport_notify+0x23)[0x7f0bdcd0c9e3]
/usr/lib64/glusterfs/3.8.4/rpc-transport/socket.so(+0x73d6)[0x7f0bcee583d6]
/usr/lib64/glusterfs/3.8.4/rpc-transport/socket.so(+0x997c)[0x7f0bcee5a97c]
/lib64/libglusterfs.so.0(+0x85846)[0x7f0bdcfa2846]
/lib64/libpthread.so.0(+0x7e25)[0x7f0bdbda6e25]
/lib64/libc.so.6(clone+0x6d)[0x7f0bdb67334d]

...
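
For context, the profile request logged immediately before the crash corresponds to a profile operation issued from the gluster CLI. A representative invocation (using the volume name from the log above; the exact command run on this cluster is an assumption) would be:

# gluster volume profile orafin_smbq start
# gluster volume profile orafin_smbq info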


Running the backtrace script to resolve the frame addresses into function names and source locations:

[root@test]# sh backtrace.sh /var/tmp/glusterfs-debuginfo-3.8.4-54.el7rhgs.x86_64.rpm glusterd.log

eu-addr2line: /var/tmp/glusterfs-debuginfo-3.8.4-54.el7rhgs.x86_64/usr/lib/debug-->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so.debug: No such file or directory
-->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x3ff2f)
eu-addr2line: /var/tmp/glusterfs-debuginfo-3.8.4-54.el7rhgs.x86_64/usr/lib/debug-->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so.debug: No such file or directory
-->/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x3f806)
/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x3f627)[0x7f0bd1a0b627] glusterd_op_ac_brick_op_failed /usr/src/debug/glusterfs-3.8.4/xlators/mgmt/glusterd/src/glusterd-op-sm.c:5407
/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x3ff2f)[0x7f0bd1a0bf2f] glusterd_op_sm /usr/src/debug/glusterfs-3.8.4/xlators/mgmt/glusterd/src/glusterd-op-sm.c:8098
/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x72e47)[0x7f0bd1a3ee47] glusterd_mgmt_v3_lock_peers_cbk_fn /usr/src/debug/glusterfs-3.8.4/xlators/mgmt/glusterd/src/glusterd-rpc-ops.c:923
/usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so(+0x7250a)[0x7f0bd1a3e50a] glusterd_big_locked_cbk /usr/src/debug/glusterfs-3.8.4/xlators/mgmt/glusterd/src/glusterd-rpc-ops.c:216
/usr/lib64/glusterfs/3.8.4/rpc-transport/socket.so(+0x73d6)[0x7f0bcee583d6] socket_event_poll_in /usr/src/debug/glusterfs-3.8.4/rpc/rpc-transport/socket/src/socket.c:2309
/usr/lib64/glusterfs/3.8.4/rpc-transport/socket.so(+0x997c)[0x7f0bcee5a97c] socket_event_handler /usr/src/debug/glusterfs-3.8.4/rpc/rpc-transport/socket/src/socket.c:2458
eu-addr2line: /var/tmp/glusterfs-debuginfo-3.8.4-54.el7rhgs.x86_64/usr/lib/debug/lib64/libglusterfs.so.0.debug: No such file or directory /lib64/libglusterfs.so.0(+0x85846)[0x7f0bdcfa2846]
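
For reference, a minimal sketch of what a script along the lines of backtrace.sh does, assuming it extracts the debuginfo RPM and resolves the "+0x..." offsets with eu-addr2line from elfutils (the parsing, paths, and output format below are assumptions, not the actual script used on this case):

#!/bin/sh
# Sketch only: resolve "+0x..." frames from a glusterfs backtrace against a
# debuginfo RPM. Usage: sh backtrace.sh <debuginfo.rpm> <glusterd.log>
DEBUGRPM=$(readlink -f "$1")
LOGFILE=$2
DEBUGROOT=/var/tmp/$(basename "$DEBUGRPM" .rpm)

# Extract the debuginfo RPM so the .debug files land under
# $DEBUGROOT/usr/lib/debug/...
mkdir -p "$DEBUGROOT"
( cd "$DEBUGROOT" && rpm2cpio "$DEBUGRPM" | cpio -idm --quiet )

# For every frame of the form /path/to/lib.so(+0xOFFSET), resolve the offset
# against the matching .debug file with eu-addr2line.
grep -oE '/[^ (]+\(\+0x[0-9a-f]+\)' "$LOGFILE" | while IFS= read -r frame; do
    lib=${frame%%'('*}                # e.g. /usr/lib64/glusterfs/3.8.4/xlator/mgmt/glusterd.so
    off=${frame##*'(+'}
    off=${off%')'}                    # e.g. 0x3f627
    printf '%s ' "$frame"
    eu-addr2line -f -e "$DEBUGROOT/usr/lib/debug$lib.debug" "$off"
done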

A sosreport with the --all-logs option is stored under /cases/02068887/sosreport-20180404-091805/s51l017.cmc.be

Engineering assistance is needed to perform an RCA of this crash.

