Bug 1265597

Summary: Glusterfs crashed
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: RajeshReddy <rmekala>
Component: core
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED WORKSFORME
QA Contact: Anoop <annair>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: atumball, mzywusko, rhs-bugs
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-01-29 17:44:45 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description RajeshReddy 2015-09-23 10:25:28 UTC
Description of problem:
==============
While add-brick/remove-brick operations were running, the glusterfs brick process crashed.

Version-Release number of selected component (if applicable):
=============
glusterfs-api-3.7.1-14

How reproducible:


Steps to Reproduce:
=============
1. Create a script to populate data on the mount point and, while the I/O is running, perform add-brick and remove-brick operations.

Actual results:
===========
The brick process crashed; the backtrace (bt) is given below.
[root@rhs-client9 yum.repos.d]# gdb /usr/sbin/glusterfsd /core.3263 
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-80.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /usr/sbin/glusterfsd...Reading symbols from /usr/lib/debug/usr/sbin/glusterfsd.debug...done.
done.
[New LWP 3800]
[New LWP 3268]
[New LWP 3263]
[New LWP 3896]
[New LWP 3713]
[New LWP 3281]
[New LWP 3267]
[New LWP 3464]
[New LWP 3265]
[New LWP 3895]
[New LWP 3714]
[New LWP 3797]
[New LWP 3894]
[New LWP 3266]
[New LWP 3799]
[New LWP 3715]
[New LWP 3798]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/usr/sbin/glusterfsd -s rhs-client9.lab.eng.blr.redhat.com --volfile-id stress_'.
Program terminated with signal 11, Segmentation fault.
#0  list_del (old=0x7fb0bc000b40) at ../../../../libglusterfs/src/list.h:76
76		old->prev->next = old->next;
Missing separate debuginfos, use: debuginfo-install keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.12.2-14.el7.x86_64 libacl-2.2.51-12.el7.x86_64 libaio-0.3.109-12.el7.x86_64 libattr-2.4.46-12.el7.x86_64 libcom_err-1.42.9-7.el7.x86_64 libgcc-4.8.3-9.el7.x86_64 libselinux-2.2.2-6.el7.x86_64 pcre-8.32-14.el7.x86_64 sqlite-3.7.17-8.el7.x86_64 xz-libs-5.1.2-9alpha.el7.x86_64
(gdb) bt
#0  list_del (old=0x7fb0bc000b40) at ../../../../libglusterfs/src/list.h:76
#1  changelog_rpc_clnt_unref (crpc=0x7fb0bc000aa0) at changelog-ev-handle.h:74
#2  put_client (crpc=0x7fb0bc000aa0, c_clnt=0x7fb0e40729e0) at changelog-ev-handle.c:268
#3  _dispatcher (rlist=<optimized out>, arg=0x7fb0e40729e0) at changelog-ev-handle.c:298
#4  0x00007fb0f6d29c96 in rbuf_wait_for_completion (rbuf=0x7fb0e4074ae0, opaque=0x7fb0e4074d70, fn=fn@entry=0x7fb0e8117a50 <_dispatcher>, 
    arg=arg@entry=0x7fb0e40729e0) at rot-buffs.c:473
#5  0x00007fb0e8118227 in changelog_ev_dispatch (data=0x7fb0e40729e0) at changelog-ev-handle.c:347
#6  0x00007fb0f5b20df5 in start_thread (arg=0x7fb0d2ffd700) at pthread_create.c:308
#7  0x00007fb0f54671ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113
(gdb) 

core file is available @

Comment 2 RajeshReddy 2015-09-23 10:31:17 UTC
/home/repo/sosreports/bug.1265597

Comment 4 Amar Tumballi 2018-01-29 17:44:45 UTC
With the major fixes in the RPC layer over the last 3 years, we haven't seen similar issues in recent RHGS 3.x release testing.

Closing the issue as it's not reproducible. Feel free to reopen it if it is seen again.