Bug 1049171 - [SNAPSHOT]: glusterd crashed while taking volume status with pthread_spin_lock () from /lib64/libpthread.so.0
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: snapshot
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Avra Sengupta
QA Contact: Rahul Hinduja
Whiteboard: SNAPSHOT
Duplicates: 1049166 (view as bug list)
Depends On:
TreeView+ depends on / blocked
Reported: 2014-01-07 07:18 UTC by Rahul Hinduja
Modified: 2016-09-17 12:57 UTC (History)
8 users

Fixed In Version: glusterfs-
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2014-09-22 19:31:27 UTC
Target Upstream Version:


System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:1278 0 normal SHIPPED_LIVE Red Hat Storage Server 3.0 bug fix and enhancement update 2014-09-22 23:26:55 UTC

Description Rahul Hinduja 2014-01-07 07:18:21 UTC
Description of problem:

Observed a glusterd crash while taking volume status from one node in the cluster while the other nodes were stopping and starting glusterd.

My setup consists of:

1. Four servers: server1, server2, server3 and server4 in a cluster.
2. Four volumes: vol0, vol1, vol2, vol3

Stopped glusterd on server3 and server4, then checked gluster volume status, which was successful. Started glusterd again on server3 and server4; while glusterd was starting, tried to take the volume status from server1, and glusterd on server2 immediately crashed with the following backtrace:

(gdb) bt
#0  0x000000321340c380 in pthread_spin_lock () from /lib64/libpthread.so.0
#1  0x0000003f09c3f43f in mem_put (ptr=0x7fd2e1d17814) at mem-pool.c:484
#2  0x0000003f09c183c5 in dict_destroy (this=0x7fd2e1d17c34) at dict.c:456
#3  0x00007fd2dfa75818 in glusterd_op_fini_ctx () at glusterd-op-sm.c:6180
#4  0x00007fd2dfa7d801 in glusterd_op_ac_commit_op (event=0x7fd2d4016690, ctx=0x7fd2d4016490) at glusterd-op-sm.c:4247
#5  0x00007fd2dfa7a0d0 in glusterd_op_sm () at glusterd-op-sm.c:6047
#6  0x00007fd2dfa6445b in __glusterd_handle_commit_op (req=0x7fd2df9d802c) at glusterd-handler.c:1006
#7  0x00007fd2dfa613cf in glusterd_big_locked_handler (req=0x7fd2df9d802c, actor_fn=0x7fd2dfa64350 <__glusterd_handle_commit_op>) at glusterd-handler.c:78
#8  0x0000003f09c4cdd2 in synctask_wrap (old_task=<value optimized out>) at syncop.c:293
#9  0x0000003213043bf0 in ?? () from /lib64/libc.so.6
#10 0x0000000000000000 in ?? ()


Version-Release number of selected component (if applicable):


Steps carried:
1. Create a setup of four servers (server1-4)
2. Create four volumes (vol0,vol1,vol2 and vol3)
3. Stop glusterd on server3 and server4
4. Check the volume status on server1
5. Start the glusterd on server3 and server4
6. While start in progress, take the volume status from server1
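The six steps above can be sketched as a small driver script. This is a hedged sketch, not part of the report: the hostnames, the use of ssh, and `service glusterd stop/start` are assumptions about the test cluster. By default it only echoes the commands (dry run); set RUN=ssh to actually drive the cluster.

```shell
#!/bin/sh
# Dry run by default; RUN=ssh to execute against real servers.
RUN="${RUN:-echo}"

repro() {
    $RUN server3 "service glusterd stop"      # step 3
    $RUN server4 "service glusterd stop"
    $RUN server1 "gluster volume status"      # step 4: succeeds
    $RUN server3 "service glusterd start"     # step 5
    $RUN server4 "service glusterd start"
    # Step 6: the race window -- query status while the peers are
    # still coming back up; this is when glusterd on server2 crashed.
    $RUN server1 "gluster volume status"
}

repro
```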

Actual results:

glusterd crashed with bt mentioned above, and logs mentioned below:

pending frames:
frame : type(0) op(0)
frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2014-01-07 00:01:34
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.4.0.snap.dec30.2013git

Expected results:

glusterd should not crash

Comment 3 Raghavendra Bhat 2014-01-17 11:39:29 UTC
*** Bug 1049166 has been marked as a duplicate of this bug. ***

Comment 4 Avra Sengupta 2014-01-17 13:04:04 UTC
Fix at http://review.gluster.org/#/c/6728/

Comment 6 Nagaprasad Sathyanarayana 2014-04-21 06:17:46 UTC
Marking snapshot BZs to RHS 3.0.

Comment 7 Nagaprasad Sathyanarayana 2014-05-19 10:56:30 UTC
Setting flags required to add BZs to RHS 3.0 Errata

Comment 9 senaik 2014-06-04 10:28:58 UTC
Version : glusterfs-

Retried the steps as mentioned in 'Steps carried' and did not face the issue.

Marking the bug as 'Verified'

Comment 11 errata-xmlrpc 2014-09-22 19:31:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

