Bug 1017007

Summary: glusterd crash seen when volume operations are performed simultaneously on the same node, and one fails.
Product: Red Hat Gluster Storage    Reporter: Shruti Sampat <ssampat>
Component: glusterd    Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED ERRATA    QA Contact: Shruti Sampat <ssampat>
Severity: urgent    Docs Contact:
Priority: urgent
Version: 2.1    CC: asriram, dtsang, kaushal, kdhananj, knarra, mmahoney, pprakash, sdharane, vagarwal, vbellur
Target Milestone: ---    Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.4.0.35rhs    Doc Type: Bug Fix
Doc Text:
Previously, due to locking issues in the glusterd management daemon, glusterd crashed when two volume operations were executed simultaneously on the same node. With this update, the locking logic has been fixed and the crash no longer occurs.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-27 15:41:45 UTC    Type: Bug
Regression: ---    Mount Type: ---
Documentation: ---    CRM:
Verified Versions:    Category: ---
oVirt Team: ---    RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---    Target Upstream Version:
Attachments:
glusterd logs (flags: none)
core dump (flags: none)

Description Shruti Sampat 2013-10-09 06:56:42 UTC
Created attachment 809691 [details]
glusterd logs

Description of problem:
------------------------
When volume operations such as volume status, start, and stop are executed simultaneously on the same machine and one of them fails, glusterd crashes.

glusterd logs and the core dump are attached.

Version-Release number of selected component (if applicable):
glusterfs 3.4.2.0rhsc

How reproducible:
Always

Steps to Reproduce:
1. From one terminal, run the gluster volume status command on the node multiple times.
2. From another terminal, run volume operations such as volume start and volume stop multiple times.
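The two-terminal workload above can be scripted. A minimal sketch, assuming a volume named testvol and the gluster CLI available on the node (both are assumptions; adjust for your setup):

```shell
#!/bin/sh
# Hypothetical reproduction helper: run the two workloads from the
# steps above concurrently on one node. VOLNAME and ITER are assumptions.
VOLNAME=${VOLNAME:-testvol}
ITER=${ITER:-50}
RESULT=skipped

if command -v gluster >/dev/null 2>&1; then
    # Workload 1: repeated status queries (terminal 1 in the steps).
    for i in $(seq "$ITER"); do
        gluster volume status "$VOLNAME" >/dev/null 2>&1
    done &

    # Workload 2: repeated stop/start cycles (terminal 2 in the steps).
    # --mode=script suppresses the interactive confirmation on stop.
    for i in $(seq "$ITER"); do
        gluster --mode=script volume stop "$VOLNAME" >/dev/null 2>&1
        gluster volume start "$VOLNAME" >/dev/null 2>&1
    done &

    wait
    RESULT=done
else
    echo "gluster CLI not found; run this on a storage node"
fi
echo "result: $RESULT"
```

If glusterd crashes, the core and the "Unable to acquire lock" messages in its log confirm the race was hit.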

Actual results:
One of the operations fails and glusterd crashes.

Expected results:
glusterd should not crash.

Additional info:

Comment 1 Shruti Sampat 2013-10-09 07:11:05 UTC
Created attachment 809696 [details]
core dump

Comment 3 Prasanth 2013-10-09 12:14:50 UTC
The glusterd crash is seen quite often even without executing many operations in parallel. See below:

-----------
[2013-10-09 12:11:02.523379] E [glusterd-utils.c:149:glusterd_lock] 0-management: Unable to get lock for uuid: 50e87872-7d0c-4210-860e-41aabf41e79a, lock held by: 894e1ea3-8e38-4d2e-89e0-10fab0e9830b
[2013-10-09 12:11:02.523404] E [glusterd-syncop.c:1202:gd_sync_task_begin] 0-management: Unable to acquire lock
pending frames:
frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2013-10-09 12:11:02
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.4.2.0rhsc
/lib64/libc.so.6[0x35e0e32960]
/usr/lib64/glusterfs/3.4.2.0rhsc/xlator/mgmt/glusterd.so(gd_unlock_op_phase+0xae)[0x7f199b2c785e]
/usr/lib64/glusterfs/3.4.2.0rhsc/xlator/mgmt/glusterd.so(gd_sync_task_begin+0xdf)[0x7f199b2c847f]
/usr/lib64/glusterfs/3.4.2.0rhsc/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x3b)[0x7f199b2c87cb]
/usr/lib64/glusterfs/3.4.2.0rhsc/xlator/mgmt/glusterd.so(__glusterd_handle_status_volume+0x14a)[0x7f199b2554da]
/usr/lib64/glusterfs/3.4.2.0rhsc/xlator/mgmt/glusterd.so(glusterd_big_locked_handler+0x3f)[0x7f199b255a5f]
/usr/lib64/libglusterfs.so.0(synctask_wrap+0x12)[0x39d7849822]
/lib64/libc.so.6[0x35e0e43bb0]
---------

Comment 4 Kaushal 2013-10-10 12:10:01 UTC
This has been fixed by patch https://code.engineering.redhat.com/gerrit/13857, submitted for bug 1016971.

Comment 5 Shruti Sampat 2013-10-18 09:30:30 UTC
Verified as fixed in glusterfs 3.4.0.35rhs. Executed multiple volume operations in parallel; the glusterd crash is no longer seen.

Comment 7 errata-xmlrpc 2013-11-27 15:41:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html