Bug 1017007 - glusterd crash seen when volume operations are performed simultaneously on the same node, and one fails.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Shruti Sampat
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-10-09 06:56 UTC by Shruti Sampat
Modified: 2013-11-27 15:41 UTC
CC: 10 users

Fixed In Version: glusterfs-3.4.0.35rhs
Doc Type: Bug Fix
Doc Text:
Previously, due to locking issues in the glusterd management daemon, glusterd crashed when two volume operations were executed simultaneously on the same node. With this update, the locking issue has been fixed and glusterd no longer crashes.
Clone Of:
Environment:
Last Closed: 2013-11-27 15:41:45 UTC


Attachments
glusterd logs (1.01 MB, text/x-log)
2013-10-09 06:56 UTC, Shruti Sampat
core dump (262.53 KB, application/x-xz)
2013-10-09 07:11 UTC, Shruti Sampat


Links
System ID: Red Hat Product Errata RHBA-2013:1769
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Storage 2.1 enhancement and bug fix update #1
Last Updated: 2013-11-27 20:17:39 UTC

Description Shruti Sampat 2013-10-09 06:56:42 UTC
Created attachment 809691 [details]
glusterd logs

Description of problem:
------------------------
When volume operations such as volume status, volume start, and volume stop are executed simultaneously on the same machine and one of them fails, glusterd crashes.

Find attached glusterd logs and the core.

Version-Release number of selected component (if applicable):
glusterfs 3.4.2.0rhsc

How reproducible:
Always

Steps to Reproduce:
1. From one terminal, run the gluster volume status command on the node repeatedly.
2. From another terminal, run volume operations such as volume start and volume stop repeatedly (a reproducer sketch follows).
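
A minimal reproducer sketch of the above steps, assuming an existing volume named testvol (the volume name, iteration count, and use of --mode=script to suppress confirmation prompts are illustrative):

-----------
# Terminal 1: issue repeated status queries against glusterd.
for i in $(seq 1 50); do
    gluster volume status testvol
done

# Terminal 2, concurrently: cycle the volume through stop/start.
for i in $(seq 1 50); do
    gluster --mode=script volume stop testvol
    gluster volume start testvol
done
-----------

Running the two loops concurrently makes the operations race for glusterd's cluster lock, and one of them eventually fails.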

Actual results:
One of the operations fails and glusterd crashes.

Expected results:
glusterd should not crash.

Additional info:

Comment 1 Shruti Sampat 2013-10-09 07:11:05 UTC
Created attachment 809696 [details]
core dump
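
For reference, a backtrace can be pulled from the attached core with gdb; a sketch, assuming the core was unpacked to /tmp/core.glusterd (the path is illustrative) and that the matching glusterfs debuginfo package is installed:

-----------
# Open the core against the glusterd binary and dump all thread backtraces.
gdb -batch -ex 'thread apply all bt' /usr/sbin/glusterd /tmp/core.glusterd
-----------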

Comment 3 Prasanth 2013-10-09 12:14:50 UTC
The glusterd crash is seen quite often, even without executing many operations in parallel. See below:

-----------
[2013-10-09 12:11:02.523379] E [glusterd-utils.c:149:glusterd_lock] 0-management: Unable to get lock for uuid: 50e87872-7d0c-4210-860e-41aabf41e79a, lock held by: 894e1ea3-8e38-4d2e-89e0-10fab0e9830b
[2013-10-09 12:11:02.523404] E [glusterd-syncop.c:1202:gd_sync_task_begin] 0-management: Unable to acquire lock
pending frames:
frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2013-10-09 12:11:02
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.4.2.0rhsc
/lib64/libc.so.6[0x35e0e32960]
/usr/lib64/glusterfs/3.4.2.0rhsc/xlator/mgmt/glusterd.so(gd_unlock_op_phase+0xae)[0x7f199b2c785e]
/usr/lib64/glusterfs/3.4.2.0rhsc/xlator/mgmt/glusterd.so(gd_sync_task_begin+0xdf)[0x7f199b2c847f]
/usr/lib64/glusterfs/3.4.2.0rhsc/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x3b)[0x7f199b2c87cb]
/usr/lib64/glusterfs/3.4.2.0rhsc/xlator/mgmt/glusterd.so(__glusterd_handle_status_volume+0x14a)[0x7f199b2554da]
/usr/lib64/glusterfs/3.4.2.0rhsc/xlator/mgmt/glusterd.so(glusterd_big_locked_handler+0x3f)[0x7f199b255a5f]
/usr/lib64/libglusterfs.so.0(synctask_wrap+0x12)[0x39d7849822]
/lib64/libc.so.6[0x35e0e43bb0]
---------
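
The two UUIDs in the "Unable to get lock" message above identify the contending nodes. A sketch of how to map them back to hosts, assuming default glusterd paths:

-----------
# UUID of the local node (compare with the "lock held by" UUID in the log).
grep UUID /var/lib/glusterd/glusterd.info

# UUIDs and hostnames of the other peers in the cluster.
gluster peer status
-----------

Note that the backtrace shows the crash in gd_unlock_op_phase, called from gd_sync_task_begin on the error path immediately after the lock acquisition failed.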

Comment 4 Kaushal 2013-10-10 12:10:01 UTC
This has been fixed by the patch https://code.engineering.redhat.com/gerrit/13857, submitted for bug 1016971.

Comment 5 Shruti Sampat 2013-10-18 09:30:30 UTC
Verified as fixed in glusterfs 3.4.0.35rhs. Executed multiple volume operations in parallel. glusterd crash not seen.

Comment 7 errata-xmlrpc 2013-11-27 15:41:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html

