Bug 1016971 - glusterd: core dumped when quota enabled
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: x86_64 Linux
Priority: high  Severity: urgent
Assigned To: krishnan parthasarathi
QA Contact: Saurabh
Keywords: ZStream
Depends On:
Blocks:
Reported: 2013-10-09 01:09 EDT by Saurabh
Modified: 2016-01-19 01:15 EST
CC List: 5 users

See Also:
Fixed In Version: glusterfs-3.4.0.35rhs
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-27 10:41:39 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments
core file (322.97 KB, application/x-xz), 2013-10-09 01:09 EDT, Saurabh
sosreport (11.38 MB, application/x-xz), 2013-10-09 01:14 EDT, Saurabh

Description Saurabh 2013-10-09 01:09:13 EDT
Created attachment 809651 [details]
core file

Description of problem:

I was running a script for a quota test. The script creates 10000 directories, sets a quota limit on each, and then creates data in them.

While this test was in progress, I executed:
gluster volume quota $VOLNAME list

NOTE: the script kept running overnight, and the list command above was executed in the morning.

After that, glusterd crashed.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.34rhs-1.el6rhs.x86_64

How reproducible:
Hit once so far.

Steps to Reproduce:
1. Execute "gluster volume quota $VOLNAME list" while another gluster command may be setting quota limits on directories.
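For reference, a rough sketch of the kind of test script described above. This is hypothetical; the actual script was not pasted into the report, and the volume name, mount point, limit size, and file sizes here are all assumptions:

```shell
#!/bin/sh
# Hypothetical reproducer sketch based on the bug description.
# Assumptions: volume "testvol" mounted at /mnt/testvol.
VOLNAME=testvol
MNT=/mnt/$VOLNAME

for i in $(seq 1 10000); do
    mkdir -p "$MNT/dir$i"
    # set a quota limit on the directory as it is created
    gluster volume quota "$VOLNAME" limit-usage "/dir$i" 100MB
    # then create data inside it
    dd if=/dev/zero of="$MNT/dir$i/file" bs=1M count=10
done
```

While this loop runs, issue `gluster volume quota $VOLNAME list` from another shell; per the report, glusterd crashed when the list command raced with the limit-setting transactions.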

Actual results:

[2013-10-09 09:38:46.200538] E [glusterd-utils.c:149:glusterd_lock] 0-management: Unable to get lock for uuid: ee116b10-466c-45b4-8552-77a6ce289179, lock held by: ee116b10-466c-45b4-8552-77a6ce289179
[2013-10-09 09:38:46.200606] E [glusterd-syncop.c:1202:gd_sync_task_begin] 0-management: Unable to acquire lock
pending frames:
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)
frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2013-10-09 09:38:46
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.4.0.34rhs
/lib64/libc.so.6[0x3452832960]
/usr/lib64/glusterfs/3.4.0.34rhs/xlator/mgmt/glusterd.so(gd_unlock_op_phase+0xae)[0x7fea9019cfce]
/usr/lib64/glusterfs/3.4.0.34rhs/xlator/mgmt/glusterd.so(gd_sync_task_begin+0xdf)[0x7fea9019dbef]
/usr/lib64/glusterfs/3.4.0.34rhs/xlator/mgmt/glusterd.so(glusterd_op_begin_synctask+0x3b)[0x7fea9019df0b]
/usr/lib64/glusterfs/3.4.0.34rhs/xlator/mgmt/glusterd.so(__glusterd_handle_quota+0x22c)[0x7fea9017beec]
/usr/lib64/glusterfs/3.4.0.34rhs/xlator/mgmt/glusterd.so(glusterd_big_locked_handler+0x3f)[0x7fea9012ba7f]
/usr/lib64/libglusterfs.so.0(synctask_wrap+0x12)[0x7fea93beaa72]
/lib64/libc.so.6[0x3452843bb0]


Core was generated by `/usr/sbin/glusterd --pid-file=/var/run/glusterd.pid'.
Program terminated with signal 11, Segmentation fault.
#0  gd_unlock_op_phase (peers=0x24b4920, op=<value optimized out>, op_ret=-1, req=0x7fea8fddab08, op_ctx=0x7fea923cb6ac, 
    op_errstr=0x7fea88035eb0 "Another transaction is in progress. Please try again after sometime.", npeers=0, is_locked=_gf_false) at glusterd-syncop.c:1085
1085	        if (conf->pending_quorum_action)
Missing separate debuginfos, use: debuginfo-install device-mapper-event-libs-1.02.77-9.el6.x86_64 device-mapper-libs-1.02.77-9.el6.x86_64 glibc-2.12-1.107.el6_4.4.x86_64 keyutils-libs-1.4-4.el6.x86_64 krb5-libs-1.10.3-10.el6_4.4.x86_64 libcom_err-1.41.12-14.el6_4.2.x86_64 libgcc-4.4.7-3.el6.x86_64 libselinux-2.0.94-5.3.el6_4.1.x86_64 libsepol-2.0.41-4.el6.x86_64 libudev-147-2.46.el6.x86_64 libxml2-2.7.6-12.el6_4.1.x86_64 lvm2-libs-2.02.98-9.el6.x86_64 openssl-1.0.0-27.el6_4.2.x86_64 zlib-1.2.3-29.el6.x86_64
(gdb) bt
#0  gd_unlock_op_phase (peers=0x24b4920, op=<value optimized out>, op_ret=-1, req=0x7fea8fddab08, op_ctx=0x7fea923cb6ac, 
    op_errstr=0x7fea88035eb0 "Another transaction is in progress. Please try again after sometime.", npeers=0, is_locked=_gf_false) at glusterd-syncop.c:1085
#1  0x00007fea9019dbef in gd_sync_task_begin (op_ctx=0x7fea923cb6ac, req=0x7fea8fddab08) at glusterd-syncop.c:1246
#2  0x00007fea9019df0b in glusterd_op_begin_synctask (req=0x7fea8fddab08, op=<value optimized out>, dict=0x7fea923cb6ac) at glusterd-syncop.c:1273
#3  0x00007fea9017beec in __glusterd_handle_quota (req=0x7fea8fddab08) at glusterd-quota.c:116
#4  0x00007fea9012ba7f in glusterd_big_locked_handler (req=0x7fea8fddab08, actor_fn=0x7fea9017bcc0 <__glusterd_handle_quota>) at glusterd-handler.c:77
#5  0x00007fea93beaa72 in synctask_wrap (old_task=<value optimized out>) at syncop.c:132
#6  0x0000003452843bb0 in ?? () from /lib64/libc.so.6
#7  0x0000000000000000 in ?? ()


Expected results:
glusterd should not crash; the list command should either succeed or fail gracefully.

Additional info:
Comment 2 Saurabh 2013-10-09 01:14:57 EDT
Created attachment 809653 [details]
sosreport
Comment 5 errata-xmlrpc 2013-11-27 10:41:39 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html
