Bug 848239 - glusterfsd crashed
glusterfsd crashed
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterfs
Version: Unspecified
Hardware: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Raghavendra Bhat
QA Contact: Sachidananda Urs
Depends On: 825084
Reported: 2012-08-14 21:26 EDT by Vidya Sakar
Modified: 2013-09-23 18:32 EDT (History)
CC List: 6 users

See Also:
Fixed In Version: glusterfs-3.4.0qa5-1
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 825084
Last Closed: 2013-09-23 18:32:58 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Vidya Sakar 2012-08-14 21:26:57 EDT
+++ This bug was initially created as a clone of Bug #825084 +++

Created attachment 586753 [details]
Backtrace of core

Description of problem:

Core was generated by `/usr/local/sbin/glusterfsd -s localhost --volfile-id dstore.'.
Program terminated with signal 6, Aborted.
#0  0x0000003638632885 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.47.el6.x86_64 libgcc-4.4.6-3.el6.x86_64 openssl-1.0.0-20.el6_2.3.x86_64 zlib-1.2.3-27.el6.x86_64
(gdb) bt
#0  0x0000003638632885 in raise () from /lib64/libc.so.6
#1  0x0000003638634065 in abort () from /lib64/libc.so.6
#2  0x000000363862b9fe in __assert_fail_base () from /lib64/libc.so.6
#3  0x000000363862bac0 in __assert_fail () from /lib64/libc.so.6
#4  0x00007fb8e4ead75a in mq_fetch_child_size_and_contri (frame=0x7fb8e8b29738, cookie=0x7fb8e8cf761c, this=0x1fc0970, op_ret=0, op_errno=0, xdata=0x0)
    at marker-quota.c:1790
#5  0x00007fb8e9eb11dd in default_setxattr_cbk (frame=0x7fb8e8cf761c, cookie=0x7fb8e8d04f20, this=0x1fbf680, op_ret=0, op_errno=0, xdata=0x0) at defaults.c:284
#6  0x00007fb8e52d1f54 in iot_setxattr_cbk (frame=0x7fb8e8d04f20, cookie=0x7fb8e8cffa78, this=0x1fbe4b0, op_ret=0, op_errno=0, xdata=0x0) at io-threads.c:1627
#7  0x00007fb8e9eb11dd in default_setxattr_cbk (frame=0x7fb8e8cffa78, cookie=0x7fb8e8cf990c, this=0x1fbd2d0, op_ret=0, op_errno=0, xdata=0x0) at defaults.c:284
#8  0x00007fb8e57083c5 in posix_acl_setxattr_cbk (frame=0x7fb8e8cf990c, cookie=0x7fb8e8cfe0f0, this=0x1fbc120, op_ret=0, op_errno=0, xdata=0x0) at posix-acl.c:1802
#9  0x00007fb8e5922e12 in posix_setxattr (frame=0x7fb8e8cfe0f0, this=0x1fbad00, loc=0x7fb8e89cf674, dict=0x7fb8e8953140, flags=0, xdata=0x0) at posix.c:2417
#10 0x00007fb8e5708674 in posix_acl_setxattr (frame=0x7fb8e8cf990c, this=0x1fbc120, loc=0x7fb8e89cf674, xattr=0x7fb8e8953140, flags=0, xdata=0x0) at posix-acl.c:1821
#11 0x00007fb8e9eb94fd in default_setxattr (frame=0x7fb8e8cffa78, this=0x1fbd2d0, loc=0x7fb8e89cf674, dict=0x7fb8e8953140, flags=0, xdata=0x0) at defaults.c:889
#12 0x00007fb8e52d21b9 in iot_setxattr_wrapper (frame=0x7fb8e8d04f20, this=0x1fbe4b0, loc=0x7fb8e89cf674, dict=0x7fb8e8953140, flags=0, xdata=0x0)
    at io-threads.c:1636
#13 0x00007fb8e9ed348b in call_resume_wind (stub=0x7fb8e89cf634) at call-stub.c:2531
#14 0x00007fb8e9edaf6a in call_resume (stub=0x7fb8e89cf634) at call-stub.c:4151
#15 0x00007fb8e52c78d6 in iot_worker (data=0x1fd51a0) at io-threads.c:131
#16 0x0000003638a077f1 in start_thread () from /lib64/libpthread.so.0
#17 0x00000036386e570d in clone () from /lib64/libc.so.6

(gdb) f 4
#4  0x00007fb8e4ead75a in mq_fetch_child_size_and_contri (frame=0x7fb8e8b29738, cookie=0x7fb8e8cf761c, this=0x1fc0970, op_ret=0, op_errno=0, xdata=0x0)
    at marker-quota.c:1790
1790	        GF_UUID_ASSERT (local->loc.gfid);

Version-Release number of selected component (if applicable):

--- Additional comment from junaid@redhat.com on 2012-06-26 07:48:02 EDT ---

It's not reproducible on my setup.


Can you reproduce it on the new release?
Comment 2 Amar Tumballi 2012-08-23 02:45:31 EDT
This bug is not seen in the current master branch (which will be branched as RHS 2.1.0 soon). To consider it for fixing, we want to confirm the bug still exists on RHS servers. If it cannot be reproduced, we would like to close it.
Comment 3 Amar Tumballi 2012-10-18 01:28:45 EDT
Need information on whether this is still happening.
Comment 4 Amar Tumballi 2012-10-25 02:11:40 EDT
The upstream bug is ON_QA, as it was in the 'WORKSFORME' category. Moving this bug to MODIFIED.
Comment 5 Sachidananda Urs 2013-01-08 07:43:22 EST
There is no clear pattern of events that leads to this crash. I have been running random stress tests for about a day, but have not seen the crash.

Will be marking this as resolved for now.
Comment 7 Scott Haines 2013-09-23 18:32:58 EDT
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

