Bug 785440 - Segfault on io-cache-open
Summary: Segfault on io-cache-open
Keywords:
Status: CLOSED DUPLICATE of bug 765191
Alias: None
Product: GlusterFS
Classification: Community
Component: io-cache
Version: 3.2.5
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-01-28 23:43 UTC by Joseph Brower
Modified: 2012-03-21 02:04 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-03-21 02:04:51 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Joseph Brower 2012-01-28 23:43:30 UTC
Description of problem: The FUSE client seems to segfault occasionally. We're unsure why.


Version-Release number of selected component (if applicable):


How reproducible:
It happens "on its own". We notice it once we start getting the "Transport endpoint is not connected" error.


Additional info: Here is the crash dump:
pending frames:
frame : type(1) op(OPEN)
frame : type(1) op(OPEN)
frame : type(1) op(OPEN)
frame : type(1) op(OPEN)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2012-01-28 17:04:06
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.2.5
/lib/libc.so.6(+0x33af0)[0x7fc924767af0]
/usr/lib/glusterfs/3.2.5/xlator/performance/io-cache.so(ioc_open_cbk+0x99)[0x7fc9215f28d9]
/usr/lib/glusterfs/3.2.5/xlator/performance/read-ahead.so(ra_open_cbk+0x1ac)[0x7fc9218026ac]
/usr/lib/glusterfs/3.2.5/xlator/performance/write-behind.so(wb_open_cbk+0x127)[0x7fc921a0f197]
/usr/lib/glusterfs/3.2.5/xlator/cluster/replicate.so(afr_open_cbk+0x25e)[0x7fc921c3c3fe]
/usr/lib/glusterfs/3.2.5/xlator/protocol/client.so(client3_1_open_cbk+0x229)[0x7fc921e93b59]
/usr/lib/libgfrpc.so.0(rpc_clnt_handle_reply+0xa5)[0x7fc9250fcc85]
/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0xc9)[0x7fc9250fced9]
/usr/lib/libgfrpc.so.0(rpc_transport_notify+0x28)[0x7fc9250f7d98]
/usr/lib/glusterfs/3.2.5/rpc-transport/socket.so(socket_event_poll_in+0x34)[0x7fc922ac8044]
/usr/lib/glusterfs/3.2.5/rpc-transport/socket.so(socket_event_handler+0xc7)[0x7fc922ac8127]
/usr/lib/libglusterfs.so.0(+0x3db14)[0x7fc925341b14]
/usr/sbin/glusterfs(main+0x2ae)[0x4063ce]
/lib/libc.so.6(__libc_start_main+0xfd)[0x7fc924752c4d]
/usr/sbin/glusterfs[0x403be9]
---------

It only seems to happen on one of our volumes, "new_writable", which is used heavily for serving static web assets. It's a distributed replicated volume.
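In case it helps with triage: if debug symbols matching the 3.2.5 build are installed, the faulting io-cache frame from the backtrace above can be resolved to a source line with gdb. A rough sketch (the .so path and symbol+offset are copied from the trace; adjust to your install):

# Sketch only: resolve the crashing frame to a source line
# (requires debuginfo for the exact glusterfs 3.2.5 build).
gdb -batch -ex 'info line *(ioc_open_cbk+0x99)' \
    /usr/lib/glusterfs/3.2.5/xlator/performance/io-cache.so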

Comment 1 Joe Julian 2012-01-29 00:05:33 UTC
Actually, it's replicate only.

Volume Name: new_writable
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.60.5.19:/export/new_writable
Brick2: 10.60.5.22:/export/new_writable

glusterd appears to have only been running on one of the servers.
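If it's useful, a quick sketch of how to confirm that on each brick server (10.60.5.19 and 10.60.5.22 per the volume info above; the init-script path is an assumption about the distro):

# Run on each brick server; assumes a SysV-style init script.
/etc/init.d/glusterd status      # is the management daemon running here?
gluster peer status              # does this server see its peer as Connected?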

Comment 2 Amar Tumballi 2012-03-12 09:37:46 UTC
Joe, can you try with 'gluster volume set new_writable io-cache off'?
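For reference, on 3.2.x the full option key is presumably performance.io-cache, and the change can be confirmed from the volume info output. A sketch:

gluster volume set new_writable performance.io-cache off
gluster volume info new_writable    # "Options Reconfigured:" should list performance.io-cache: off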

Comment 3 Raghavendra G 2012-03-20 02:47:31 UTC
(In reply to comment #1)
Hi Joe,

Do you have a set of test cases which can reproduce this bug? Is it possible for you to get a valgrind report of the glusterfs client? You can generate valgrind reports using:

valgrind --log-file=gluster-client.valgrind.log --leak-check=full glusterfs --volfile-id=new_writable --volfile-server=<volume-file-server> -N <mount-point>

regards,
Raghavendra.
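As a concrete example of the invocation above (the mount point is a placeholder here, and 10.60.5.19 is just one of the brick hosts from comment 1 standing in for <volume-file-server>):

# Unmount the existing client first, then run the client under valgrind.
# -N keeps glusterfs in the foreground so valgrind can follow it.
umount /mnt/new_writable
valgrind --log-file=gluster-client.valgrind.log --leak-check=full \
    glusterfs --volfile-id=new_writable --volfile-server=10.60.5.19 -N /mnt/new_writable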

Comment 4 Joe Julian 2012-03-20 02:59:42 UTC
(In reply to comment #3)

Other Joe... I only added missing information that I had from our IRC conversation.

Comment 5 Raghavendra G 2012-03-21 02:03:54 UTC
(In reply to comment #4)
oh.. sorry :)..

Joseph,
(Just copy-pasting my previous query:)
Do you have a set of test cases which can reproduce this bug? Is it possible for you to get a valgrind report of the glusterfs client? You can generate valgrind reports using:

valgrind --log-file=gluster-client.valgrind.log --leak-check=full glusterfs --volfile-id=new_writable --volfile-server=<volume-file-server> -N <mount-point>

regards,
Raghavendra.

Comment 6 Raghavendra G 2012-03-21 02:04:51 UTC

*** This bug has been marked as a duplicate of bug 765191 ***

