Bug 765191 (GLUSTER-3459) - Crash in io-cache
Summary: Crash in io-cache
Keywords:
Status: CLOSED WORKSFORME
Alias: GLUSTER-3459
Product: GlusterFS
Classification: Community
Component: io-cache
Version: 3.2.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Duplicates: 785440
Depends On:
Blocks:
 
Reported: 2011-08-22 07:09 UTC by Vijay Bellur
Modified: 2012-10-11 09:56 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-10-11 09:56:22 UTC
Regression: ---
Mount Type: fuse
Documentation: ---
CRM:
Verified Versions:


Attachments:

Description Vijay Bellur 2011-08-22 07:09:15 UTC
Found the following crash while doing untar and rm -rf in parallel from 2 clients:

#0  0x00007f38851ea914 in ioc_open_cbk (frame=0x7f3887805b08, cookie=0x7f3887805544, this=0x1e9db30, op_ret=0, op_errno=0, fd=0x7f388431a078)
    at ../../../../../xlators/performance/io-cache/src/io-cache.c:553
#1  0x00007f38853fca9a in ra_open_cbk (frame=0x7f3887805544, cookie=0x7f38878055e8, this=0x1e9ca00, op_ret=0, op_errno=0, fd=0x7f388431a078)
    at ../../../../../xlators/performance/read-ahead/src/read-ahead.c:119
#2  0x00007f388560fd06 in wb_open_cbk (frame=0x7f38878055e8, cookie=0x7f3887805730, this=0x1e9b8b0, op_ret=0, op_errno=0, fd=0x7f388431a078)
    at ../../../../../xlators/performance/write-behind/src/write-behind.c:1390
#3  0x00007f3888ef7d71 in default_open_cbk (frame=0x7f3887805730, cookie=0x7f38878057d4, this=0x1e9a650, op_ret=0, op_errno=0, fd=0x7f388431a078)
    at ../../../libglusterfs/src/defaults.c:192
#4  0x00007f3885a45fc2 in client3_1_open_cbk (req=0x7f3884945024, iov=0x7f3884945064, count=1, myframe=0x7f38878057d4)
    at ../../../../../xlators/protocol/client/src/client3_1-fops.c:368
#5  0x00007f3888cc9a5f in rpc_clnt_handle_reply (clnt=0x1ea3640, pollin=0x7f38780011d0) at ../../../../rpc/rpc-lib/src/rpc-clnt.c:736
#6  0x00007f3888cc9dbe in rpc_clnt_notify (trans=0x1ea37b0, mydata=0x1ea3670, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f38780011d0)
    at ../../../../rpc/rpc-lib/src/rpc-clnt.c:849
#7  0x00007f3888cc6016 in rpc_transport_notify (this=0x1ea37b0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x7f38780011d0)
    at ../../../../rpc/rpc-lib/src/rpc-transport.c:918
#8  0x00007f3886684174 in socket_event_poll_in (this=0x1ea37b0) at ../../../../../rpc/rpc-transport/socket/src/socket.c:1647
#9  0x00007f38866846f8 in socket_event_handler (fd=7, idx=1, data=0x1ea37b0, poll_in=1, poll_out=0, poll_err=0)
    at ../../../../../rpc/rpc-transport/socket/src/socket.c:1762
#10 0x00007f3888f22876 in event_dispatch_epoll_handler (event_pool=0x1e8f350, events=0x1e93a40, i=0) at ../../../libglusterfs/src/event.c:794
#11 0x00007f3888f22a99 in event_dispatch_epoll (event_pool=0x1e8f350) at ../../../libglusterfs/src/event.c:856
#12 0x00007f3888f22e24 in event_dispatch (event_pool=0x1e8f350) at ../../../libglusterfs/src/event.c:956
#13 0x0000000000407722 in main (argc=5, argv=0x7fff8137a3b8) at ../../../glusterfsd/src/glusterfsd.c:1504

(gdb) f 0
#0  0x00007f38851ea914 in ioc_open_cbk (frame=0x7f3887805b08, cookie=0x7f3887805544, this=0x1e9db30, op_ret=0, op_errno=0, fd=0x7f388431a078)
    at ../../../../../xlators/performance/io-cache/src/io-cache.c:553
553	                ioc_table_lock (ioc_inode->table);
(gdb) p ioc_inode
$1 = (ioc_inode_t *) 0x0
(gdb)
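
To illustrate the failure mode, here is a minimal C sketch of the crashing pattern. The types, macros, and the guard shown are simplified assumptions for illustration, not the actual io-cache source: the gdb output above shows ioc_inode == NULL when ioc_open_cbk() reaches io-cache.c:553, so ioc_table_lock (ioc_inode->table) dereferences a NULL pointer. A NULL check of the kind sketched here would avoid the crash, at the cost of skipping cache bookkeeping for that fd.

#include <stddef.h>

/* Simplified stand-ins for the real io-cache types and locking macros. */
typedef struct ioc_table { int lock; } ioc_table_t;
typedef struct ioc_inode { ioc_table_t *table; } ioc_inode_t;

#define ioc_table_lock(table)   ((void) (table))
#define ioc_table_unlock(table) ((void) (table))

int
ioc_open_cbk_sketch (ioc_inode_t *ioc_inode, int op_ret)
{
        if (op_ret != 0)
                return op_ret;

        /* Crash site: in the backtrace above, ioc_inode is NULL here,
         * so the ioc_inode->table dereference faults. */
        if (ioc_inode == NULL)
                return 0;   /* hypothetical guard: skip cache bookkeeping */

        ioc_table_lock (ioc_inode->table);
        /* ... update the inode's position in the table's LRU list ... */
        ioc_table_unlock (ioc_inode->table);

        return 0;
}

Whether the right fix is such a guard, or ensuring the inode context is always populated before the open callback fires, would need to be confirmed against the race between the untar and the parallel rm -rf.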

Comment 1 Amar Tumballi 2012-02-28 03:29:06 UTC
Need to check whether this is happening on master; a fix is needed ASAP.

Comment 2 Raghavendra G 2012-03-21 02:04:51 UTC
*** Bug 785440 has been marked as a duplicate of this bug. ***

Comment 3 Amar Tumballi 2012-05-08 11:12:03 UTC
Need an update on whether this happens on master.

Comment 4 Amar Tumballi 2012-05-11 07:08:01 UTC
Not able to reproduce this crash on the master branch. Hence taking it off the 3.3.0beta milestone.

Comment 5 Amar Tumballi 2012-10-11 09:56:22 UTC
Not reproduced in a long time now; will re-open if seen again.
