Bug 767367 - SIGSEGV in ioc_writev_cbk during disconnect
Summary: SIGSEGV in ioc_writev_cbk during disconnect
Keywords:
Status: CLOSED DUPLICATE of bug 767359
Alias: None
Product: GlusterFS
Classification: Community
Component: io-cache
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Raghavendra G
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-12-13 21:43 UTC by Jeff Darcy
Modified: 2011-12-14 05:31 UTC
CC List: 1 user

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2011-12-14 05:31:16 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Jeff Darcy 2011-12-13 21:43:31 UTC
Description of problem:

During a simulated brick failure, the client crashes while trying to disconnect.

How reproducible:

Mount a simple single-brick volume.  Start I/O (I used iozone).  Simulate a brick failure by stopping the glusterfsd process.


Actual results:

Program received signal SIGSEGV, Segmentation fault.
0x00007f4f86901923 in ioc_writev_cbk (frame=0x7f4f89ba5fc0, cookie=0x7f4f89ba5dbc, 
    this=0x18e1d80, op_ret=-1, op_errno=116, prebuf=0x0, postbuf=0x0)
    at io-cache.c:1231
1231	        inode_ctx_get (local->fd->inode, this, &ioc_inode);
(gdb) bt
#0  0x00007f4f86901923 in ioc_writev_cbk (frame=0x7f4f89ba5fc0, 
    cookie=0x7f4f89ba5dbc, this=0x18e1d80, op_ret=-1, op_errno=116, prebuf=0x0, 
    postbuf=0x0) at io-cache.c:1231
#1  0x00007f4f86f4edf9 in client3_1_writev (frame=0x7f4f89ba5dbc, this=0x18de860, 
    data=0x7fff791fab40) at client3_1-fops.c:3587
#2  0x00007f4f86f38d0f in client_writev (frame=0x7f4f89ba5dbc, this=0x18de860, 
    fd=0x7f4f8598b30c, vector=0x18ebd50, count=1, off=12713984, iobref=0x18ec020)
    at client.c:820
#3  0x00007f4f86d1e18e in wb_sync (frame=0x7f4f8992b958, file=0x18ea330, 
    winds=0x7fff791fadd0) at write-behind.c:553
#4  0x00007f4f86d24754 in wb_do_ops (frame=0x7f4f8992b958, file=0x18ea330, 
    winds=0x7fff791fadd0, unwinds=0x7fff791fadc0, other_requests=0x7fff791fadb0)
    at write-behind.c:1864
#5  0x00007f4f86d24fc8 in wb_process_queue (frame=0x7f4f8992b958, file=0x18ea330)
    at write-behind.c:2053
#6  0x00007f4f86d1d8bf in wb_sync_cbk (frame=0x7f4f8992b958, cookie=0x7f4f89ba4c44, 
    this=0x18dfb40, op_ret=-1, op_errno=107, prebuf=0x7fff791faf40, 
    postbuf=0x7fff791faed0) at write-behind.c:410
#7  0x00007f4f86f430e4 in client3_1_writev_cbk (req=0x7f4f86053df4, 
    iov=0x7fff791fb110, count=1, myframe=0x7f4f89ba4c44) at client3_1-fops.c:692
#8  0x00007f4f8ad419e1 in saved_frames_unwind (saved_frames=0x18da0f0)
    at rpc-clnt.c:385
#9  0x00007f4f8ad41a90 in saved_frames_destroy (frames=0x18da0f0) at rpc-clnt.c:403
#10 0x00007f4f8ad42005 in rpc_clnt_connection_cleanup (conn=0x18e7d40)
    at rpc-clnt.c:559
#11 0x00007f4f8ad42ae6 in rpc_clnt_notify (trans=0x18e7e60, mydata=0x18e7d40, 
    event=RPC_TRANSPORT_DISCONNECT, data=0x18e7e60) at rpc-clnt.c:863
#12 0x00007f4f8ad3ed5c in rpc_transport_notify (this=0x18e7e60, 
    event=RPC_TRANSPORT_DISCONNECT, data=0x18e7e60) at rpc-transport.c:498
#13 0x00007f4f87d86213 in socket_event_poll_err (this=0x18e7e60) at socket.c:694
#14 0x00007f4f87d8a849 in socket_event_handler (fd=7, idx=1, data=0x18e7e60, 
    poll_in=1, poll_out=0, poll_err=16) at socket.c:1797
#15 0x00007f4f8af936c4 in event_dispatch_epoll_handler (event_pool=0x18d4d90, 
    events=0x18d9040, i=0) at event.c:794
#16 0x00007f4f8af938e7 in event_dispatch_epoll (event_pool=0x18d4d90) at event.c:856
#17 0x00007f4f8af93c72 in event_dispatch (event_pool=0x18d4d90) at event.c:956
#18 0x0000000000407a5e in main ()

Expected results:

I/O stoppage and/or errors on client, followed by reconnect and normal operation when the glusterfsd process is resumed.

Additional info:

This was found while testing the fix for #767359, and appears unrelated.
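
For reference, a minimal sketch of the kind of guard that would avoid the fault at io-cache.c:1231, assuming frame->local (or local->fd) is not valid by the time the frame is unwound with op_ret = -1 on the disconnect path. This is only an illustration against the quoted line, not the actual fix; the remainder of the callback body is elided.

int32_t
ioc_writev_cbk (call_frame_t *frame, void *cookie, xlator_t *this,
                int32_t op_ret, int32_t op_errno,
                struct iatt *prebuf, struct iatt *postbuf)
{
        ioc_local_t *local     = frame->local;
        uint64_t     ioc_inode = 0;

        /* On disconnect the in-flight frames are unwound with op_ret == -1
         * and NULL prebuf/postbuf, and local/local->fd may not be usable,
         * so guard before dereferencing local->fd->inode (the line that
         * faults in the backtrace above). */
        if (local != NULL && local->fd != NULL && local->fd->inode != NULL)
                inode_ctx_get (local->fd->inode, this, &ioc_inode);

        /* ... remainder of the callback unchanged ... */

        STACK_UNWIND_STRICT (writev, frame, op_ret, op_errno, prebuf, postbuf);
        return 0;
}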

Comment 1 Jeff Darcy 2011-12-14 05:31:16 UTC

*** This bug has been marked as a duplicate of bug 767359 ***

