Iozone from one client, on a simple 2x2 setup:

iozone -c -e -r 4k -s 100m -l 16 -i 0

One or more of the glusterfsds will SEGV, often in sync. Here's the characteristic stack trace:

#0  0x0000003a00287e80 in __memcpy_sse2 () from /lib64/libc.so.6
#1  0x0000003a00315030 in xdrmem_getbytes () from /lib64/libc.so.6
#2  0x0000003a0031474a in xdr_opaque_internal () from /lib64/libc.so.6
#3  0x00007f4730e70540 in xdr_gfs3_write_req (xdrs=0x7fff3f6d1630, objp=0x7fff3f6d1660) at glusterfs3-xdr.c:734
#4  0x00007f4727de9290 in server_writev_vecsizer (state=1, readsize=0x7fff3f6d1718, base_addr=0x0, curr_addr=0x7f47305c208c "") at server3_1-fops.c:3466
#5  0x00007f472db35308 in __socket_read_vectored_request (this=0x1429640, vector_sizer=0x7f4727de919f <server_writev_vecsizer>) at socket.c:890
#6  0x00007f472db35ce4 in __socket_read_request (this=0x1429640) at socket.c:1016
#7  0x00007f472db37bba in __socket_read_frag (this=0x1429640) at socket.c:1443
#8  0x00007f472db381d3 in __socket_proto_state_machine (this=0x1429640, pollin=0x7fff3f6d1960) at socket.c:1583
#9  0x00007f472db38613 in socket_proto_state_machine (this=0x1429640, pollin=0x7fff3f6d1960) at socket.c:1693
#10 0x00007f472db3865b in socket_event_poll_in (this=0x1429640) at socket.c:1708
#11 0x00007f472db38c05 in socket_event_handler (fd=16, idx=7, data=0x1429640, poll_in=1, poll_out=0, poll_err=0) at socket.c:1826

Note the base_addr=0x0 in frame 4, which is the first sign something is awry.
Avati has already sent a patch for this at http://review.gluster.com/3524
The problem was in upstream; the patch was sent and accepted upstream.