Bug 761876 (GLUSTER-144) - Crash in client-protocol
Summary: Crash in client-protocol
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-144
Product: GlusterFS
Classification: Community
Component: protocol
Version: 2.0.3
Hardware: All
OS: Linux
Priority: low
Severity: high
Target Milestone: ---
Assignee: Raghavendra Bhat
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-07-15 15:27 UTC by Vikas Gorur
Modified: 2010-01-28 11:14 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Vikas Gorur 2009-07-15 15:27:28 UTC
Reported by Jeff Evans on the mailing list <jeffe>:


RHEL5.3
Fuse 2.7.4
Kernel 2.6.18-128.1.6.el5xen

Vol spec is basic AFR over GigE:
--------------------------------

volume u0-2
    type protocol/client
    option transport-type tcp/client
    option remote-host 192.168.200.2
    option remote-subvolume u0
end-volume

volume u0-1
    type protocol/client
    option transport-type tcp/client
    option remote-host 192.168.200.1
    option remote-subvolume u0
end-volume

volume afr
    type cluster/afr
    option read-subvolume u0-2
    subvolumes u0-1 u0-2
end-volume
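
The client side is mounted by pointing glusterfs at this volfile. The
invocation below mirrors the command line visible in the core dump further
down; the mount point is an assumed placeholder:

/sbin/glusterfs --log-level=NORMAL --disable-direct-io-mode \
    --volfile=/etc/glusterfs/glusterfs-client.vol /mnt/glusterfs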

Server spec:
------------

volume export
  type storage/posix
  option directory /export/u0
end-volume

volume posix-locks
  type features/posix-locks
  subvolumes export
end-volume

volume io-thr
  type performance/io-threads
  subvolumes posix-locks
end-volume

volume u0
  type performance/read-ahead
  subvolumes io-thr
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option auth.addr.u0.allow 192.*
  subvolumes u0
end-volume
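
The bricks are started by pointing glusterfsd at this server volfile; in
2.0.x it takes the usual --volfile/--log-level options. The volfile path
below is an assumption:

/sbin/glusterfsd --log-level=NORMAL \
    --volfile=/etc/glusterfs/glusterfs-server.vol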

Core dump BT gives:
-------------------

Core was generated by `/sbin/glusterfs --log-level=NORMAL --disable-direct-io-mode --volfile=/etc/glusterfs/glusterfs-client.vol'.
Program terminated with signal 11, Segmentation fault.
[New process 3663]
[New process 3668]
[New process 3664]
#0  client_readv_cbk (frame=0x2aaaac00bb10, hdr=0x9629d40,
    hdrlen=<value optimized out>, iobuf=0x0) at client-protocol.c:4319
4319                            vector.iov_base = iobuf->ptr;
(gdb) bt
#0  client_readv_cbk (frame=0x2aaaac00bb10, hdr=0x9629d40,
    hdrlen=<value optimized out>, iobuf=0x0) at client-protocol.c:4319
#1  0x00002b1089e807ca in protocol_client_pollin (this=0x961cc40,
    trans=0x9621cc0) at client-protocol.c:6169
#2  0x00002b1089e87b42 in notify (this=0xb2ee594a, event=2, data=0x9621cc0)
    at client-protocol.c:6213
#3  0x00002aaaaaaafcf3 in socket_event_handler (fd=<value optimized out>,
    idx=2, data=0x9621cc0, poll_in=1, poll_out=0, poll_err=0) at socket.c:814
#4  0x00002b10893f5135 in event_dispatch_epoll (event_pool=0x96184b0)
    at event.c:804
#5  0x00000000004039ea in main (argc=5, argv=0x7fff216f9158)
    at glusterfsd.c:1226

Any ideas?

The easiest way I've found to reproduce this is by attempting to build
software within the mounted glusterfs.
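For example, something along these lines inside the mount (paths and
tarball purely illustrative):

cd /mnt/glusterfs/build
tar xzf coreutils-7.4.tar.gz
cd coreutils-7.4 && ./configure && make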

(gdb) frame 0
#0  client_readv_cbk (frame=0x2aaaac00bb10, hdr=0x9629d40,
    hdrlen=<value optimized out>, iobuf=0x0) at client-protocol.c:4319
4319                            vector.iov_base = iobuf->ptr;
(gdb) print op_ret
$1 = 4096
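
So op_ret claims a 4096-byte read while iobuf is NULL, and the
unconditional iobuf->ptr dereference at client-protocol.c:4319 faults. A
minimal sketch of a defensive check at that spot (not the actual patch;
variable names are assumed from the backtrace and common glusterfs
conventions) would be:

        if ((op_ret > 0) && (iobuf == NULL)) {
                /* the reply header claims payload bytes, but no payload
                   buffer reached the callback -- fail the read with EIO
                   instead of dereferencing a NULL pointer */
                gf_log (frame->this->name, GF_LOG_ERROR,
                        "readv returned %d bytes but payload iobuf is NULL",
                        op_ret);
                op_ret   = -1;
                op_errno = EIO;
        } else if (op_ret > 0) {
                vector.iov_base = iobuf->ptr;
                vector.iov_len  = op_ret;
        }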

