Bug 762755 (GLUSTER-1023) - Server crashes on Gentoo when 2.0.2 clients try to connect
Summary: Server crashes on Gentoo when 2.0.2 clients try to connect
Keywords:
Status: CLOSED NOTABUG
Alias: GLUSTER-1023
Product: GlusterFS
Classification: Community
Component: core
Version: 3.0.4
Hardware: All
OS: Linux
Priority: urgent
Severity: high
Target Milestone: ---
Assignee: Anand Avati
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-06-24 03:40 UTC by Harshavardhana
Modified: 2015-09-01 23:04 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Harshavardhana 2010-06-24 03:40:11 UTC
----------

Logfile Trace

"
frame : type(2) op(SETVOLUME)

patchset: v3.0.4
signal received: 11
time of crash: 2010-06-23 13:34:05
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.4
/lib/libc.so.6[0x7ff4ad9de430]
/opt/glusterfs/3.0.4/lib/libglusterfs.so.0(dict_unserialize+0x10a)[0x7ff4ae11bf1a]
/opt/glusterfs/3.0.4/lib/glusterfs/3.0.4/xlator/protocol/server.so(mop_setvolume+0x9b)[0x7ff4ac95e29b]
/opt/glusterfs/3.0.4/lib/glusterfs/3.0.4/xlator/protocol/server.so(protocol_server_pollin+0x90)[0x7ff4ac95a030]
/opt/glusterfs/3.0.4/lib/glusterfs/3.0.4/xlator/protocol/server.so(notify+0xcb)[0x7ff4ac95a10b]
/opt/glusterfs/3.0.4/lib/libglusterfs.so.0(xlator_notify+0x43)[0x7ff4ae120de3]
/opt/glusterfs/3.0.4/lib/glusterfs/3.0.4/transport/socket.so(socket_event_handler+0xc8)[0x7ff4ac74b3c8]
/opt/glusterfs/3.0.4/lib/libglusterfs.so.0[0x7ff4ae13a74b]
/opt/glusterfs/3.0.4/sbin/glusterfsd(main+0x9b9)[0x4044d9]
/lib/libc.so.6(__libc_start_main+0xf4)[0x7ff4ad9cbb74]
/opt/glusterfs/3.0.4/sbin/glusterfsd[0x402ab9]

"


Backtrace

"
Program terminated with signal 11, Segmentation fault.
#0  dict_unserialize (orig_buf=<value optimized out>, size=<value optimized out>,
   fill=0x7fff1dfbc590) at dict.c:2417
2417                    memcpy (&hostord, buf, sizeof(hostord));
(gdb) bt
#0  dict_unserialize (orig_buf=<value optimized out>, size=<value optimized out>,
   fill=0x7fff1dfbc590) at dict.c:2417
#1  0x00007f4f143af29b in mop_setvolume (frame=0x620358, bound_xl=0x0, req_hdr=0x6201b0,
   req_hdrlen=<value optimized out>, iobuf=<value optimized out>) at server-protocol.c:5706
#2  0x00007f4f143ab030 in protocol_server_pollin (this=0x610c90, trans=0x61f970)
   at server-protocol.c:6727
#3  0x00007f4f143ab10b in notify (this=0x610c90, event=<value optimized out>, data=0x620190)
   at server-protocol.c:6783
#4  0x00007f4f15b71de3 in xlator_notify (xl=0x610c90, event=2, data=0x61f970) at xlator.c:924
#5  0x00007f4f1419c3c8 in socket_event_handler (fd=<value optimized out>, idx=1, data=0x61f970,
   poll_in=1, poll_out=0, poll_err=0) at socket.c:831
#6  0x00007f4f15b8b74b in event_dispatch_epoll (event_pool=0x60a8b0) at event.c:804
#7  0x00000000004044d9 in main (argc=7, argv=0x7fff1dfbd458) at glusterfsd.c:1425
(gdb) fr 0
#0  dict_unserialize (orig_buf=<value optimized out>, size=<value optimized out>,
   fill=0x7fff1dfbc590) at dict.c:2417
2417                    memcpy (&hostord, buf, sizeof(hostord));
(gdb) l
2412                                    DICT_DATA_HDR_KEY_LEN);
2413                            gf_log ("dict", GF_LOG_ERROR,
2414                                    "undersized buffer passsed");
2415                            goto out;
2416                    }
2417                    memcpy (&hostord, buf, sizeof(hostord));
2418                    keylen = ntoh32 (hostord);
2419                    buf += DICT_DATA_HDR_KEY_LEN;
2420
2421                    if ((buf + DICT_DATA_HDR_VAL_LEN) > (orig_buf + size)) {

"

Comment 1 Harshavardhana 2010-06-24 15:36:15 UTC
Confirmed a compiler issue at the customer site; resolving this bug since reproducing it is not a viable option at the moment, and catching a compiler-optimized, mangled buffer is not a priority.

