1. mount glusterfs volume
2. touch one file
3. crash

# touch 1
touch: closing `1': Software caused connection abort
# rm 1
rm: cannot lstat `1': Transport endpoint is not connected

------------------------------------------------------------------
pending frames:
frame : type(1) op(SETATTR)
frame : type(1) op(SETATTR)
frame : type(1) op(SETATTR)
frame : type(1) op(OPEN)
frame : type(0) op(0)

patchset: git://git.gluster.com/glusterfs.git
signal received: 11
time of crash: 2014-11-19 07:15:03
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.5.2
/home/nemo/glusterfs352/sbin/glusterfs(glusterfsd_print_trace+0x1a)[0x804b6fa]
[0xffffe500]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/performance/write-behind.so(wb_setattr+0x269)[0xf41bf559]
/home/nemo/glusterfs352/lib/libglusterfs.so.0(default_setattr+0x75)[0xf7f08ce5]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/performance/io-cache.so(ioc_setattr+0x15a)[0xf41a460a]
/home/nemo/glusterfs352/lib/libglusterfs.so.0(default_setattr+0x75)[0xf7f08ce5]
/home/nemo/glusterfs352/lib/libglusterfs.so.0(default_setattr+0x75)[0xf7f08ce5]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/performance/md-cache.so(mdc_setattr+0x131)[0xf418d781]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/debug/io-stats.so(io_stats_setattr+0x143)[0xf417a3a3]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/mount/fuse.so(fuse_setattr_resume+0x2a7)[0xf6b1eb77]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/mount/fuse.so[0xf6b0cfdf]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/mount/fuse.so[0xf6b0ce74]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/mount/fuse.so[0xf6b0d021]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/mount/fuse.so(fuse_resolve_continue+0x40)[0xf6b0d0b0]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/mount/fuse.so(fuse_resolve_inode+0x48)[0xf6b0d1e8]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/mount/fuse.so[0xf6b0cea0]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/mount/fuse.so[0xf6b0d003]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/mount/fuse.so(fuse_resolve_and_resume+0x32)[0xf6b0d062]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/mount/fuse.so[0xf6b10898]
/home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/mount/fuse.so[0xf6b1fcb6]
/lib/libpthread.so.0[0x994852]
/lib/libc.so.6(clone+0x5e)[0x8fe84e]
---------
Program terminated with signal 11, Segmentation fault.
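A rough sketch of the reproduction, assuming a native FUSE mount; the server name (gfs-server), volume name (testvol) and mount point (/mnt/gluster) are placeholders, not details taken from this report.

# mount the volume with the native FUSE client (names are placeholders)
mount -t glusterfs gfs-server:/testvol /mnt/gluster
# create a file; on the affected client the crash happens while touch
# updates the timestamps, i.e. during the SETATTR fop seen in the trace
touch /mnt/gluster/1
# once the client process has died, any further access fails with
# "Transport endpoint is not connected"
rm /mnt/gluster/1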
#0 0xf42415ac in __glusterfs_this_location@plt () from /home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/cluster/distribute.so
(gdb) bt
#0 0xf42415ac in __glusterfs_this_location@plt () from /home/nemo/glusterfs352/lib/glusterfs/3.5.2/xlator/cluster/distribute.so
#1 0xf426ce76 in dht_setattr (frame=0xf723bcd8, this=0x8a67f70, loc=0xf23200a8, stbuf=0xf2320250, valid=48, xdata=0x0) at dht-inode-write.c:919
#2 0xf4237559 in wb_setattr (frame=0xf7235c18, this=0x8a68df0, loc=0xf23200a8, stbuf=0xf2320250, valid=48, xdata=0x0) at write-behind.c:1796
#3 0xf7f80ce5 in default_setattr (frame=0xf7235c18, this=0x8a69c80, loc=0xf23200a8, stbuf=0xf2320250, valid=48, xdata=0x0) at defaults.c:1285
#4 0xf421c60a in ioc_setattr (frame=0xf723fe18, this=0x8a6aaf0, loc=0xf23200a8, stbuf=0xf2320250, valid=48, xdata=0x0) at io-cache.c:168
#5 0xf7f80ce5 in default_setattr (frame=0xf723fe18, this=0x8a6b938, loc=0xf23200a8, stbuf=0xf2320250, valid=48, xdata=0x0) at defaults.c:1285
#6 0xf7f80ce5 in default_setattr (frame=0xf723fe18, this=0x8a6c7b8, loc=0xf23200a8, stbuf=0xf2320250, valid=48, xdata=0x0) at defaults.c:1285
#7 0xf4205781 in mdc_setattr (frame=0xf72365d8, this=0x8a6d608, loc=0xf23200a8, stbuf=0xf2320250, valid=48, xdata=0x0) at md-cache.c:1551
#8 0xf41f23a3 in io_stats_setattr (frame=0xf723ced8, this=0x8a6e430, loc=0xf23200a8, stbuf=0xf2320250, valid=48, xdata=0x0) at io-stats.c:2039
#9 0xf6b96b77 in fuse_setattr_resume (state=0xf2320098) at fuse-bridge.c:1118
#10 0xf6b84fdf in fuse_resolve_done (state=0x0) at fuse-resolve.c:663
#11 fuse_resolve_all (state=0x0) at fuse-resolve.c:692
#12 0xf6b84e74 in fuse_resolve (state=0xf2320098) at fuse-resolve.c:649
#13 0xf6b85021 in fuse_resolve_all (state=0x0) at fuse-resolve.c:688
#14 0xf6b850b0 in fuse_resolve_continue (state=0xf2320098) at fuse-resolve.c:708
#15 0xf6b851e8 in fuse_resolve_inode (state=0xf2320098) at fuse-resolve.c:361
#16 0xf6b84ea0 in fuse_resolve (state=0xf2320098) at fuse-resolve.c:646
#17 0xf6b85003 in fuse_resolve_all (state=0x0) at fuse-resolve.c:681
#18 0xf6b85062 in fuse_resolve_and_resume (state=0xf2320098, fn=0xf6b968d0 <fuse_setattr_resume>) at fuse-resolve.c:721
#19 0xf6b88898 in fuse_setattr (this=0x8a4a088, finh=0xf2255638, msg=0xf2255660) at fuse-bridge.c:1189
#20 0xf6b97cb6 in fuse_thread_proc (data=0x8a4a088) at fuse-bridge.c:4797
#21 0x00994852 in start_thread () from /lib/libpthread.so.0
#22 0x008fe84e in clone () from /lib/libc.so.6
(gdb)
------------------------------------------------------------------
(gdb) disassemble 0xf42415ac
Dump of assembler code for function __glusterfs_this_location@plt:
0xf42415ac <__glusterfs_this_location@plt+0>:   jmp    *0x3ec(%ebx)
0xf42415b2 <__glusterfs_this_location@plt+6>:   push   $0x7c0
0xf42415b7 <__glusterfs_this_location@plt+11>:  jmp    0xf424061c
End of assembler dump.
(gdb)
------------------------------------------------------------------
(gdb) info all-registers
eax            0x0        0
ecx            0x0        0
edx            0x8a670f8  145125624
ebx            0xffffffff -1
esp            0xf2face4c 0xf2face4c
ebp            0xf2facea8 0xf2facea8
esi            0xf72316b8 -148695368
edi            0x8a670f8  145125624
eip            0xf42415ac 0xf42415ac <__glusterfs_this_location@plt>
eflags         0x200246   [ PF ZF IF ID ]
cs             0x23       35
ss             0x2b       43
ds             0x2b       43
es             0x2b       43
fs             0x3        3
gs             0x63       99
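If the core file is still around, the arguments of the dht_setattr frame can be re-inspected with a batch gdb run along these lines; the core file path below is a placeholder, and the frame number assumes the backtrace shown above.

gdb -batch \
    -ex 'bt full' \
    -ex 'frame 1' \
    -ex 'info args' \
    -ex 'print *this' \
    -ex 'print *loc' \
    -ex 'info registers' \
    /home/nemo/glusterfs352/sbin/glusterfs /path/to/core

This would show whether the xlator and loc structures handed down to dht_setattr still look sane at the time of the crash.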
loniy, thanks for the bug report. To debug this further we need the following information:
1. Which protocol did you use for mounting the volume?
2. Please mention the mount command too.
3. The output of "gluster volume status <volumename>" run on one of the gluster nodes.
4. Are you using 32-bit hardware, since you have mentioned i686 as the hardware type in this report?
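For reference, the requested details can usually be gathered with something like the commands below; <volumename> stands for the actual volume name and is not taken from this report.

# on the client: the mount table entry for the volume
# (a native mount shows up with type fuse.glusterfs)
grep gluster /proc/mounts
# on the client: confirms the architecture (i686 would mean a 32-bit build)
uname -m
# on one of the gluster nodes: status and configuration of the volume
gluster volume status <volumename>
gluster volume info <volumename>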
Since we haven't heard anything yet, closing this bug. Please re-open with the relevant details if you still happen to hit the issue.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.