Bug 762201 (GLUSTER-469)

Summary: V3.0 and rsync crash with distribute on server side
Product: [Community] GlusterFS
Reporter: Harshavardhana <fharshav>
Component: distribute
Assignee: Anand Avati <aavati>
Status: CLOSED WONTFIX
Severity: low
Priority: low
Version: 3.0.0
CC: chrisw, cww, gluster-bugs, pavan
Hardware: All
OS: Linux
Doc Type: Bug Fix

Description Harshavardhana 2009-12-15 00:41:11 UTC
Client log tail:

[2009-12-14 08:45:40] N [fuse-bridge.c:2931:fuse_init] glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.13 kernel 7.8
[2009-12-14 18:15:45] E [saved-frames.c:165:saved_frames_unwind] storage: forced unwinding frame type(1) op(UNLINK)
[2009-12-14 18:15:45] W [fuse-bridge.c:1207:fuse_unlink_cbk] glusterfs-fuse: 8046528: UNLINK() /storage/blobdata/20/40/.49f4020ad397eb689df4e83770a9.8Zirsl => -1 (Transport endpoint is not connected)
[2009-12-14 18:15:45] W [fuse-bridge.c:1167:fuse_err_cbk] glusterfs-fuse: 8046529: FLUSH() ERR => -1 (Transport endpoint is not connected)
[2009-12-14 18:15:45] N [client-protocol.c:6972:notify] storage: disconnected
[2009-12-14 18:15:45] E [socket.c:760:socket_connect_finish] storage: connection to 10.0.0.91:6996 failed (Connection refused)
[2009-12-14 18:15:45] E [socket.c:760:socket_connect_finish] storage: connection to 10.0.0.91:6996 failed (Connection refused)

Server log tail:

[2009-12-14 08:52:57] N [server-protocol.c:5809:mop_setvolume] server: accepted client from 10.0.0.71:1021
[2009-12-14 08:52:57] N [server-protocol.c:5809:mop_setvolume] server: accepted client from 10.0.0.71:1020
[2009-12-14 09:51:45] W [posix.c:246:posix_lstat_with_gen] brick1: Access to /mnt/glusterfs/vol1//.. (on dev 2304) is crossing device (2052)
[2009-12-14 09:51:45] W [posix.c:246:posix_lstat_with_gen] brick2: Access to /mnt/glusterfs/vol2//.. (on dev 2304) is crossing device (2068)
pending frames:
frame : type(1) op(UNLINK)

patchset: 2.0.1-886-g8379edd
signal received: 11
time of crash: 2009-12-14 18:23:08
configuration details:
argp 1
backtrace 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 3.0.0
/lib64/libc.so.6[0x35702302d0]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so[0x2ad7fd3820d3]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_unlink_cbk+0x265)[0x2ad7fd383d75]
/usr/lib64/glusterfs/3.0.0/xlator/cluster/distribute.so(dht_unlink_cbk+0x1d5)[0x2ad7fd1619fa]
/usr/lib64/glusterfs/3.0.0/xlator/storage/posix.so(posix_unlink+0x6cc)[0x2ad7fcf39b9f]
/usr/lib64/glusterfs/3.0.0/xlator/cluster/distribute.so(dht_unlink+0x530)[0x2ad7fd16a54f]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_unlink_resume+0x17e)[0x2ad7fd389a81]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_done+0x59)[0x2ad7fd395970]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_all+0xea)[0x2ad7fd395a61]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve+0xce)[0x2ad7fd395910]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_all+0xc5)[0x2ad7fd395a3c]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_entry+0xb1)[0x2ad7fd395559]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve+0x7d)[0x2ad7fd3958bf]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_resolve_all+0x76)[0x2ad7fd3959ed]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(resolve_and_resume+0x50)[0x2ad7fd395af9]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(server_unlink+0x115)[0x2ad7fd389bfd]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(protocol_server_interpret+0x1d9)[0x2ad7fd392d1f]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(protocol_server_pollin+0x69)[0x2ad7fd393ebf]
/usr/lib64/glusterfs/3.0.0/xlator/protocol/server.so(notify+0x130)[0x2ad7fd3940ce]
/usr/lib64/libglusterfs.so.0(xlator_notify+0xf5)[0x2ad7fc47959b]
/usr/lib64/glusterfs/3.0.0/transport/socket.so(socket_event_poll_in+0x40)[0x2aaaaaaaed9a]
/usr/lib64/glusterfs/3.0.0/transport/socket.so(socket_event_handler+0xb7)[0x2aaaaaaaf08f]
/usr/lib64/libglusterfs.so.0[0x2ad7fc49e185]
/usr/lib64/libglusterfs.so.0[0x2ad7fc49e35a]
/usr/lib64/libglusterfs.so.0(event_dispatch+0x73)[0x2ad7fc49e670]
glusterfs(main+0xe88)[0x405e10]
/lib64/libc.so.6(__libc_start_main+0xf4)[0x357021d994]
glusterfs[0x4025d9]
---------

Comment 1 Anand Avati 2010-01-07 07:51:12 UTC
Need the output of 'thread apply all bt full' from the core.
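For anyone gathering the requested backtrace: it can be captured non-interactively by pointing gdb at the crashing binary and its core file. A sketch only; the binary and core paths below are illustrative, not from this report:

```shell
# Hypothetical paths: load the glusterfs server binary together with its core
# dump, run the requested gdb command in batch mode, and save the full
# per-thread backtraces to a file suitable for attaching to the bug.
gdb /usr/sbin/glusterfs /core.12345 \
    -batch -ex "thread apply all bt full" > bt-full.txt
```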

Comment 2 Pavan Vilas Sondur 2010-02-16 08:21:22 UTC
Harsha to get more information.

Comment 3 Harshavardhana 2010-02-16 08:52:20 UTC
This bug was reported by Larry Bates (Vital Safe Inc).

He was advised to run distribute on the client side instead, and that configuration has been working for him since.

Closing this bug, as server-side distribute is not a supported configuration.
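For reference, the supported layout mentioned above puts the cluster/distribute translator in the client volfile, on top of protocol/client subvolumes, rather than behind protocol/server. A minimal 3.0-style sketch; the hostnames and brick names below are hypothetical, not taken from this report:

```text
# Client volfile sketch: distribute runs on the client, aggregating two
# remote bricks. Hostnames and remote-subvolume names are illustrative.
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host server1.example.com
  option remote-subvolume brick1
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host server2.example.com
  option remote-subvolume brick2
end-volume

volume dist
  type cluster/distribute
  subvolumes remote1 remote2
end-volume
```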