Bug 761982 (GLUSTER-250)

Summary: Crash in replicate getxattr
Product: [Community] GlusterFS
Component: replicate
Version: mainline
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: low
Reporter: Vikas Gorur <vikas>
Assignee: Vikas Gorur <vikas>
CC: gluster-bugs, mark, rabhat
Hardware: All
OS: Linux

Description Vikas Gorur 2009-09-07 13:10:06 UTC
Reported by Mark Mielke on gluster-devel:

Had a client crash with bonnie++ running on one client, while the other client was listing a directory that bonnie++ had removed before or during the list request:


frame : type(1) op(GETXATTR)

patchset: git://git.sv.gnu.org/gluster.git
signal received: 11
time of crash: 2009-09-07 02:31:30
configuration details:
argp 1
backtrace 1
bdb->cursor->get 1
db.h 1
dlfcn 1
fdatasync 1
libpthread 1
llistxattr 1
setfsid 1
spinlock 1
epoll.h 1
xattr.h 1
st_atim.tv_nsec 1
package-string: glusterfs 2.1.0git
/lib64/libc.so.6[0x3133a33370]
/opt/glusterfs/lib/libglusterfs.so.0(dict_foreach+0x18)[0x7f66c043c858]
/opt/glusterfs/lib/glusterfs/2.1.0git/xlator/cluster/replicate.so(__filter_xattrs+0x2a)[0x7f66bf771d3a]
/opt/glusterfs/lib/glusterfs/2.1.0git/xlator/cluster/replicate.so(afr_getxattr_cbk+0x54)[0x7f66bf771e04]
/opt/glusterfs/lib/glusterfs/2.1.0git/xlator/protocol/client.so(client_getxattr_cbk+0x145)[0x7f66bf99f9a5]
/opt/glusterfs/lib/glusterfs/2.1.0git/xlator/protocol/client.so(protocol_client_pollin+0xca)[0x7f66bf996dba]
/opt/glusterfs/lib/glusterfs/2.1.0git/xlator/protocol/client.so(notify+0xe8)[0x7f66bf9a0ae8]
/opt/glusterfs/lib/libglusterfs.so.0(xlator_notify+0x43)[0x7f66c04441c3]
/opt/glusterfs/lib/glusterfs/2.1.0git/transport/socket.so(socket_event_handler+0xc8)[0x7f66be318be8]
/opt/glusterfs/lib/libglusterfs.so.0[0x7f66c045dd5d]
/opt/glusterfs/sbin/glusterfs(main+0x75d)[0x403bbd]
/lib64/libc.so.6(__libc_start_main+0xfd)[0x3133a1ea2d]
/opt/glusterfs/sbin/glusterfs[0x402579]

This is for a 3-node cluster/replicate cluster. Hope the info helps...

Comment 1 Mark Mielke 2009-09-07 13:29:51 UTC
Here is the backtrace from the core dump. Sorry, no debug symbols were enabled, although I will try to reproduce once I recompile with debugging enabled.

#0  dict_foreach (dict=0x0, fn=0x7f66bf770be0 <__gather_xattr_keys>, data=0x7ffff50bccf0) at dict.c:1182
#1  0x00007f66bf771d3a in __filter_xattrs (dict=0x0) at afr-inode-read.c:586
#2  0x00007f66bf771e04 in afr_getxattr_cbk (frame=0x7f66b00022e0, cookie=0x7f66bf770be0, 
    this=<value optimized out>, op_ret=-1, op_errno=<value optimized out>, dict=<value optimized out>)
    at afr-inode-read.c:645
#3  0x00007f66bf99f9a5 in client_getxattr_cbk (frame=0x23ca860, hdr=<value optimized out>, 
    hdrlen=<value optimized out>, iobuf=<value optimized out>) at client-protocol.c:4950
#4  0x00007f66bf996dba in protocol_client_pollin (this=0x23bec00, trans=0x23c3a80) at client-protocol.c:6082
#5  0x00007f66bf9a0ae8 in notify (this=0x0, event=<value optimized out>, data=0x23c3a80)
    at client-protocol.c:6125
#6  0x00007f66c04441c3 in xlator_notify (xl=0x23bec00, event=2, data=0x23c3a80) at xlator.c:930
#7  0x00007f66be318be8 in socket_event_handler (fd=<value optimized out>, idx=3, data=0x23c3a80, poll_in=1, 
    poll_out=0, poll_err=<value optimized out>) at socket.c:813
#8  0x00007f66c045dd5d in event_dispatch_epoll_handler (i=<value optimized out>, 
    events=<value optimized out>, event_pool=<value optimized out>) at event.c:804
#9  event_dispatch_epoll (i=<value optimized out>, events=<value optimized out>, 
    event_pool=<value optimized out>) at event.c:867
#10 0x0000000000403bbd in main (argc=<value optimized out>, argv=<value optimized out>) at glusterfsd.c:1294
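
Reading the frames together: frame #2 shows the getxattr reply arriving with op_ret=-1 (the entry had already been unlinked), and frames #1 and #0 show the reply dict, which is NULL on a failed reply, being handed straight to __filter_xattrs and then dict_foreach, where it is dereferenced. A minimal standalone sketch of that unguarded pattern, with toy_* names and a toy struct as hypothetical stand-ins rather than GlusterFS source:

/* Toy sketch of the failure mode; toy_* names are hypothetical and the
   struct is a stand-in for GlusterFS's dict_t, not its real layout. */
#include <stdio.h>
#include <stddef.h>

typedef struct toy_pair {
        const char      *key;
        struct toy_pair *next;
} toy_pair_t;

typedef struct toy_dict {
        toy_pair_t *members;
} toy_dict_t;

static void
print_key (const char *key)
{
        printf ("%s\n", key);
}

/* Walks the members unconditionally; with a NULL dict the very first
   dereference faults, matching the SIGSEGV reported at dict.c:1182. */
static void
toy_dict_foreach (toy_dict_t *dict, void (*fn) (const char *key))
{
        for (toy_pair_t *p = dict->members; p != NULL; p = p->next)
                fn (p->key);
}

/* Unguarded reply callback: forwards the dict even when op_ret == -1,
   which is the path the backtrace above shows. */
static void
toy_getxattr_cbk (int op_ret, toy_dict_t *dict)
{
        (void) op_ret;                       /* never checked */
        toy_dict_foreach (dict, print_key);
}

int
main (void)
{
        toy_pair_t pair = { "trusted.afr.example", NULL };
        toy_dict_t ok   = { &pair };

        toy_getxattr_cbk (0, &ok);           /* success path: key is printed    */
        toy_getxattr_cbk (-1, NULL);         /* failed reply: SIGSEGV, as above */
        return 0;
}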

Comment 2 Mark Mielke 2009-09-07 13:56:45 UTC
I can reproduce it fairly quickly now -

Three-node cluster/replication cluster. The servers each have storage/posix, features/locks, performance/io-threads(4), protocol/server(tcp). The clients each have 3 x protocol/client(tcp) (1 for each server), cluster/replicate (read-subvolume each set to themselves), performance/io-threads(4), performance/write-behind(1MB), performance/io-cache(256MB). The clients and servers are the same machines, connected via 100BaseTX network.

On the first client/server, I run:

$ cd /gluster/volume
$ perl -e 'while (1) { open(f, "> abc"); close(f); unlink("abc") }'
... let it run ...

On the third client/server, I run:

$ cd /gluster/volume
$ while true; do /bin/ls; done
... let it run ...

On the third client/server, this eventually (15 seconds?) results in:

ls: cannot open directory .: Transport endpoint is not connected

The /core.N file has a backtrace identical to the one in my earlier comment, with the failure listed at line 1182 of dict.c.

The mount needs to be unmounted and remounted to clear this condition.

I think this is enough info for you to reproduce, so unless you cannot reproduce the problem, I'm not going to figure out how to reinstall with debugging yet.

Comment 3 Mark Mielke 2009-09-07 14:13:42 UTC
More information:

I simplified the client configuration to take out all the performance/ modules. Now it only has the three protocol/client entries and the cluster/replicate entry.

I also adjusted the Perl script to increase the frequency of errors:

[markm@wcarh033]/gluster/tools/markm% perl -e 'while (1) { for (0 .. 9) { open(f, "> $_"); close(f) } for (0 .. 9) { unlink($_) } }'

By creating 10 files and then removing 10, the /bin/ls output has a much better chance of hitting a failure than with just one file.

Also, I found that my:

while true; do /bin/ls; done

does not actually trigger the failure. For some reason, the continuous execution causes it to *succeed* continuously, with no errors. But as soon as I start typing 'ls' a few times at random on the command line instead, I start to get:

[root@wcarh035 markm]# ls
ls: cannot access 1: No such file or directory
ls: cannot access 2: No such file or directory
ls: cannot access 3: No such file or directory
1  2  3  4  5  6  7  8  9  abc

And then:

[root@wcarh035 markm]# ls
ls: cannot open directory .: Transport endpoint is not connected

Comment 4 Vikas Gorur 2009-09-08 06:18:52 UTC
Mark, 

Thank you for the excellent diagnostics. A patch to fix this
is under review:

http://patches.gluster.com/patch/1259/

Comment 5 Vikas Gorur 2009-09-08 06:23:34 UTC
Oops, typed the wrong bug number for the previous patch.

http://patches.gluster.com/patch/1261/

Comment 6 Anand Avati 2009-09-08 08:00:18 UTC
PATCH: http://patches.gluster.com/patch/1263 in master (cluster/afr: Check op_ret in afr_getxattr_cbk before accessing dict)

Comment 7 Anand Avati 2009-09-08 08:00:22 UTC
PATCH: http://patches.gluster.com/patch/1264 in release-2.0 (cluster/afr: Check op_ret in afr_getxattr_cbk before accessing dict)
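
The two patch summaries above describe the shape of the fix: check op_ret in afr_getxattr_cbk before touching the dict. A hedged sketch of that guard follows, reusing the toy dict shape from the sketch after comment 1 so it stays self-contained; it is illustrative only, not the actual GlusterFS change.

/* Sketch of the fix pattern only, not the actual patch; toy_* names
   are hypothetical stand-ins for the real afr_getxattr_cbk path. */
#include <stdio.h>
#include <stddef.h>
#include <errno.h>

typedef struct toy_pair {
        const char      *key;
        struct toy_pair *next;
} toy_pair_t;

typedef struct toy_dict {
        toy_pair_t *members;
} toy_dict_t;

/* Stand-in for __filter_xattrs: walks the dict and prints each key. */
static void
toy_filter_xattrs (toy_dict_t *dict)
{
        for (toy_pair_t *p = dict->members; p != NULL; p = p->next)
                printf ("xattr key: %s\n", p->key);
}

/* Guarded reply callback: only touch the dict when the op succeeded,
   so a failed getxattr never reaches the walker with a NULL dict. */
static void
toy_getxattr_cbk (int op_ret, int op_errno, toy_dict_t *dict)
{
        if (op_ret >= 0 && dict != NULL)
                toy_filter_xattrs (dict);
        else
                fprintf (stderr, "getxattr failed (errno %d), skipping filter\n",
                         op_errno);
}

int
main (void)
{
        toy_pair_t pair = { "trusted.afr.example", NULL };
        toy_dict_t ok   = { &pair };

        toy_getxattr_cbk (0, 0, &ok);        /* success: dict is filtered  */
        toy_getxattr_cbk (-1, ENOENT, NULL); /* failure: guarded, no crash */
        return 0;
}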