| Summary: | Crash in replicate getxattr | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Vikas Gorur <vikas> |
| Component: | replicate | Assignee: | Vikas Gorur <vikas> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | low | | |
| Version: | mainline | CC: | gluster-bugs, mark, rabhat |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | --- |
| Regression: | RTA | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Vikas Gorur
2009-09-07 13:10:06 UTC
Here is the backtrace from the core dump. Sorry, no debug symbols were enabled, although I will try to reproduce once I recompile with debugging enabled.
#0 dict_foreach (dict=0x0, fn=0x7f66bf770be0 <__gather_xattr_keys>, data=0x7ffff50bccf0) at dict.c:1182
#1 0x00007f66bf771d3a in __filter_xattrs (dict=0x0) at afr-inode-read.c:586
#2 0x00007f66bf771e04 in afr_getxattr_cbk (frame=0x7f66b00022e0, cookie=0x7f66bf770be0,
this=<value optimized out>, op_ret=-1, op_errno=<value optimized out>, dict=<value optimized out>)
at afr-inode-read.c:645
#3 0x00007f66bf99f9a5 in client_getxattr_cbk (frame=0x23ca860, hdr=<value optimized out>,
hdrlen=<value optimized out>, iobuf=<value optimized out>) at client-protocol.c:4950
#4 0x00007f66bf996dba in protocol_client_pollin (this=0x23bec00, trans=0x23c3a80) at client-protocol.c:6082
#5 0x00007f66bf9a0ae8 in notify (this=0x0, event=<value optimized out>, data=0x23c3a80)
at client-protocol.c:6125
#6 0x00007f66c04441c3 in xlator_notify (xl=0x23bec00, event=2, data=0x23c3a80) at xlator.c:930
#7 0x00007f66be318be8 in socket_event_handler (fd=<value optimized out>, idx=3, data=0x23c3a80, poll_in=1,
poll_out=0, poll_err=<value optimized out>) at socket.c:813
#8 0x00007f66c045dd5d in event_dispatch_epoll_handler (i=<value optimized out>,
events=<value optimized out>, event_pool=<value optimized out>) at event.c:804
#9 event_dispatch_epoll (i=<value optimized out>, events=<value optimized out>,
event_pool=<value optimized out>) at event.c:867
#10 0x0000000000403bbd in main (argc=<value optimized out>, argv=<value optimized out>) at glusterfsd.c:1294
I can reproduce it fairly quickly now.
Three-node replication cluster. The servers each run storage/posix, features/locks, performance/io-threads (4 threads), and protocol/server (tcp). The clients each run 3 x protocol/client (tcp), one for each server, under cluster/replicate (read-subvolume on each client set to its local node), performance/io-threads (4 threads), performance/write-behind (1 MB), and performance/io-cache (256 MB). The clients and servers are the same machines, connected via a 100BaseTX network.
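For anyone trying to stand this up from scratch, here is a sketch of the server-side stack just described, written in the 2.0-era volfile format. The export directory, volume names, and exact option spellings are my own placeholders, not the reporter's actual configuration:

```
volume posix
  type storage/posix
  option directory /data/export        # placeholder export path
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume iot
  type performance/io-threads
  option thread-count 4                # the "io-threads(4)" mentioned above
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.iot.allow *         # open auth, test setup only
  subvolumes iot
end-volume
```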
On the first client/server, I run:
$ cd /gluster/volume
$ perl -e 'while (1) { open(f, "> abc"); close(f); unlink("abc") }'
... let it run ...
On the third client/server, I run:
$ cd /gluster/volume
$ while true; do /bin/ls; done
... let it run ...
On the third client/server, this eventually (15 seconds?) results in:
ls: cannot open directory .: Transport endpoint is not connected
The /core.N file has contents identical to my earlier comment, with the failure again listed at line 1182 of dict.c.
The mount needs to be unmounted and remounted to clear this condition.
I think this is enough information for you to reproduce, so unless you cannot reproduce the problem, I'm not going to figure out how to reinstall with debugging yet.
More information:
I simplified the client configuration to take out all of the performance/ modules. Now it only has the three protocol/client entries and the cluster/replicate entry.
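For reference, a minimal client volfile matching that simplified configuration might look like the sketch below (2.0-era syntax; the hostnames, volume names, and remote-subvolume name are placeholders I picked, not the actual files):

```
volume client1
  type protocol/client
  option transport-type tcp
  option remote-host server1           # placeholder hostname
  option remote-subvolume iot
end-volume

volume client2
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume iot
end-volume

volume client3
  type protocol/client
  option transport-type tcp
  option remote-host server3
  option remote-subvolume iot
end-volume

volume replicate
  type cluster/replicate
  option read-subvolume client1        # "read-subvolume set to themselves"
  subvolumes client1 client2 client3
end-volume
```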
I also adjusted the Perl script to increase the frequency of errors:
[markm@wcarh033]/gluster/tools/markm% perl -e 'while (1) { for (0 .. 9) { open(f, "> $_"); close(f) } for (0 .. 9) { unlink($_) } }'
By creating ten files and then removing ten, the /bin/ls output has a better chance of hitting a failure than with just one file.
Also, I found that my:
while true; do /bin/ls; done
does not actually work. For some reason, the continuous execution causes it to *succeed* continuously. No errors. But as soon as I start typing 'ls' a few times on the command line randomly instead, I start to get:
[root@wcarh035 markm]# ls
ls: cannot access 1: No such file or directory
ls: cannot access 2: No such file or directory
ls: cannot access 3: No such file or directory
1 2 3 4 5 6 7 8 9 abc
And then:
[root@wcarh035 markm]# ls
ls: cannot open directory .: Transport endpoint is not connected
Mark, Thank you for the excellent diagnostics. A patch to fix this is under review: http://patches.gluster.com/patch/1259/
Oops, typed the wrong bug number for the previous patch. http://patches.gluster.com/patch/1261/
PATCH: http://patches.gluster.com/patch/1263 in master (cluster/afr: Check op_ret in afr_getxattr_cbk before accessing dict)
PATCH: http://patches.gluster.com/patch/1264 in release-2.0 (cluster/afr: Check op_ret in afr_getxattr_cbk before accessing dict)
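The patch summaries line up with what the backtrace shows: when a subvolume's getxattr reply fails (op_ret == -1, presumably ENOENT for a file that was unlinked in the race), the reply dict is NULL, yet afr_getxattr_cbk still handed it to __filter_xattrs(), which walks it with dict_foreach() at dict.c:1182. Below is a small self-contained C sketch of the kind of guard the patch titles describe; the types and functions are stand-ins I wrote for illustration, not the real GlusterFS code:

```c
#include <stddef.h>
#include <stdio.h>

/* Stand-ins for the real dict_t/data_pair_t: just enough to show why a
 * NULL dict crashes in dict_foreach() and how the op_ret guard avoids it. */
typedef struct data_pair {
        const char       *key;
        struct data_pair *next;
} data_pair_t;

typedef struct {
        data_pair_t *members_list;
} dict_t;

/* Like dict_foreach(): unconditionally dereferences dict, so passing NULL
 * reproduces the segfault seen at dict.c:1182 in the backtrace. */
static void
dict_foreach (dict_t *dict, void (*fn) (const char *key, void *data),
              void *data)
{
        for (data_pair_t *p = dict->members_list; p; p = p->next)
                fn (p->key, data);
}

static void
gather_xattr_keys (const char *key, void *data)
{
        (void) data;
        printf ("xattr key: %s\n", key);
}

/* Sketch of the guarded callback: check op_ret (and dict) before filtering
 * the xattrs, which is what "check op_ret in afr_getxattr_cbk before
 * accessing dict" describes. */
static void
afr_getxattr_cbk_sketch (int op_ret, int op_errno, dict_t *dict)
{
        if (op_ret >= 0 && dict != NULL)
                dict_foreach (dict, gather_xattr_keys, NULL);
        else
                fprintf (stderr, "getxattr failed (errno %d); skipping "
                         "xattr filtering\n", op_errno);
}

int
main (void)
{
        /* A failed reply, as produced by the create/unlink race: op_ret is
         * -1 and no dictionary is attached.  Without the guard above this
         * would crash inside dict_foreach(). */
        afr_getxattr_cbk_sketch (-1, 2 /* ENOENT */, NULL);
        return 0;
}
```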