Description of problem:
While doing stress testing on a fanout setup, the following traceback was found:
[2016-06-08 18:08:58.969448] E [syncdutils(/rhs/brick2/b4):276:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 306, in twrap
File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 172, in tailer
l = os.read(fd, 1024)
OSError: [Errno 9] Bad file descriptor
[2016-06-08 18:08:58.971385] I [syncdutils(/rhs/brick2/b4):220:finalize] <top>: exiting.
The worker crashed and restarted; files were synced to the slave.
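The traceback above comes from os.read() being handed a file descriptor that was closed out from under the tailer thread. A minimal sketch of a defensive read helper (a hypothetical illustration, not the actual upstream patch) that tolerates a concurrently closed descriptor instead of crashing the worker:

```python
import errno
import os


def safe_tail_read(fd, size=1024):
    """Read up to `size` bytes from fd.

    Returns None instead of raising if the descriptor was closed
    concurrently (EBADF), and retries reads interrupted by a signal
    (EINTR), so the calling thread can exit cleanly.
    """
    while True:
        try:
            return os.read(fd, size)
        except OSError as e:
            if e.errno == errno.EINTR:
                continue  # retry an interrupted read
            if e.errno == errno.EBADF:
                return None  # fd closed under us; signal caller to stop
            raise
```

The caller would treat a None return as end-of-stream and shut the tailer loop down rather than letting the exception propagate and kill the worker.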
Version-Release number of selected component (if applicable):
Not sure about the exact iteration, since similar cases were already tried multiple times on a non-fanout setup. With fanout as well, I could not hit it again with a smaller data set. Found this in the logs; not sure about the exact steps.
BZ 1340756 fixes this issue as well. Patch sent upstream.
Patch sent to 3.2.0 as part of BZ 1340756.
Upstream mainline : http://review.gluster.org/15379
Upstream 3.8 : http://review.gluster.org/15447
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/85007
Verified with the build: glusterfs-geo-replication-3.8.4-17.el7rhgs.x86_64
Tried the following fops:
create, chmod, chown, chgrp, symlink, hardlink, truncate, rename, and remove via rsync, in changelog, xsync, and history crawls. In none of these cases did I see the crash with respect to Bad file descriptor. Moving this bug to the verified state.
Will create/reopen if it can be reproduced by any other steps.
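The fop mix exercised during verification can be sketched in Python (the target directory is a placeholder; on a real run this would point at the master volume's mount so the changelog/xsync/history crawls pick the operations up):

```python
import os
import tempfile


def exercise_fops(root, count=10):
    """Run the verification fop mix under `root`: create, chmod,
    chown/chgrp, symlink, hardlink, truncate, rename, remove."""
    for i in range(count):
        path = os.path.join(root, "f%d" % i)
        with open(path, "w") as f:                 # create
            f.write("data")
        os.chmod(path, 0o640)                      # chmod
        os.chown(path, os.getuid(), os.getgid())   # chown/chgrp (no-op ids)
        link = os.path.join(root, "s%d" % i)
        os.symlink(path, link)                     # symlink
        hard = os.path.join(root, "h%d" % i)
        os.link(path, hard)                        # hardlink
        os.truncate(path, 0)                       # truncate
        renamed = os.path.join(root, "r%d" % i)
        os.rename(path, renamed)                   # rename
        for p in (renamed, link, hard):            # remove
            os.remove(p)
```

This is only an approximation of the test load, not the exact verification script; the geo-replication session's sync would be checked on the slave after each crawl type.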
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.