Bug 1375094 - [geo-rep]: Worker crashes with OSError: [Errno 61] No data available
Summary: [geo-rep]: Worker crashes with OSError: [Errno 61] No data available
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: geo-replication
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Aravinda VK
QA Contact: Rochelle
URL:
Whiteboard: rebase
Depends On:
Blocks: 1503134
 
Reported: 2016-09-12 07:36 UTC by Rahul Hinduja
Modified: 2018-09-14 04:35 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.12.2-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1499391 (view as bug list)
Environment:
Last Closed: 2018-09-04 06:29:44 UTC
Embargoed:




Links:
Red Hat Product Errata RHSA-2018:2607 (Last Updated: 2018-09-04 06:31:17 UTC)

Description Rahul Hinduja 2016-09-12 07:36:36 UTC
Description of problem:
=======================

While running the automation sanity check, which performs "create, chmod, chown, chgrp, symlink, hardlink, rename, truncate, rm" operations during the changelog, xsync, and history crawls, the following worker crash was observed:

[2016-09-11 13:52:43.422640] E [syncdutils(/bricks/brick1/master_brick5):276:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 306, in twrap
    tf(*aa)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1267, in Xsyncer
    self.Xcrawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1424, in Xcrawl
    self.Xcrawl(e, xtr_root)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1424, in Xcrawl
    self.Xcrawl(e, xtr_root)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1424, in Xcrawl
    self.Xcrawl(e, xtr_root)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1424, in Xcrawl
    self.Xcrawl(e, xtr_root)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1424, in Xcrawl
    self.Xcrawl(e, xtr_root)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1424, in Xcrawl
    self.Xcrawl(e, xtr_root)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1406, in Xcrawl
    gfid = self.master.server.gfid(e)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1414, in gfid
    return super(brickserver, cls).gfid(e)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 327, in ff
    return f(*a)
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 369, in gfid
    buf = Xattr.lgetxattr(path, cls.GFID_XATTR, 16)
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 55, in lgetxattr
    return cls._query_xattr(path, siz, 'lgetxattr', attr)
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 47, in _query_xattr
    cls.raise_oserr()
  File "/usr/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 37, in raise_oserr
    raise OSError(errn, os.strerror(errn))
OSError: [Errno 61] No data available
[2016-09-11 13:52:43.428107] I [syncdutils(/bricks/brick1/master_brick5):220:finalize] <top>: exiting.
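The failure mode in the traceback can be reproduced in isolation: on Linux, reading an extended attribute that is absent from a file raises OSError with errno 61 (ENODATA), which is what lgetxattr hits when Xcrawl asks for the GFID xattr of an entry whose attribute (or the entry itself) disappeared between readdir and the lookup. A minimal standalone sketch using plain os.getxattr rather than gluster's libcxattr wrapper (the attribute name is illustrative, not the real brick-side GFID xattr):

import errno
import os
import tempfile

# Reading an extended attribute that is absent from a file raises
# OSError(ENODATA) on Linux -- the same errno 61 the worker died on.
with tempfile.NamedTemporaryFile() as f:
    try:
        # follow_symlinks=False mirrors lgetxattr(); "user.gfid" is an
        # illustrative attribute name, not the real trusted.gfid xattr.
        os.getxattr(f.name, "user.gfid", follow_symlinks=False)
    except OSError as e:
        assert e.errno == errno.ENODATA
        print(e)  # [Errno 61] No data available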
  


Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.7.9-12.el7rhgs.x86_64


How reproducible:
=================

Seen once, even though the same test suite has been executed multiple times.

Comment 7 Kotresh HR 2017-10-07 03:28:53 UTC
Upstream Patch:

https://review.gluster.org/18445 (master)
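
For context, one plausible mitigation direction is to treat ENODATA/ENOENT from the GFID lookup as "entry disappeared during the crawl" and skip the entry instead of letting the exception kill the worker. The sketch below is hedged: the actual change in the upstream patch above may differ, and the names (GFID_XATTR, safe_gfid, crawl) are illustrative, not the real syncdaemon API.

import errno
import os

# Illustrative constant; the real key lives in resource.py on the brick side.
GFID_XATTR = "user.gfid"

def safe_gfid(path):
    """Return the entry's GFID xattr, or None if the entry or the
    attribute vanished between readdir and this lookup (a benign race
    while files are deleted or renamed under an active crawl)."""
    try:
        return os.getxattr(path, GFID_XATTR, follow_symlinks=False)
    except OSError as e:
        if e.errno in (errno.ENODATA, errno.ENOENT, errno.ESTALE):
            return None  # skip the entry instead of crashing the worker
        raise  # any other errno is still a hard failure

def crawl(root):
    for name in os.listdir(root):
        path = os.path.join(root, name)
        if safe_gfid(path) is None:
            continue  # entry went away mid-crawl; nothing to sync
        # ... record/recurse here, as Xcrawl does for live entries ...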

Comment 12 errata-xmlrpc 2018-09-04 06:29:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607

