Bug 1023382 - Dist-geo-rep : geo-rep went to faulty with traceback OSError: [Errno 22] Invalid argument: '.gfid/00000000-0000-0000-0000-000000000000'
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Hardware: x86_64 Linux
Priority: high Severity: high
Assigned To: Amar Tumballi
QA Contact: Vijaykumar Koppad
Keywords: ZStream
Depends On:
Reported: 2013-10-25 06:16 EDT by Vijaykumar Koppad
Modified: 2014-08-24 20:50 EDT (History)
7 users

See Also:
Fixed In Version: glusterfs-
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2013-11-27 10:44:15 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Vijaykumar Koppad 2013-10-25 06:16:23 EDT
Description of problem: While doing renames, geo-rep went to the faulty state with the traceback OSError: [Errno 22] Invalid argument: '.gfid/00000000-0000-0000-0000-000000000000' and got stuck there.

[2013-10-25 15:31:35.497792] I [master(/bricks/brick1):358:crawlwrap] _GMaster: 20 crawls, 0 turns
[2013-10-25 15:32:35.567895] I [master(/bricks/brick1):358:crawlwrap] _GMaster: 19 crawls, 2 turns
[2013-10-25 15:33:10.913541] E [syncdutils(/bricks/brick1):207:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 150, in main
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 530, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1074, in service_loop
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 369, in crawlwrap
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 799, in crawl
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 760, in process
    if self.process_change(change, done, retry):
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 735, in process_change
    st = lstat(go)
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 399, in lstat
    return os.lstat(e)
OSError: [Errno 22] Invalid argument: '.gfid/00000000-0000-0000-0000-000000000000'
[2013-10-25 15:33:10.919718] I [syncdutils(/bricks/brick1):159:finalize] <top>: exiting.
[2013-10-25 15:33:10.933976] I [monitor(monitor):81:set_state] Monitor: new state: faulty
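For context: the lstat() helper at the bottom of the traceback evidently catches missing-entry errors but lets EINVAL escape, and the gfid-access aux mount rejects a path built from the all-zero GFID with EINVAL rather than ENOENT. A minimal defensive sketch follows (illustrative only; safe_lstat and its errno-return convention are assumptions for this sketch, not the actual gsyncd code, which would more properly skip such records during changelog processing):

import os
import errno

NULL_GFID = "00000000-0000-0000-0000-000000000000"

def safe_lstat(path):
    """os.lstat() that reports a null-GFID aux-mount path as missing.

    Treats the all-zero GFID like a nonexistent entry instead of
    letting the EINVAL from the '.gfid/' virtual path propagate and
    crash the worker.
    """
    if os.path.basename(path) == NULL_GFID:
        return errno.ENOENT          # caller checks for an int errno
    try:
        return os.lstat(path)
    except OSError as e:
        if e.errno in (errno.ENOENT, errno.EINVAL):
            return e.errno
        raise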


Version-Release number of selected component (if applicable): glusterfs-

How reproducible: Didn't try to reproduce.

Steps to Reproduce:
1. Create and start a geo-rep relationship between the master and slave volumes.
2. Enable quota on the master.
3. Create data, symlinks, and hardlinks, and let them sync.
4. Start renaming the created files (a workload sketch follows the list).
5. Check the geo-rep status.
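A minimal Python workload along the lines of steps 3 and 4 (the mount path, counts, and file names here are arbitrary assumptions; pause between the two phases so geo-rep can sync, per step 3):

import os
import time

def create_and_rename(mnt, count=100, sync_wait=60):
    # Phase 1 (step 3): files plus a hardlink and symlink for each.
    for i in range(count):
        f = os.path.join(mnt, "file%04d" % i)
        with open(f, "w") as fh:
            fh.write("data %d\n" % i)
        os.link(f, f + ".hard")       # hardlink
        os.symlink(f, f + ".sym")     # symlink
    time.sleep(sync_wait)             # let geo-rep pick up the creates
    # Phase 2 (step 4): the rename burst that triggered the traceback.
    for i in range(count):
        f = os.path.join(mnt, "file%04d" % i)
        os.rename(f, f + ".renamed")

create_and_rename("/mnt/master")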

Actual results: The status went to faulty and remained stuck there.

Expected results: It shouldn't go to faulty. 

Additional info:
Comment 2 Vijaykumar Koppad 2013-10-25 07:08:59 EDT
I was able to reproduce it even while creating hardlinks.

The changelogs in question contain NULL GFID entries, like:

E 69be80b8-6037-44fd-9ba1-0523e896b7f6 LINK 11794ef1-7632-42ba-ac28-42ab9406042f%2F526a4de8%25%257WUVHEDYJH
M 69be80b8-6037-44fd-9ba1-0523e896b7f6
M 00000000-0000-0000-0000-000000000000
M 61bd5f6c-9d62-4262-98d7-723a4647f8bb
M 00000000-0000-0000-0000-000000000000
M 94df0a4a-17eb-4d37-ac58-dbefb8ec9b05
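For inspection, a quick way to flag such records in a decoded changelog (a hypothetical helper; it assumes the space-separated text form shown above, with the GFID in the second field, and the input file name is just an example):

NULL_GFID = "00000000-0000-0000-0000-000000000000"

def null_gfid_records(changelog_path):
    # Yield (line number, record) for entries carrying the all-zero GFID.
    with open(changelog_path) as f:
        for n, line in enumerate(f, 1):
            fields = line.split()
            if len(fields) >= 2 and fields[1] == NULL_GFID:
                yield n, line.rstrip()

for n, rec in null_gfid_records("CHANGELOG.decoded"):
    print("line %d: %s" % (n, rec))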
Comment 3 Vijaykumar Koppad 2013-10-25 09:15:50 EDT
Earlier, quota was enabled. It is reproducible even with quota disabled.
Comment 4 Amar Tumballi 2013-11-01 06:10:10 EDT
Can you confirm with .38rhs? Two fixes went in that address the setxattr() inode being NULL here. Also, bug 1025476 looks related.
Comment 5 Vijaykumar Koppad 2013-11-13 01:48:30 EST
Tried a couple of times on the build glusterfs-. Seems to work fine.
Comment 6 errata-xmlrpc 2013-11-27 10:44:15 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

