Bug 1023382

Summary: Dist-geo-rep : geo-rep went to faulty with traceback OSError: [Errno 22] Invalid argument: '.gfid/00000000-0000-0000-0000-000000000000'
Product: Red Hat Gluster Storage
Reporter: Vijaykumar Koppad <vkoppad>
Component: geo-replication
Assignee: Amar Tumballi <amarts>
Status: CLOSED ERRATA
QA Contact: Vijaykumar Koppad <vkoppad>
Severity: high
Priority: high
Version: 2.1
CC: aavati, amarts, bbandari, csaba, grajaiya, nsathyan, vraman
Keywords: ZStream
Hardware: x86_64
OS: Linux
Fixed In Version: glusterfs-3.4.0.42rhs-1
Doc Type: Bug Fix
Last Closed: 2013-11-27 15:44:15 UTC
Type: Bug

Description Vijaykumar Koppad 2013-10-25 10:16:23 UTC
Description of problem: While renames were in progress, geo-rep went to faulty with the traceback OSError: [Errno 22] Invalid argument: '.gfid/00000000-0000-0000-0000-000000000000', and stayed stuck there.

Traceback
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
[2013-10-25 15:31:35.497792] I [master(/bricks/brick1):358:crawlwrap] _GMaster: 20 crawls, 0 turns
[2013-10-25 15:32:35.567895] I [master(/bricks/brick1):358:crawlwrap] _GMaster: 19 crawls, 2 turns
[2013-10-25 15:33:10.913541] E [syncdutils(/bricks/brick1):207:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 150, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 530, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1074, in service_loop
    g2.crawlwrap()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 369, in crawlwrap
    self.crawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 799, in crawl
    self.process(changes)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 760, in process
    if self.process_change(change, done, retry):
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 735, in process_change
    st = lstat(go)
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 399, in lstat
    return os.lstat(e)
OSError: [Errno 22] Invalid argument: '.gfid/00000000-0000-0000-0000-000000000000'
[2013-10-25 15:33:10.919718] I [syncdutils(/bricks/brick1):159:finalize] <top>: exiting.
[2013-10-25 15:33:10.933976] I [monitor(monitor):81:set_state] Monitor: new state: faulty

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

Version-Release number of selected component (if applicable): glusterfs-3.4.0.36rhs-1.el6rhs.x86_64


How reproducible: Didn't try to reproduce.


Steps to Reproduce:
1. Create and start a geo-rep relationship between master and slave.
2. Enable quota on the master.
3. Create data, symlinks and hardlinks, and let them sync.
4. Start renaming the files created.
5. Check the status.
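Steps 3 and 4 can be sketched as a small workload script. This is a minimal sketch, not the reporter's actual workload: the mount path, file count, and file names are assumptions. Point MNT at the master volume's mount point for a real reproduction attempt.

```shell
#!/bin/sh
# Workload for steps 3-4: create files, hardlinks and symlinks on the
# master mount, then rename the regular files once they have synced.
# MNT defaults to a throwaway temp dir for dry runs; set it to the
# master volume mount (e.g. /mnt/master) to reproduce against geo-rep.
MNT=${MNT:-$(mktemp -d)}

for i in $(seq 1 10); do
    echo "data $i" > "$MNT/file$i"   # regular files
    ln "$MNT/file$i" "$MNT/hard$i"   # hardlinks to them
    ln -s "file$i" "$MNT/sym$i"      # symlinks to them
done

# Step 4: rename the regular files (in a real run, wait for the
# initial sync to the slave to finish before this loop)
for i in $(seq 1 10); do
    mv "$MNT/file$i" "$MNT/file$i.renamed"
done
```

After the renames, geo-rep status should stay Stable rather than going faulty.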

Actual results: The geo-rep status went to faulty and stayed there.


Expected results: Geo-rep should not go to faulty.


Additional info:

Comment 2 Vijaykumar Koppad 2013-10-25 11:08:59 UTC
I was able to reproduce it even while creating hardlinks.

The changelogs in question contain NULL GFIDs, for example:

E 69be80b8-6037-44fd-9ba1-0523e896b7f6 LINK 11794ef1-7632-42ba-ac28-42ab9406042f%2F526a4de8%25%257WUVHEDYJH
M 69be80b8-6037-44fd-9ba1-0523e896b7f6
M 00000000-0000-0000-0000-000000000000
M 61bd5f6c-9d62-4262-98d7-723a4647f8bb
M 00000000-0000-0000-0000-000000000000
M 94df0a4a-17eb-4d37-ac58-dbefb8ec9b05
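The M (metadata) records with the all-zero GFID are what feed '.gfid/00000000-0000-0000-0000-000000000000' into lstat and trigger the EINVAL above. A minimal sketch of the kind of guard that avoids the crash — the function and constant names here are illustrative assumptions, not gsyncd's actual fix:

```python
import errno
import os

NULL_GFID = "00000000-0000-0000-0000-000000000000"


def lstat(path):
    """Like syncdutils.lstat, but return the errno on failure instead
    of raising, so one bad entry cannot take the whole worker faulty."""
    try:
        return os.lstat(path)
    except OSError as ex:
        return ex.errno


def gfids_to_process(changelog_lines):
    """Hypothetical guard: drop changelog records carrying the NULL
    GFID before their '.gfid/<gfid>' paths ever reach lstat."""
    gfids = []
    for line in changelog_lines:
        fields = line.split()
        if fields[1] == NULL_GFID:
            continue  # skip the bogus record instead of crashing later
        gfids.append(fields[1])
    return gfids
```

With such a guard, a stray null-GFID record is skipped and the crawl continues, instead of the worker exiting and the monitor flipping the session to faulty.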

Comment 3 Vijaykumar Koppad 2013-10-25 13:15:50 UTC
Earlier, quota was enabled. Even with quota disabled, it is reproducible.

Comment 4 Amar Tumballi 2013-11-01 10:10:10 UTC
Can you confirm with .38rhs? Two fixes that went in there address cases of the setxattr() inode being NULL. Bug 1025476 also looks related.

Comment 5 Vijaykumar Koppad 2013-11-13 06:48:30 UTC
Tried a couple of times on the build glusterfs-3.4.0.43rhs-1. It seems to work fine.

Comment 6 errata-xmlrpc 2013-11-27 15:44:15 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html