Bug 790319

Summary: Geo-replication status goes faulty with log "[Errno 22] Invalid argument" [glusterfs-3.3.0qa22]
Product: [Community] GlusterFS
Reporter: Vijaykumar Koppad <vkoppad>
Component: geo-replication
Assignee: Csaba Henk <csaba>
Status: CLOSED WORKSFORME
Severity: high
Priority: medium
Version: mainline
CC: bbandari, gluster-bugs, vbellur
Hardware: x86_64
OS: Linux
Doc Type: Bug Fix
Last Closed: 2012-03-27 06:57:55 UTC

Description Vijaykumar Koppad 2012-02-14 09:06:54 UTC
Description of problem:

Volume Name: doa
Type: Distribute
Volume ID: 8863bb19-dfe0-4e09-8f7a-f4183a7c1817
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: vostro:/root/bricks/doa/d1
Brick2: vostro:/root/bricks/doa/d2
root@vostro:/mnt/client# gluster volume set doa indexing on
Set volume successful
root@vostro:/mnt/client# gluster volume geo-replication doa /mnt/client1/ start
Starting geo-replication session between doa & /mnt/client1/ has been successful
root@vostro:/mnt/client# gluster volume geo-replication doa /mnt/client1/ status
MASTER               SLAVE                                              STATUS    
--------------------------------------------------------------------------------
doa                  /mnt/client1/                                      starting...
root@vostro:/mnt/client# gluster volume geo-replication doa /mnt/client1/ status
MASTER               SLAVE                                              STATUS    
--------------------------------------------------------------------------------
doa                  /mnt/client1/                                      starting...
root@vostro:/mnt/client# gluster volume geo-replication doa /mnt/client1/ status
MASTER               SLAVE                                              STATUS    
--------------------------------------------------------------------------------
doa                  /mnt/client1/                                      faulty    

If I start a geo-replication session with a local directory as the slave, the status goes to faulty.
###############################################

Log message says 
###############################################

[2012-02-14 14:30:42.580591] E [syncdutils:184:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 115, in main
    main_i()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 365, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/local/libexec/glusterfs/python/syncdaemon/resource.py", line 742, in service_loop
    GMaster(self, args[0]).crawl_loop()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 142, in crawl_loop
    self.crawl()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 303, in crawl
    xtl = self.xtime(path)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 83, in xtime
    rsc.server.set_xtime(path, self.uuid, xt)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/resource.py", line 229, in ff
    return f(*a)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/resource.py", line 336, in set_xtime
    Xattr.lsetxattr(path, '.'.join([cls.GX_NSPACE, uuid, 'xtime']), struct.pack('!II', *mark))
  File "/usr/local/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 56, in lsetxattr
    cls.raise_oserr()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 25, in raise_oserr
    raise OSError(errn, os.strerror(errn))
OSError: [Errno 22] Invalid argument
###########################################################################
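
For reference, the failing call in resource.py (line 336 above) builds the attribute name 'trusted.glusterfs.<uuid>.xtime' (GX_NSPACE is the trusted.glusterfs namespace when gsyncd runs as root) and packs the xtime as two big-endian 32-bit integers. Below is a minimal, hypothetical reproduction of that syscall in isolation, using Python 3's os.setxattr, which reaches the same lsetxattr(2) the daemon calls via ctypes; the path, UUID, and timestamps are illustrative stand-ins, not values taken from this report. Run as root, since the trusted namespace needs CAP_SYS_ADMIN.

import os
import struct

# Hypothetical stand-in values for what gsyncd computes at runtime.
path = '/mnt/client1'                          # slave mount root
uuid = '8863bb19-dfe0-4e09-8f7a-f4183a7c1817'  # master volume UUID (example)
mark = (1329208842, 580591)                    # (sec, nsec) xtime pair

# Same name construction as resource.py: '.'.join([GX_NSPACE, uuid, 'xtime'])
name = '.'.join(['trusted.glusterfs', uuid, 'xtime'])
value = struct.pack('!II', *mark)              # 8 bytes, network byte order

try:
    # follow_symlinks=False makes this lsetxattr(2), matching the daemon.
    os.setxattr(path, name, value, follow_symlinks=False)
    print('xattr set OK')
except OSError as e:
    # EINVAL here points at the attribute name/namespace or the
    # filesystem backing the slave, not at gsyncd itself.
    print('setxattr failed:', e)
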

Version-Release number of selected component (if applicable): master

How reproducible: always

Comment 1 Csaba Henk 2012-02-27 03:42:45 UTC
Please check:

- Does it happen with single brick volume?
- What's the deal with /mnt/client1? Setting extended attributes on it, in particular ones in the trusted namespace, should be possible. Please check manually with setfattr(1), e.g. as in the sketch below. If it does not work, then it's an issue with the setup, not with the software.
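
A minimal sketch of that manual check, using a hypothetical test attribute name; the Python below is equivalent to running "setfattr -n trusted.glusterfs.test -v check /mnt/client1" as root:

import os

# Hypothetical test attribute; trusted.* needs root (CAP_SYS_ADMIN).
path = '/mnt/client1'
name = 'trusted.glusterfs.test'

try:
    os.setxattr(path, name, b'check')
    print('trusted xattrs work on', path)
    os.removexattr(path, name)  # remove the test attribute again
except OSError as e:
    # Failure here means the mount or backing filesystem rejects
    # trusted xattrs, i.e. a setup problem rather than a gsyncd bug.
    print('cannot set trusted xattr:', e)
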

Comment 2 Vijaykumar Koppad 2012-03-27 06:57:55 UTC
I haven't seen this kind of error recently. I am closing this bug for now. I'll reopen it if I am able to reproduce the issue, with sufficient information to debug.