Bug 790319 - Geo-replication status gives faulty with log - [Errno 22] Invalid argument. [glusterfs-3.3.0qa22]
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Csaba Henk
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-02-14 09:06 UTC by Vijaykumar Koppad
Modified: 2015-12-01 16:45 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-03-27 06:57:55 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments

Description Vijaykumar Koppad 2012-02-14 09:06:54 UTC
Description of problem:

Volume Name: doa
Type: Distribute
Volume ID: 8863bb19-dfe0-4e09-8f7a-f4183a7c1817
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: vostro:/root/bricks/doa/d1
Brick2: vostro:/root/bricks/doa/d2
root@vostro:/mnt/client# gluster volume set doa indexing on
Set volume successful
root@vostro:/mnt/client# gluster volume geo-replication doa /mnt/client1/ start
Starting geo-replication session between doa & /mnt/client1/ has been successful
root@vostro:/mnt/client# gluster volume geo-replication doa /mnt/client1/ status
MASTER               SLAVE                                              STATUS    
--------------------------------------------------------------------------------
doa                  /mnt/client1/                                      starting...
root@vostro:/mnt/client# gluster volume geo-replication doa /mnt/client1/ status
MASTER               SLAVE                                              STATUS    
--------------------------------------------------------------------------------
doa                  /mnt/client1/                                      starting...
root@vostro:/mnt/client# gluster volume geo-replication doa /mnt/client1/ status
MASTER               SLAVE                                              STATUS    
--------------------------------------------------------------------------------
doa                  /mnt/client1/                                      faulty    

If I start a geo-replication session with a local directory as the slave, the status goes to faulty.
###############################################

The log message says:
###############################################

[2012-02-14 14:30:42.580591] E [syncdutils:184:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 115, in main
    main_i()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 365, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/local/libexec/glusterfs/python/syncdaemon/resource.py", line 742, in service_loop
    GMaster(self, args[0]).crawl_loop()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 142, in crawl_loop
    self.crawl()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 303, in crawl
    xtl = self.xtime(path)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 83, in xtime
    rsc.server.set_xtime(path, self.uuid, xt)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/resource.py", line 229, in ff
    return f(*a)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/resource.py", line 336, in set_xtime
    Xattr.lsetxattr(path, '.'.join([cls.GX_NSPACE, uuid, 'xtime']), struct.pack('!II', *mark))
  File "/usr/local/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 56, in lsetxattr
    cls.raise_oserr()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/libcxattr.py", line 25, in raise_oserr
    raise OSError(errn, os.strerror(errn))
OSError: [Errno 22] Invalid argument
###########################################################################

Version-Release number of selected component (if applicable): master

How reproducible: always
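
The traceback shows the failure happening in Xattr.lsetxattr() while writing the xtime attribute on the slave. A minimal standalone sketch of that call (Python 3 with ctypes; the xattr key below is a made-up placeholder, not the exact key gsyncd derives from the volume UUID) can help tell whether the EINVAL comes from the xattr call itself or from something else in the daemon:

import ctypes
import os
import struct

libc = ctypes.CDLL("libc.so.6", use_errno=True)

path = b"/mnt/client1"                          # slave mount from this report
name = b"trusted.glusterfs.test.xtime"          # placeholder key in the trusted namespace
value = struct.pack("!II", 0, 0)                # same 8-byte layout gsyncd writes for xtime

# int lsetxattr(const char *path, const char *name, const void *value, size_t size, int flags)
ret = libc.lsetxattr(path, name, value, len(value), 0)
if ret != 0:
    err = ctypes.get_errno()
    print("lsetxattr failed: [Errno %d] %s" % (err, os.strerror(err)))
else:
    print("lsetxattr succeeded")

If this standalone call also fails with [Errno 22] on /mnt/client1, the slave path or its filesystem is rejecting trusted-namespace xattrs and gsyncd is just surfacing that.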

Comment 1 Csaba Henk 2012-02-27 03:42:45 UTC
Please check:

- Does it happen with a single-brick volume?
- What is the state of /mnt/client1? Setting extended attributes on it, in particular ones in the trusted namespace, must be possible. Please check manually with setfattr(1); a minimal check is sketched below. If that does not work, it is an issue with the setup, not with the software.
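
For reference, a quick Python equivalent of the setfattr(1) check (assumes Python 3.3+ for os.setxattr; the attribute name is an arbitrary test key, and it must be run as root since the trusted namespace requires CAP_SYS_ADMIN):

import os

path = "/mnt/client1"
try:
    # roughly equivalent to: setfattr -n trusted.glusterfs.test -v 1 /mnt/client1
    os.setxattr(path, "trusted.glusterfs.test", b"1", follow_symlinks=False)
    print("trusted xattr set OK; the mount accepts trusted-namespace xattrs")
except OSError as e:
    print("setting trusted xattr failed:", e)

If this raises [Errno 22] as well, the setup (the slave path, its filesystem, or mount options) is the problem rather than geo-replication.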

Comment 2 Vijaykumar Koppad 2012-03-27 06:57:55 UTC
I haven't seen this kind of error recently. I am closing this bug for now. I'll reopen it if I am able to reproduce the issue with sufficient information to debug.

