Bug 1045426 - geo-replication failed with: (xtime) failed on peer with OSError, when using a non-privileged user
Summary: geo-replication failed with: (xtime) failed on peer with OSError, when using a non-privileged user
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-12-20 11:52 UTC by Alex
Modified: 2014-12-14 19:40 UTC
CC: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-12-14 19:40:32 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Alex 2013-12-20 11:52:45 UTC
Description of problem: geo-replication fails with "(xtime) failed on peer with OSError" when I use a non-privileged user for geo-replication.
I use mountbroker; my config file on master and slave:
# cat /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off

    option mountbroker-root /var/mountbroker-root
    option mountbroker-geo-replication.gsync server2
    option geo-replication-log-group geogroup
end-volume
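
For context, the mountbroker options above presuppose some slave-side preparation. A sketch of the usual steps (illustrative: the user, group, and path names are taken from the config above; note also that per the GlusterFS documentation the value of mountbroker-geo-replication.<user> is normally a comma-separated list of slave volume names):

# On the slave: create the broker root with restrictive permissions
# (glusterd expects mountbroker-root to exist and not be world-writable)
mkdir -p /var/mountbroker-root
chmod 0711 /var/mountbroker-root

# Create the unprivileged account and log group referenced in glusterd.vol
groupadd geogroup
useradd -m -G geogroup gsync

# Restart glusterd so the mountbroker options take effect
service glusterd restart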


Full error log:
[2013-12-20 11:41:01.133238] I [monitor(monitor):80:monitor] Monitor: ------------------------------------------------------------
[2013-12-20 11:41:01.133610] I [monitor(monitor):81:monitor] Monitor: starting gsyncd worker
[2013-12-20 11:41:01.187074] I [gsyncd:354:main_i] <top>: syncing: gluster://localhost:Mail -> ssh://gsync@server2:/home/data/gsync/mail
[2013-12-20 11:41:06.657636] I [master:284:crawl] GMaster: new master is d808c8f2-700c-491a-bd04-9d12ee1d585b
[2013-12-20 11:41:06.657954] I [master:288:crawl] GMaster: primary master with volume id d808c8f2-700c-491a-bd04-9d12ee1d585b ...
[2013-12-20 11:41:06.703836] E [repce:188:__call__] RepceClient: call 2164:140619435083520:1387539666.66 (xtime) failed on peer with OSError
[2013-12-20 11:41:06.704036] E [syncdutils:190:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 115, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 365, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 827, in service_loop
    GMaster(self, args[0]).crawl_loop()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 143, in crawl_loop
    self.crawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 308, in crawl
    xtr0 = self.xtime(path, self.slave)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 74, in xtime
    xt = rsc.server.xtime(path, self.uuid)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 204, in __call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 189, in __call__
    raise res
OSError: [Errno 95] Operation not supported
[2013-12-20 11:41:06.705061] I [syncdutils:142:finalize] <top>: exiting.
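
For diagnosis: the xtime call in the traceback is ultimately a getxattr on the slave. When gsyncd runs unprivileged it builds the key in the system.glusterfs namespace instead of trusted.glusterfs (see syncdaemon/resource.py), and common filesystems reject unknown system.* keys with errno 95, which matches the OSError above. A probe to confirm (illustrative sketch; the volume id is the one from the log):

# Run on the slave as the geo-replication user (gsync); expected on
# ext4/xfs: "Operation not supported" (errno 95), matching the traceback
getfattr -n system.glusterfs.d808c8f2-700c-491a-bd04-9d12ee1d585b.xtime \
    -e hex /home/data/gsync/mail

# The trusted.* variant instead requires root privileges to read
getfattr -n trusted.glusterfs.d808c8f2-700c-491a-bd04-9d12ee1d585b.xtime \
    -e hex /home/data/gsync/mail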




How reproducible: happens every time.



Steps to Reproduce:
1. Setting up mountbroker on master and slave
2. gluster volume geo-replication Mail gsync@server2:/home/data/gsync/mail start
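
Step 2 presupposes passwordless SSH from the master to the unprivileged slave account. A sketch of that prerequisite (the key location is the GlusterFS geo-replication default; host and user names are the ones from this report):

# On the master: generate the key gsyncd uses and install it for gsync@server2
ssh-keygen -f /var/lib/glusterd/geo-replication/secret.pem -N ""
ssh-copy-id -i /var/lib/glusterd/geo-replication/secret.pem.pub gsync@server2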


Actual results:
gluster volume geo-replication Mail gsync@server2:/home/data/gsync/mail status
-> faulty


Expected results:
gluster volume geo-replication Mail gsync@server2:/home/data/gsync/mail status
-> OK

Additional info:

Comment 1 Niels de Vos 2014-11-27 14:54:36 UTC
The version that this bug has been reported against does not get any updates from the Gluster Community anymore. Please verify whether this report is still valid against a current (3.4, 3.5 or 3.6) release and update the version, or close this bug.

If there has been no update before 9 December 2014, this bug will be closed automatically.

