Bug 1646104 - [Geo-rep]: Faulty geo-rep sessions due to link ownership on slave volume
Summary: [Geo-rep]: Faulty geo-rep sessions due to link ownership on slave volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1645916
Blocks: 1646806
 
Reported: 2018-11-05 07:17 UTC by Kotresh HR
Modified: 2019-03-25 16:31 UTC
CC: 9 users

Fixed In Version: glusterfs-6.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1645916
Clones: 1646806 1654115
Environment:
Last Closed: 2018-11-28 04:54:29 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Links:
Gluster.org Gerrit 21546 (Merged): geo-rep: Fix traceback with symlink metadata sync (last updated 2018-11-06 02:23:12 UTC)

Description Kotresh HR 2018-11-05 07:17:34 UTC
+++ This bug was initially created as a clone of Bug #1645916 +++

Description of problem:

The geo-rep session is Faulty with the following tracebacks:

Master Traceback:
[2018-11-05 07:12:37.31370] E [syncdutils(worker /bricks/brick0/b0):338:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/local/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 322, in main
    func(args)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/subcmds.py", line 82, in subcmd_worker
    local.service_loop(remote)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/resource.py", line 1312, in service_loop
    till=e))
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 599, in crawlwrap
    self.crawl()
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 1555, in crawl
    self.changelogs_batch_process(changes)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 1455, in changelogs_batch_process
    self.process(batch)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 1290, in process
    self.process_change(change, done, retry)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/master.py", line 1237, in process_change
    failures = self.slave.server.meta_ops(meta_entries)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 233, in __call__
    return self.ins(self.meth, *a)
  File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 215, in __call__
    raise res
OSError: [Errno 30] Read-only file system: '.gfid/981277d0-bf2f-4a43-be9a-ec3eabf84955'



Slave Traceback:
[2018-11-05 07:12:37.30050] E [repce(slave f281/bricks/brick0/b0):122:worker] <top>: call failed: 
Traceback (most recent call last):
  File "/usr/local/libexec/glusterfs/python/syncdaemon/repce.py", line 118, in worker
    res = getattr(self.obj, rmeth)(*in_data[2:])
  File "/usr/local/libexec/glusterfs/python/syncdaemon/resource.py", line 765, in meta_ops
    cmd_ret = errno_wrap(os.chmod, [go, mode],
  File "/usr/local/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 546, in errno_wrap
    return call(*arg)
OSError: [Errno 30] Read-only file system: '.gfid/981277d0-bf2f-4a43-be9a-ec3eabf84955'
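
The failure comes from os.chmod() being applied to a symlink via the aux-gfid path, and the linked Gerrit change (21546) fixes the traceback in symlink metadata sync, presumably by guarding that call. A minimal sketch of such a guard, using a hypothetical helper name and arguments (this is not the actual meta_ops() code from resource.py):

    import os
    import stat

    def sync_metadata(path, uid, gid, mode):
        # Hypothetical helper illustrating the guard only.
        st = os.lstat(path)
        if stat.S_ISLNK(st.st_mode):
            # os.chmod() dereferences the symlink, which is what fails
            # with EROFS above; symlink permission bits are ignored on
            # Linux anyway, so sync only ownership, on the link itself.
            os.lchown(path, uid, gid)
        else:
            os.chmod(path, mode)
            os.chown(path, uid, gid)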


Version-Release number of selected component (if applicable):
mainline


How reproducible:
Always




Steps to Reproduce:
Use geo-replication to sync symbolic links whose owner or group is anything other than root; a minimal sketch of creating such a link follows.
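
For illustration, a symlink with non-root ownership could be created on the master mount like this (the mount point /mnt/master and UID/GID 1000 are assumed placeholders):

    import os

    master = "/mnt/master"            # assumed master volume mount point
    link = os.path.join(master, "sym1")
    os.symlink("somefile", link)      # the target need not exist
    # Change ownership of the link itself (not the target) to non-root;
    # os.lchown() operates on the symlink rather than following it.
    os.lchown(link, 1000, 1000)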

Actual results:
The geo-rep session goes to "Faulty" status.


Expected results:
The geo-rep session syncs the symlinks and remains healthy rather than going Faulty.

Comment 1 Worker Ant 2018-11-05 09:13:25 UTC
REVIEW: https://review.gluster.org/21546 (geo-rep: Fix traceback with symlink metadata sync) posted (#1) for review on master by Kotresh HR

Comment 2 Worker Ant 2018-11-06 02:23:09 UTC
REVIEW: https://review.gluster.org/21546 (geo-rep: Fix traceback with symlink metadata sync) posted (#2) for review on master by Atin Mukherjee

Comment 3 Kotresh HR 2018-11-28 04:54:29 UTC
Will be part of v5.2 and is available in the latest v4.x release.

Comment 4 Shyamsundar 2019-03-25 16:31:44 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

