Bug 1466123 - [RFE] Pass slave volume in geo-rep as read-only
Summary: [RFE] Pass slave volume in geo-rep as read-only
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.11
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-29 06:28 UTC by Gianni Pezzarossi
Modified: 2018-06-20 18:29 UTC
CC List: 15 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1430608
Environment:
Last Closed: 2018-06-20 18:29:13 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Gianni Pezzarossi 2017-06-29 06:28:33 UTC
+++ This bug was initially created as a clone of Bug #1430608 +++

Description of problem:

Geo-Replication cannot write to a Read-Only slave volume

According to the bug report this is cloned from, this was supposedly fixed as of 3.11.0. However, a fresh install of 3.11.1 shows that if the slave gluster volume is set to read-only, geo-replication fails, reporting that the slave volume is read-only.

From the log file on the master:

"[2017-06-29 06:15:18.447021] I [master(/brick/brick1/gvol0):1363:crawl] _GMaster: processing xsync changelog /var/lib/misc/glusterfsd/gvol0/ssh%3A%2F%2Froot%40172.22.6.151%3Agluster%3A%2F%2F127.0.0.1%3Ageovol/b7cdfed7a45ded34d6b360dc29e54688/xsync/XSYNC-CHANGELOG.1498716917
[2017-06-29 06:15:18.455240] E [repce(/brick/brick1/gvol0):207:__call__] RepceClient: call 15612:140708044412736:1498716918.45 (entry_ops) failed on peer with OSError
[2017-06-29 06:15:18.455410] E [syncdutils(/brick/brick1/gvol0):312:log_raise_exception] <top>: FAIL:

Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 204, in main
    main_i()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 782, in main_i
    local.service_loop(*[r for r in [remote] if r])
  File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1656, in service_loop
    g1.crawlwrap(oneshot=True, register_time=register_time)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 600, in crawlwrap
    self.crawl()
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1364, in crawl
    self.process([item[1]], 0)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 1039, in process
    self.process_change(change, done, retry)
  File "/usr/libexec/glusterfs/python/syncdaemon/master.py", line 960, in process_change
    failures = self.slave.server.entry_ops(entries)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 226, in __call__
    return self.ins(self.meth, *a)
  File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 208, in __call__
    raise res
OSError: [Errno 30] Read-only file system

Version-Release number of selected component (if applicable):

Any

How reproducible:

Always

Steps to Reproduce:
1. Create a geo-replication session.
2. Set the slave volume to read-only.
3. Start the geo-replication session; the status goes to "Faulty" and the error above appears in the master log file for the session (command sketch below).
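
For reference, here is a minimal command sequence for the steps above. It is only a sketch: the volume names (gvol0 on the master, geovol on the slave) and the slave host are taken from the log excerpt; substitute your own.

# on the slave cluster: make the slave volume read-only
gluster volume set geovol read-only on

# on the master cluster: create, start and check the session
gluster volume geo-replication gvol0 172.22.6.151::geovol create push-pem
gluster volume geo-replication gvol0 172.22.6.151::geovol start
gluster volume geo-replication gvol0 172.22.6.151::geovol status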

Actual results:

Geo-replication never runs; the session stays "Faulty".

Expected results:

As the patch states, internal clients such as gsyncd should be able to write to a read-only volume.
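
For context on the mechanism: internal Gluster clients such as gsyncd identify themselves with reserved negative client pids (e.g. GF_CLIENT_PID_GSYNCD) carried in frame->root->pid, and a read-only translator can use that to admit them. Below is a minimal standalone sketch of that gating rule; the names (check_write_allowed and the CLIENT_PID_GSYNCD stand-in) are illustrative, not the actual patch.

#include <stdio.h>
#include <errno.h>
#include <stdbool.h>

/* Stand-in for Gluster's reserved internal-client pids (the real code
 * uses negative gf_client_pid_t values such as GF_CLIENT_PID_GSYNCD). */
#define CLIENT_PID_GSYNCD (-1)

/* Gating rule the patch title describes: on a read-only volume, admit
 * internal clients (negative pid) and fail external ones with EROFS. */
static int check_write_allowed(bool volume_readonly, int client_pid)
{
    if (volume_readonly && client_pid >= 0)
        return -EROFS;   /* external client on a read-only volume */
    return 0;            /* writable volume, or internal client */
}

int main(void)
{
    printf("external client: %d\n", check_write_allowed(true, 1234));
    printf("gsyncd:          %d\n", check_write_allowed(true, CLIENT_PID_GSYNCD));
    return 0;
}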

--- Additional comment from Kotresh HR on 2017-03-09 01:08:06 EST ---

Upstream patches:
https://review.gluster.org/#/c/16854/
https://review.gluster.org/#/c/16855/

--- Additional comment from Worker Ant on 2017-03-09 01:40:52 EST ---

REVIEW: https://review.gluster.org/16854 (performance/write-behind: Honor the client pid set) posted (#2) for review on master by Kotresh HR (khiremat)

--- Additional comment from Worker Ant on 2017-03-09 01:41:00 EST ---

REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal clients to r/w) posted (#2) for review on master by Kotresh HR (khiremat)

--- Additional comment from Worker Ant on 2017-03-10 00:17:03 EST ---

COMMIT: https://review.gluster.org/16854 committed in master by Raghavendra G (rgowdapp) 
------
commit b9e1c911833ca1916055622e5265672d5935d925
Author: Kotresh HR <khiremat>
Date:   Mon Mar 6 10:34:05 2017 -0500

    performance/write-behind: Honor the client pid set
    
    write-behind xlator does not honor the client pid being
    set. It doesn't pass down the client pid saved in
    'frame->root->pid'. This patch fixes the same.
    
    Change-Id: I838dcf43f56d6d0aa1d2c88811a2b271d9e88d05
    BUG: 1430608
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: https://review.gluster.org/16854
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Vijay Bellur <vbellur>
    Reviewed-by: Raghavendra G <rgowdapp>
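
To illustrate the commit message above: write-behind caches writes and flushes them later on frames it creates itself, so unless it carries the originating frame->root->pid forward, translators below (such as features/read-only) see a default pid and return EROFS even for gsyncd. A standalone toy sketch of that idea follows; the structures and names (wb_request, flush_request) are hypothetical, not the actual patch.

#include <stdio.h>

/* Minimal stand-ins for Gluster's call frame; in the real xlator the
 * client pid lives in frame->root->pid. */
struct call_root  { int pid; };
struct call_frame { struct call_root *root; };

/* A cached write, as write-behind queues them: remember the
 * originating client's pid along with the request... */
struct wb_request {
    int client_pid;
    /* payload omitted */
};

/* ...and restore it on the frame used to flush the request later, so
 * a read-only translator below still sees the internal client's
 * reserved (negative) pid. */
static void flush_request(const struct wb_request *req,
                          struct call_frame *flush_frame)
{
    flush_frame->root->pid = req->client_pid;  /* honor the client pid */
    /* wind the write downwards here */
}

int main(void)
{
    struct call_root  root  = { .pid = 0 };
    struct call_frame flush = { .root = &root };
    struct wb_request req   = { .client_pid = -1 };  /* e.g. gsyncd */

    flush_request(&req, &flush);
    printf("flush frame pid: %d\n", flush.root->pid);  /* prints -1 */
    return 0;
}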

--- Additional comment from Worker Ant on 2017-03-30 00:40:03 EDT ---

REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal clients to r/w) posted (#3) for review on master by Kotresh HR (khiremat)

--- Additional comment from Worker Ant on 2017-04-27 05:55:25 EDT ---

REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal clients to r/w) posted (#4) for review on master by Kotresh HR (khiremat)

--- Additional comment from Shyamsundar on 2017-05-30 14:47:11 EDT ---

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/

--- Additional comment from Worker Ant on 2017-06-09 14:50:51 EDT ---

REVIEW: https://review.gluster.org/16855 (features/read-only: Allow internal clients to r/w) posted (#5) for review on master by Kotresh HR (khiremat)

Comment 2 Atin Mukherjee 2017-07-05 04:28:17 UTC
Are you using the community version of Gluster? If so, can you please change the product from Red Hat Gluster Storage to GlusterFS?

Comment 4 Shyamsundar 2018-06-20 18:29:13 UTC
This bug is reported against a version of Gluster that is no longer maintained
(or has been EOL'd). See https://www.gluster.org/release-schedule/ for the
versions currently maintained.

As a result this bug is being closed.

If the bug persists on a maintained version of gluster or against the mainline
gluster repository, request that it be reopened and the Version field be marked
appropriately.

