Bug 1288115

Summary: [RFE] Pass slave volume in geo-rep as read-only
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Patrick Ladd <pladd>
Component: geo-replication
Assignee: Kotresh HR <khiremat>
Status: CLOSED ERRATA
QA Contact: Rahul Hinduja <rhinduja>
Severity: medium
Docs Contact:
Priority: medium
Version: rhgs-3.1
CC: amarts, amukherj, atumball, avishwan, chrisw, csaba, cyril, khiremat, nchilaka, nlevinki, olim, pladd, rallan, rhinduja, rhs-bugs, sankarshan, sheggodu, skoduri, srmukher, vshankar, zsarosi
Target Milestone: ---
Keywords: FutureFeature, Triaged
Target Release: RHGS 3.4.0
Hardware: All
OS: All
Whiteboard: rebase
Fixed In Version: glusterfs-3.12.2-1
Doc Type: Enhancement
Doc Text:
Red Hat Gluster Storage 3.4 provides an option to make a gluster volume read-only:

    # gluster volume set <VOLNAME> features.read-only on

This option is useful when a particular gluster volume is supposed to be read-only. For example, in geo-replication it ensures that no client other than geo-rep can write to the slave volume.
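In a geo-replication setup, the usage described above can be sketched as the following sequence. Volume and host names (`mastervol`, `slavehost`, `slavevol`, `/mnt/slavevol`) are placeholders, not from this bug:

```
# Sketch (hypothetical names): make the slave volume read-only so that
# only the geo-rep session itself can write to it.
gluster volume set slavevol features.read-only on

# Geo-rep from the master side keeps syncing; with the fix tracked in
# this bug, geo-rep's internal client is not blocked by the option.
gluster volume geo-replication mastervol slavehost::slavevol status

# A regular client mount of slavevol now fails writes:
#   touch /mnt/slavevol/f   ->   touch: cannot touch 'f': Read-only file system
```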
Story Points: ---
Clone Of: 1225546
Clones: 1430608, 1531932 (view as bug list)
Environment:
Last Closed: 2018-09-04 06:27:31 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1225546    
Bug Blocks: 1430608, 1472361, 1503132, 1503135, 1531932    

Comment 2 Patrick Ladd 2015-12-03 15:09:23 UTC
*** Bug 1288113 has been marked as a duplicate of this bug. ***

Comment 3 Patrick Ladd 2015-12-03 15:10:05 UTC
*** Bug 1288114 has been marked as a duplicate of this bug. ***

Comment 4 Atin Mukherjee 2017-03-06 15:43:37 UTC
Upstream patches:
https://review.gluster.org/#/c/16854/
https://review.gluster.org/#/c/16855/

Comment 9 Kotresh HR 2017-09-13 12:34:44 UTC
Patrick,

The patch w.r.t. geo-rep is merged, but the read-only feature as a whole still needs to be targeted. Setting needinfo on Amar for the same.

Comment 10 Amar Tumballi 2017-09-13 13:09:29 UTC
When is it required? We can target it for 3.4.0 and not before.

Comment 13 Patrick Ladd 2017-09-22 03:07:21 UTC
Amar -

Please target this for 3.4.0 then.  It's been available upstream since earlier this year, so I'd like to be able to deliver it to our customer with the next release.

I think I've adjusted the flags to reflect this correctly - please fix them if not.

Patrick

Comment 14 Amar Tumballi 2017-09-22 07:31:06 UTC
Hi Patrick,

Yes, we will keep this targeted for 3.4.0 then. It will land in the first build of 3.4.0 itself, so QE can validate it.

Comment 20 Rahul Hinduja 2018-04-03 11:12:41 UTC
Verified with the build: glusterfs-3.12.2-5.el7rhgs.x86_64

To validate that the functionality is available in the code for FF, the following two scenarios were checked:

1. CLI validation
2. Internal FOPs being synced from master to slave

CLI Validation
--------------

[root@dhcp41-204 ~]# echo "----------------- CLI Validation ---------------------"
----------------- CLI Validation ---------------------
[root@dhcp41-204 ~]# gluster volume set slave read-only on
volume set: success
[root@dhcp41-204 ~]# gluster volume info | grep read-only
features.read-only: on
[root@dhcp41-204 ~]# gluster volume set slave read-only ON
volume set: success
[root@dhcp41-204 ~]# gluster volume info | grep read-only
features.read-only: ON
[root@dhcp41-204 ~]# gluster volume set slave read-only OFF
volume set: success
[root@dhcp41-204 ~]# gluster volume info | grep read-only
features.read-only: OFF
[root@dhcp41-204 ~]# gluster volume set slave read-only OFFF
volume set: failed: option read-only OFFF: 'OFFF' is not a valid boolean value
[root@dhcp41-204 ~]# gluster volume set slave read-only disable
volume set: success
[root@dhcp41-204 ~]# gluster volume info | grep read-only
features.read-only: disable
[root@dhcp41-204 ~]# gluster volume set slave read-only enable
volume set: success
[root@dhcp41-204 ~]# gluster volume set slave read-only enabled
volume set: failed: option read-only enabled: 'enabled' is not a valid boolean value
[root@dhcp41-204 ~]# gluster volume set slave read-only 0
volume set: success
[root@dhcp41-204 ~]# gluster volume info | grep read-only
features.read-only: 0
[root@dhcp41-204 ~]# 
[root@dhcp41-204 ~]# gluster volume set slave read-only 1
volume set: success
[root@dhcp41-204 ~]#

While toggling the option on and off via the CLI, the client behaves as expected: writes fail with read-only on and succeed with it off.
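As an illustration only (not gluster's actual parser), the boolean spellings exercised in the transcript above - on/off, enable/disable, 0/1, case-insensitive, anything else rejected - can be sketched as:

```shell
# Illustration: mimic which values the CLI accepted above for the
# read-only option. gluster may accept further spellings; only the ones
# exercised in this transcript are listed.
parse_volume_bool() {
    case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
        on|enable|1)   echo accepted ;;
        off|disable|0) echo accepted ;;
        *) echo "'$1' is not a valid boolean value" >&2; return 1 ;;
    esac
}
```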

[root@dhcp37-108 slave]# rm -rf test/
rm: cannot remove ‘test/dd.2’: Read-only file system
rm: cannot remove ‘test/dd.3’: Read-only file system
rm: cannot remove ‘test/dd.4’: Read-only file system
rm: cannot remove ‘test/dd.9’: Read-only file system
rm: cannot remove ‘test/dd.8’: Read-only file system
rm: cannot remove ‘test/dd.1’: Read-only file system
rm: cannot remove ‘test/dd.5’: Read-only file system
rm: cannot remove ‘test/dd.7’: Read-only file system
rm: cannot remove ‘test/dd.6’: Read-only file system
[root@dhcp37-108 slave]# touch rahul
touch: cannot touch ‘rahul’: Read-only file system
[root@dhcp37-108 slave]# 
[root@dhcp37-108 slave]# ls
etc  test  thread0  thread1  thread2  thread3  thread4
[root@dhcp37-108 slave]# rm -rf test/
[root@dhcp37-108 slave]# ls
etc  thread0  thread1  thread2  thread3  thread4
[root@dhcp37-108 slave]# touch b
[root@dhcp37-108 slave]# rm -rf b
[root@dhcp37-108 slave]# ls
etc  thread0  thread1  thread2  thread3  thread4
[root@dhcp37-108 slave]# 


Internal FOPs being synced from master to slave
-----------------------------------------------

The following 10 FOPs were performed on the master while the read-only translator was loaded on the slave volume:

create, chmod, chown, chgrp, symlink, hardlink, truncate, rename, setxattr, remove

All FOPs synced successfully to the slave, and the checksums match between master and slave.
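A master/slave comparison like the one above can be approximated with a small helper such as the following sketch; the mount points are hypothetical, and note that this checks file names and contents only, not the ownership, modes, or xattrs that the verification also covered:

```shell
# Sketch: reduce a directory tree to a single checksum, so that two
# mounts can be compared after a sync. Covers file paths and contents.
tree_sum() {
    (cd "$1" && find . -type f -print0 | sort -z | xargs -0 -r md5sum) \
        | md5sum | awk '{print $1}'
}

# Usage (hypothetical mount points):
#   [ "$(tree_sum /mnt/master)" = "$(tree_sum /mnt/slave)" ] && echo in-sync
```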

Moving this bug to verified state for FF.

Comment 24 errata-xmlrpc 2018-09-04 06:27:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607