Bug 764557 - (GLUSTER-2825) hardening geo-replication
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Platform: x86_64 Linux
Priority: medium  Severity: low
Assigned To: Csaba Henk
Keywords: FutureFeature
Duplicates: GLUSTER-2831
Reported: 2011-04-21 04:15 EDT by Csaba Henk
Modified: 2015-12-01 11:45 EST (History)
Fixed In Version: glusterfs-3.5.0
Doc Type: Enhancement
Last Closed: 2014-04-17 07:37:42 EDT
Description Csaba Henk 2011-04-21 04:15:08 EDT
Implement some measures which can mitigate the effect of an unauthorized person taking over a geo-replication master.
Comment 1 Csaba Henk 2011-04-25 00:48:18 EDT
*** Bug 2831 has been marked as a duplicate of this bug. ***
Comment 2 Anand Avati 2011-09-12 06:24:22 EDT
CHANGE: http://review.gluster.com/399 (gsyncd:) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 3 Jeff Darcy 2011-09-12 08:01:55 EDT
Responding to this from the Gerrit commit:

"glusterd security settings are too coarse,
so that if we made it possible for an unprivileged gsyncd
to operate, we would open up too far."

This is related to an issue that we (in HekaFS-land) have with glusterd, and the solution might be the same: use the SSL-transport code to identify users via certificates, and then implement proper role-based access control based on those identities.  If such mechanisms were in place in glusterd, would there even be a need for separate mechanisms in georep to ensure this kind of safety/security?
Comment 4 Csaba Henk 2011-09-12 09:09:41 EDT
(In reply to comment #3)
> Responding to this from the Gerrit commit:
> 
> "glusterd security settings are too coarse,
> so that if we made it possible for an unprivileged gsyncd
> to operate, we would open up too far."
> 
> This is related to an issue that we (in HekaFS-land) have with glusterd, and
> the solution might be the same: use the SSL-transport code to identify users
> via certificates, and then implement proper role-based access control based on
> those identities.  If such mechanisms were in place in glusterd, would there
> even be a need for separate mechanisms in georep to ensure this kind of
> safety/security?

You are right -- if we had a proper auth + RBAC system in place, it could be used to implement the needed access control mechanism for geo-rep. However, since we have no such thing at hand as of now, the design philosophy I followed is just... KISS. This has the advantage that what has been / is being done for geo-rep is by and large the common denominator of any kind of access control work, so I very much hope that most of the code we write will be durable. In particular,

- We have added the mountbroker service, through which a mount of a volume can be requested from glusterd. As of now, handling of the MOUNT message is implemented with a minimalistic authentication mechanism (so we didn't give ourselves a chance to get it wrong ;)) -- indeed, the emphasis is on making sure that no one can access the mount except the user who should. If glusterfs later gets a proper authentication framework, the mountbroker backend can be kept and the needed auth hooks added.

- The sentence you are quoting above refers to the need to add a layer to glusterd RPC by which messages from the cli to glusterd can be selectively allowed or rejected. As of now, the selection logic is simply a check on whether the requestor is privileged (via the good old privileged port binding hack). We can later plug in a more sophisticated cert-based mechanism in place of this logic -- but the need to distinguish between the various cli ops will not go away, so again, the core code can be kept.
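
As a sketch of the first point: in released GlusterFS versions the mountbroker is configured through glusterd's volfile. The option names below follow the GlusterFS geo-replication documentation, while the root directory, account name, and volume name are made-up examples:

```
# /etc/glusterfs/glusterd.vol -- illustrative fragment
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    # directory under which mountbroker sets up aux mounts for unprivileged users
    option mountbroker-root /var/mountbroker-root
    # allow the unprivileged user "geoaccount" to request mounts of volume "slavevol"
    option mountbroker-geo-replication.geoaccount slavevol
    option geo-replication-log-group geogroup
end-volume
```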
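
The privileged-port check mentioned in the second point can be sketched as follows. On Unix, only root may bind ports below 1024, so a request arriving from such a source port is taken as evidence that the client runs as root. The function and constant names here are made up for illustration; they are not GlusterFS internals:

```python
PRIVILEGED_PORT_CEILING = 1024  # first non-privileged port on Unix

def is_privileged_peer(peer_port: int) -> bool:
    """Return True if the peer's source port implies root privileges.

    This is the classic "privileged port binding" heuristic: binding a
    port < 1024 requires root, so a connection from such a port is
    treated as coming from a privileged process.
    """
    return 0 < peer_port < PRIVILEGED_PORT_CEILING
```

Note that this heuristic only holds on a trusted network: any host the attacker controls can forge a low source port, which is exactly why a cert-based mechanism would be the eventual replacement.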
Comment 5 Anand Avati 2011-09-19 21:50:14 EDT
CHANGE: http://review.gluster.com/458 ($sbindir is the install path for gluster* binaries,) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 6 Anand Avati 2011-09-22 05:23:01 EDT
CHANGE: http://review.gluster.com/459 (This rewrite does not change functionality;) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 7 Anand Avati 2011-09-22 05:23:34 EDT
CHANGE: http://review.gluster.com/460 (With this change, the suggested way of setting up a geo-sync) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 8 Anand Avati 2011-09-22 05:23:56 EDT
CHANGE: http://review.gluster.com/461 (- require/perform rsync invocation with unprotected args) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 9 Anand Avati 2011-09-22 05:24:25 EDT
CHANGE: http://review.gluster.com/462 (Change-Id: I2da62b34aa833b9a28728fa1db23951f28b7e538) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 10 Anand Avati 2011-09-22 05:24:43 EDT
CHANGE: http://review.gluster.com/488 (- gsyncd gets allow-network tunable which is expected to) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 11 Anand Avati 2012-04-23 23:02:16 EDT
CHANGE: http://review.gluster.com/3189 (geo-rep / glusterd: update list of reserved tunables) merged in master by Anand Avati (avati@redhat.com)
Comment 12 Niels de Vos 2014-04-17 07:37:42 EDT
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
