Implement measures that can mitigate the effects of an unauthorized person taking over a geo-replication master.
*** Bug 2831 has been marked as a duplicate of this bug. ***
CHANGE: http://review.gluster.com/399 (gsyncd:) merged in master by Vijay Bellur (vijay)
Responding to this from the Gerrit commit:

> "glusterd security settings are too coarse, so that if we made it possible
> for an unprivileged gsyncd to operate, we would open up too far."

This is related to an issue that we (in HekaFS-land) have with glusterd, and the solution might be the same: use the SSL-transport code to identify users via certificates, and then implement proper role-based access control based on those identities. If such mechanisms were in place in glusterd, would there even be a need for separate mechanisms in georep to ensure this kind of safety/security?
(In reply to comment #3)
> Responding to this from the Gerrit commit:
>
> "glusterd security settings are too coarse, so that if we made it
> possible for an unprivileged gsyncd to operate, we would open up too far."
>
> This is related to an issue that we (in HekaFS-land) have with glusterd,
> and the solution might be the same: use the SSL-transport code to identify
> users via certificates, and then implement proper role-based access control
> based on those identities. If such mechanisms were in place in glusterd,
> would there even be a need for separate mechanisms in georep to ensure this
> kind of safety/security?

You are right -- if we had a proper auth + RBAC system in place, it could be used to implement the needed access control mechanism for geo-rep. However, since we do not have such a thing at hand as of now, the design philosophy I followed is simply KISS. This has the advantage that what has been and is being done for geo-rep is by and large the common denominator of any kind of access control work, so I strongly hope that most of the code we write will be durable. In particular:

- We have added the mountbroker service, through which a mount of a volume can be requested from glusterd. As of now, handling of the MOUNT message is implemented with a minimalistic authentication mechanism (so we didn't give ourselves a chance to get it wrong ;)) -- indeed, we put the emphasis on making sure that no one can access the mount except the one who should. If glusterfs later gets a proper authentication framework, the mountbroker backend can be kept and the needed auth hooks can be added.

- The sentence you quote above refers to the need to add a layer to glusterd RPC by which messages from the cli to glusterd can be selectively allowed or rejected. As of now, the selection logic will just check whether the requestor is privileged (via the good old privileged-port-binding hack). Instead of this logic, we can later plug in a more sophisticated cert-based mechanism -- but the need to distinguish between various cli ops will not go away, so again, the core code can be kept.
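To illustrate the mountbroker point above, here is a sketch of how an unprivileged geo-rep mount could be set up through glusterd's volfile. The option names (`mountbroker-root`, `mountbroker-geo-replication.*`, `geo-replication-log-group`) and the account/group names `geoaccount`/`geogroup` are assumptions drawn from how the feature was later documented, not from this bug report:

```
# /etc/glusterfs/glusterd.vol -- hypothetical mountbroker setup
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    # directory under which mountbroker performs mounts for unprivileged users
    option mountbroker-root /var/mountbroker-root
    # allow unprivileged user "geoaccount" to request mounts of volume "slavevol"
    option mountbroker-geo-replication.geoaccount slavevol
    # group allowed to read geo-replication logs
    option geo-replication-log-group geogroup
end-volume
```

The point of the minimalistic design described above is that only the explicitly listed user/volume pairs can be mounted, and glusterd itself performs the mount on the requestor's behalf.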
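The "privileged port binding hack" mentioned in the second bullet can be sketched as follows. This is a minimal illustration, not glusterd's actual implementation (which is in C); the function names `is_privileged_peer` and `allow_cli_op` are hypothetical:

```python
PRIVILEGED_PORT_MAX = 1023  # on Unix, only root may bind ports below 1024


def is_privileged_peer(peer_addr):
    """Return True if the peer connected from a privileged source port.

    peer_addr is an (ip, port) tuple as returned by socket.getpeername().
    Since only root can bind a port < 1024, a low source port is taken as
    (weak) evidence that the requestor runs as root. A cert-based scheme
    could later replace this check without changing the callers.
    """
    _ip, port = peer_addr[:2]
    return port <= PRIVILEGED_PORT_MAX


def allow_cli_op(conn):
    """Hypothetical gate for a cli-to-glusterd RPC message."""
    return is_privileged_peer(conn.getpeername())
```

The selection logic (which ops to gate) is the durable part; only the predicate itself would be swapped out for a cert-based mechanism.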
CHANGE: http://review.gluster.com/458 ($sbindir is the install path for gluster* binaries,) merged in master by Vijay Bellur (vijay)
CHANGE: http://review.gluster.com/459 (This rewrite does not change functionality;) merged in master by Vijay Bellur (vijay)
CHANGE: http://review.gluster.com/460 (With this change, the suggested way of setting up a geo-sync) merged in master by Vijay Bellur (vijay)
CHANGE: http://review.gluster.com/461 (- require/perform rsync invocation with unprotected args) merged in master by Vijay Bellur (vijay)
CHANGE: http://review.gluster.com/462 (Change-Id: I2da62b34aa833b9a28728fa1db23951f28b7e538) merged in master by Vijay Bellur (vijay)
CHANGE: http://review.gluster.com/488 (- gsyncd gets allow-network tunable which is expected to) merged in master by Vijay Bellur (vijay)
CHANGE: http://review.gluster.com/3189 (geo-rep / glusterd: update list of reserved tunables) merged in master by Anand Avati (avati)
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report. glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137 [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user