Bug 1271065 - [RFE] Render all mounts of a volume defunct upon access revocation
Status: CLOSED EOL
Product: GlusterFS
Classification: Community
Component: core
Version: 3.7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Prasanna Kumar Kalever
Keywords: Triaged
Depends On: 1245380
Blocks: 1265571 glusterfs-3.7.7
Reported: 2015-10-13 01:59 EDT by Prasanna Kumar Kalever
Modified: 2017-03-08 06:02 EST (History)
8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1245380
Environment:
Last Closed: 2017-03-08 06:02:07 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Prasanna Kumar Kalever 2015-10-13 01:59:58 EDT
+++ This bug was initially created as a clone of Bug #1245380 +++

The auth.ssl-allow volume option -- and most likely auth.allow as well, although we haven't yet confirmed that -- operates along the access logic of files in Unix: that is, one who has obtained a handle to a file by successfully opening it can happily use that handle to do I/O on the file, no matter how the file's permissions change later. So in our case, once one has mounted the volume, she'll have a functional mount even if her access to the volume is revoked in the meantime.

However, the consensus behavior in the cloud industry is the opposite: if access is revoked, that should take effect immediately, and from then on all syscalls against existing mounts should fail (preferably with EACCES) whenever they reach the GlusterFS server (i.e. are not served from the local buffer cache).

The new behavior could either be optional (offered alongside the old one) or take over exclusively.
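For reference, the revocation scenario can be sketched with the standard gluster CLI; the volume name and client addresses below are illustrative, not taken from this report:

```shell
# Assume volume "demo" is already mounted by client 192.0.2.10.

# Restrict access to a different client...
gluster volume set demo auth.allow 192.0.2.20

# ...or explicitly reject the already-connected client:
gluster volume set demo auth.reject 192.0.2.10

# With the behavior described above, the existing mount on 192.0.2.10
# keeps working; only new mount attempts are refused.
```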

--- Additional comment from Prasanna Kumar Kalever on 2015-09-24 08:32:02 EDT ---

http://review.gluster.org/#/c/12229/
Comment 1 Vijay Bellur 2015-10-13 02:01:35 EDT
REVIEW: http://review.gluster.org/12343 (server/protocol: option for dynamic authorization of client permissions) posted (#1) for review on release-3.7 by Prasanna Kumar Kalever (pkalever@redhat.com)
Comment 2 Vijay Bellur 2015-10-13 12:05:41 EDT
COMMIT: http://review.gluster.org/12343 committed in release-3.7 by Raghavendra G (rgowdapp@redhat.com) 
------
commit b8ba012da0cf276329025e30b36f43624548f7f1
Author: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Date:   Fri Aug 21 00:08:23 2015 +0530

    server/protocol: option for dynamic authorization of client permissions
    
    problem:
    assuming a gluster volume is already mounted (for gfapi: say the client
    transport connection is already established), if somebody now changes the
    volume permissions, say *.allow | *.reject, for a client, gluster should
    allow/terminate the client connection based on the fresh set of volume
    options immediately; but in the existing scenario we neither have any
    option to set this behaviour nor take any action unless the volume is
    remounted manually
    
    solution:
    Introduce 'dynamic-auth' option (default: on).
    If 'dynamic-auth' is 'on' gluster will perform dynamic authentication to
    allow/terminate client transport connection immediately in response to
    *.allow | *.reject volume set options; thus if volume permissions change
    for a particular client (say the client is added to the auth.reject list),
    its transport connection to the gluster volume is terminated immediately.
    
    Backport of:
    > Change-Id: I6243a6db41bf1e0babbf050a8e4f8620732e00d8
    > BUG: 1245380
    > Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
    > Reviewed-on: http://review.gluster.org/12229
    > Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    > Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
    > (cherry picked from commit 84e90b756566bc211535a8627ed16d4231110ade)
    
    Change-Id: If7e5c9be912412ea388391ef406ee2c8bedb26b8
    BUG: 1271065
    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
    Reviewed-on: http://review.gluster.org/12343
    Tested-by: NetBSD Build System <jenkins@build.gluster.org>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Raghavendra G <rgowdapp@redhat.com>
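The commit above introduces a 'dynamic-auth' option, which in the volume-set CLI is exposed under the protocol/server translator (assumed here to be the key server.dynamic-auth, per the commit's description; volume name and address are illustrative):

```shell
# Dynamic authorization defaults to on per the commit; setting it explicitly:
gluster volume set demo server.dynamic-auth on

# With it enabled, adding a connected client to the reject list should
# terminate that client's transport connection immediately:
gluster volume set demo auth.reject 192.0.2.10
```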
Comment 3 Raghavendra Talur 2015-11-08 15:23:39 EST
This bug could not be fixed in time for glusterfs-3.7.6.
This is now being tracked for being fixed in glusterfs-3.7.7.
Comment 4 Prasanna Kumar Kalever 2016-04-15 07:36:38 EDT
landed in v3.7.10
Comment 5 Kaushal 2017-03-08 06:02:07 EST
This bug is being closed because GlusterFS-3.7 has reached its end-of-life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.