Bug 1271065 - [RFE] Render all mounts of a volume defunct upon access revocation
Summary: [RFE] Render all mounts of a volume defunct upon access revocation
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: 3.7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Prasanna Kumar Kalever
QA Contact:
URL:
Whiteboard:
Depends On: 1245380
Blocks: 1265571 glusterfs-3.7.7
 
Reported: 2015-10-13 05:59 UTC by Prasanna Kumar Kalever
Modified: 2017-03-08 11:02 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1245380
Environment:
Last Closed: 2017-03-08 11:02:07 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Prasanna Kumar Kalever 2015-10-13 05:59:58 UTC
+++ This bug was initially created as a clone of Bug #1245380 +++

The auth.ssl-allow volume option -- and most likely auth.allow as well, although we haven't yet confirmed that -- follows the access logic of Unix files: whoever once obtained a handle to a file by successfully opening it can happily use that handle to do I/O on the file, no matter how the file's permissions change later. So in our case, once someone has mounted the volume, they will have a functional mount even if their access to the volume is revoked in the meantime.

However, the consensus behavior in the cloud industry is the opposite: if access is revoked, it should take effect immediately, and from then on all syscalls against existing mounts should fail (preferably with EACCES) whenever they reach the GlusterFS server (i.e., are not served from the local buffer cache).

The new behavior could either be optional (alongside the old one) or replace the old behavior entirely.
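
For illustration, here is a minimal sketch of the scenario described above; the volume name, server host, client address, and mount point are hypothetical:

    # Client 10.0.0.5 mounts the volume while it is still authorized
    mount -t glusterfs server1:/testvol /mnt/testvol

    # Administrator later revokes the client's access on the server side
    gluster volume set testvol auth.reject 10.0.0.5

    # Desired behavior (this RFE): further syscalls that reach the
    # server fail, preferably with EACCES
    # Current behavior: the existing mount keeps working
    ls /mnt/testvol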

--- Additional comment from Prasanna Kumar Kalever on 2015-09-24 08:32:02 EDT ---

http://review.gluster.org/#/c/12229/

Comment 1 Vijay Bellur 2015-10-13 06:01:35 UTC
REVIEW: http://review.gluster.org/12343 (server/protocol: option for dynamic authorization of client permissions) posted (#1) for review on release-3.7 by Prasanna Kumar Kalever (pkalever)

Comment 2 Vijay Bellur 2015-10-13 16:05:41 UTC
COMMIT: http://review.gluster.org/12343 committed in release-3.7 by Raghavendra G (rgowdapp) 
------
commit b8ba012da0cf276329025e30b36f43624548f7f1
Author: Prasanna Kumar Kalever <prasanna.kalever>
Date:   Fri Aug 21 00:08:23 2015 +0530

    server/protocol: option for dynamic authorization of client permissions
    
    problem:
    Assume a gluster volume is already mounted (for gfapi: say a client
    transport connection has already been established). If somebody now
    changes the volume permissions, say *.allow | *.reject, for that
    client, gluster should immediately allow or terminate the client
    connection based on the fresh set of volume options. In the existing
    scenario, however, there is neither an option to set this behaviour
    nor any action taken unless the volume is remounted manually.
    
    solution:
    Introduce the 'dynamic-auth' option (default: on).
    If 'dynamic-auth' is 'on', gluster performs dynamic authentication to
    allow or terminate a client transport connection immediately in
    response to *.allow | *.reject volume set options. Thus, if volume
    permissions change for a particular client (say the client is added
    to the auth.reject list), its transport connection to the gluster
    volume is terminated immediately.
    
    Backport of:
    > Change-Id: I6243a6db41bf1e0babbf050a8e4f8620732e00d8
    > BUG: 1245380
    > Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever>
    > Reviewed-on: http://review.gluster.org/12229
    > Tested-by: NetBSD Build System <jenkins.org>
    > Reviewed-by: Raghavendra G <rgowdapp>
    > (cherry picked from commit 84e90b756566bc211535a8627ed16d4231110ade)
    
    Change-Id: If7e5c9be912412ea388391ef406ee2c8bedb26b8
    BUG: 1271065
    Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever>
    Reviewed-on: http://review.gluster.org/12343
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra G <rgowdapp>
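
With this fix, revocation takes effect immediately. A minimal usage sketch, assuming the volume-set key for the protocol/server option introduced above is server.dynamic-auth, and using the same hypothetical volume and client names as earlier:

    # Dynamic authorization defaults to on; set it explicitly if needed
    gluster volume set testvol server.dynamic-auth on

    # Adding a connected client to the reject list now terminates its
    # transport connection immediately
    gluster volume set testvol auth.reject 10.0.0.5

    # On the client, I/O against the existing mount fails instead of
    # silently continuing to work
    ls /mnt/testvol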

Comment 3 Raghavendra Talur 2015-11-08 20:23:39 UTC
This bug could not be fixed in time for glusterfs-3.7.6.
This is now being tracked for being fixed in glusterfs-3.7.7.

Comment 4 Prasanna Kumar Kalever 2016-04-15 11:36:38 UTC
Landed in v3.7.10.

Comment 5 Kaushal 2017-03-08 11:02:07 UTC
This bug is being closed because GlusterFS-3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.

