Bug 1245380

Summary: [RFE] Render all mounts of a volume defunct upon access revocation
Product: [Community] GlusterFS
Reporter: Csaba Henk <csaba>
Component: core
Assignee: Prasanna Kumar Kalever <prasanna.kalever>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: mainline
CC: bugs, pkarampu, prasanna.kalever, rcyriac, rgowdapp, sankarshan
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Blocks (view as bug list): 1265571, 1271065
Environment:
Last Closed: 2016-06-16 13:25:33 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1265571, 1271065

Description Csaba Henk 2015-07-21 23:56:49 UTC
The auth.ssl-allow volume option -- and most likely auth.allow as well, although we haven't yet confirmed that -- follows the access logic of Unix files: whoever once obtained a handle to a file by successfully opening it can happily use that handle to do I/O on the file, no matter how the file's permissions change later. So in our case, once someone has mounted the volume, they will have a functional mount even if their access to the volume is revoked in the meantime.

However, the consensus behavior in the cloud industry is the opposite: if access is revoked, that should take effect immediately, and from then on all syscalls issued against existing mounts should fail (preferably with EACCES) whenever they reach the GlusterFS server (i.e. are not served from the local buffer cache).

The new behavior could either be optional (available alongside the old one) or replace it entirely.
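To illustrate the requested behavior, here is a minimal sketch of how revocation might be exercised with the gluster CLI. The volume name (myvol) and the certificate common names (client1, client2) are hypothetical; the expected outcome in the last step describes the behavior this RFE asks for, not what current releases do.

```shell
# Allow only clients whose TLS certificate CN is "client1" to access the volume
gluster volume set myvol auth.ssl-allow 'client1'

# client1 now mounts the volume, e.g.:
#   mount -t glusterfs server:/myvol /mnt/myvol

# Later, revoke client1's access by narrowing the allow list to "client2"
gluster volume set myvol auth.ssl-allow 'client2'

# Desired behavior (this RFE): syscalls on client1's existing mount that
# reach the server should now fail with EACCES, instead of continuing to
# succeed as they do under the old file-handle-like semantics.
```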

Comment 1 Prasanna Kumar Kalever 2015-09-24 12:32:02 UTC
http://review.gluster.org/#/c/12229/

Comment 2 Mike McCune 2016-03-28 23:22:56 UTC
This bug was accidentally moved from POST to MODIFIED due to an error in automation; please contact mmccune with any questions.

Comment 3 Niels de Vos 2016-06-16 13:25:33 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user