Bug 1245380 - [RFE] Render all mounts of a volume defunct upon access revocation
Summary: [RFE] Render all mounts of a volume defunct upon access revocation
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Prasanna Kumar Kalever
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1265571 1271065
 
Reported: 2015-07-21 23:56 UTC by Csaba Henk
Modified: 2016-06-16 13:25 UTC
CC: 6 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1265571 1271065
Environment:
Last Closed: 2016-06-16 13:25:33 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Csaba Henk 2015-07-21 23:56:49 UTC
The auth.ssl-allow volume option -- and most likely auth.allow as well, although we haven't confirmed that yet -- follows the access logic of Unix files: whoever has once obtained a handle to a file by successfully opening it can happily keep using that handle for I/O, no matter how the file's permissions change later. So in our case, once someone has mounted the volume, she will have a functional mount even if her access to the volume is revoked in the meantime.
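
For illustration, here is a minimal sketch of the Unix semantics described above, in plain POSIX C (not GlusterFS-specific; the file name is made up for the demo): a descriptor obtained before a chmod keeps working after all permission bits are removed, while a fresh open fails.

/* Plain POSIX demo of open-handle semantics; not GlusterFS code.
   "demo.txt" is a hypothetical scratch file. Run as a non-root user. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "demo.txt";
    int fd = open(path, O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }

    if (chmod(path, 0000) < 0)      /* "revoke" all access bits */
        perror("chmod");

    /* A new open() now fails with EACCES for a non-root user... */
    if (open(path, O_RDONLY) < 0)
        perror("open after chmod");

    /* ...but I/O on the already-open descriptor still succeeds. */
    if (write(fd, "still writable\n", 15) == 15)
        printf("existing handle unaffected by permission change\n");

    close(fd);
    return 0;
}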

However, the consensus behavior in the cloud industry is the opposite: revocation of access should take effect immediately, and from then on all syscalls issued against existing mounts should fail (preferably with EACCES) whenever they reach the GlusterFS server (i.e., are not served from the local buffer cache).
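
For contrast, a sketch of what a client application would observe under the requested semantics. This is illustrative only: the mount path is hypothetical, and the program simply polls a file on a FUSE mount of the volume, expecting open()/read() to start failing with EACCES once the server drops the client from auth.allow / auth.ssl-allow.

/* Illustrative client-side probe; "/mnt/glustervol/probe" is a
   hypothetical path on a FUSE mount of the volume. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/glustervol/probe";
    char buf[16];

    for (;;) {
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            /* Under the requested behavior this reports EACCES as
               soon as access to the volume is revoked server-side. */
            fprintf(stderr, "open: %s\n", strerror(errno));
        } else {
            if (read(fd, buf, sizeof(buf)) < 0)
                fprintf(stderr, "read: %s\n", strerror(errno));
            close(fd);
        }
        sleep(5);   /* poll every 5 seconds */
    }
}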

The new behavior could either be made optional (alongside the old one) or replace it entirely.

Comment 1 Prasanna Kumar Kalever 2015-09-24 12:32:02 UTC
http://review.gluster.org/#/c/12229/

Comment 2 Mike McCune 2016-03-28 23:22:56 UTC
This bug was accidentally moved from POST to MODIFIED by an error in automation; please contact mmccune with any questions.

Comment 3 Niels de Vos 2016-06-16 13:25:33 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

