The auth.ssl-allow volume option -- and most likely auth.allow as well, although we haven't confirmed that yet -- follows the access logic of Unix files: whoever once obtained a handle to a file by successfully opening it can keep using that handle for I/O, no matter how the file's permissions change later. So in our case, once a user has mounted the volume, she will have a functional mount even if her access to the volume is revoked in the meantime. However, the consensus behavior in the cloud industry is the opposite: revocation should take effect immediately, and from then on all syscalls issued against existing mounts should fail (preferably with EACCES) if they reach the GlusterFS server (i.e. are not served from the local buffer cache). The new behavior could either be optional (alongside the old one) or replace it entirely.
http://review.gluster.org/#/c/12229/
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune with any questions
This bug is being closed because a release that should address the reported issue is now available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report. glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/ [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user