Bug 1285126 - RFE: GlusterFS NFS does not implement an all_squash volume setting
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: protocol
Version: mainline
Hardware: All
OS: All
Priority: low
Severity: low
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2015-11-24 23:52 UTC by Earl Ruby
Modified: 2019-03-25 16:30 UTC
CC List: 5 users

Fixed In Version: glusterfs-6.0
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-25 16:30:11 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Links
Gluster.org Gerrit 21607: protocol/server: support server.all-squash (Status: Open, Last Updated: 2018-12-05 21:46:25 UTC)

Description Earl Ruby 2015-11-24 23:52:01 UTC
Description of problem:

all_squash is a standard NFS export option that maps all incoming request UIDs to anonuid and all GIDs to anongid; root_squash is the corresponding option that maps only requests from root (UID/GID 0). GlusterFS implements volume settings for anonuid (server.anonuid), anongid (server.anongid), and root_squash (server.root-squash), but does not implement all_squash (which should be server.all-squash).
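
For reference, the squash-related options that do exist today can be set like this (the volume name is illustrative):

gluster volume set volume1 server.root-squash on
gluster volume set volume1 server.anonuid 150
gluster volume set volume1 server.anongid 100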

This option was requested in https://bugzilla.redhat.com/show_bug.cgi?id=1043886 but that part of the request was overlooked.


Version-Release number of selected component (if applicable): 3.7.6

How reproducible: always


Steps to Reproduce:
1. gluster volume set volume1 server.all-squash on


Actual results:
volume set: failed: option : server.all-squash does not exist
Did you mean server.root-squash?


Expected results:
volume set: success


Additional info:
https://bugzilla.redhat.com/show_bug.cgi?id=1043886

Comment 1 Niels de Vos 2015-11-26 15:10:35 UTC
Advanced configuration options for uid/gid squashing are available through NFS-Ganesha. Only very little work is being put into Gluster/NFS; NFS-Ganesha will become the preferred NFS server in the future. If you rely on this functionality, you should probably give NFS-Ganesha with FSAL_GLUSTER a try.
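
If you go the NFS-Ganesha route, all_squash is already expressible there. A sketch of a ganesha.conf EXPORT block using FSAL_GLUSTER (export id, paths, volume name, and the uid/gid values are illustrative):

EXPORT {
    Export_Id = 1;
    Path = "/";
    Pseudo = "/volume1";
    Access_Type = RW;
    Squash = All_Squash;       # map every caller to the anonymous ids
    Anonymous_Uid = 150;
    Anonymous_Gid = 100;
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";
        Volume = "volume1";
    }
}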

---

The server.*-squash options are not NFS-specific; they apply to FUSE mounts and libgfapi access as well.

It probably makes sense to provide a server.all-squash option that forces all access to the volume through the anonymous user/group (anonuid/anongid). However, server.root-squash is a volume-wide option that can only be turned on or off. It might be more useful for a server.all-squash option to check the client's IP address rather than acting as an all-or-nothing switch.
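
A purely hypothetical sketch of such an IP-scoped option, borrowing the address-pattern syntax from the existing auth.allow option (neither the server.all-squash-allow name nor this syntax exists in GlusterFS):

gluster volume set volume1 server.all-squash on
gluster volume set volume1 server.all-squash-allow '192.168.100.*'   # hypothetical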

Earl, could you explain a little about the expected feature and config options that you would like to see?

Comment 2 Earl Ruby 2016-10-13 04:04:54 UTC
The request was to implement all_squash, anonuid and anongid. The example given in https://bugzilla.redhat.com/show_bug.cgi?id=1043886 was the export line:

/home/joe       pc001(rw,all_squash,anonuid=150,anongid=100)

... which will map all incoming user requests to UID 150, GID 100.

glusterfs-3.6.1 implements root_squash, anonuid and anongid.

root_squash maps UID 0 / GID 0 requests to anonuid / anongid.

all_squash should map ALL UIDs to anonuid and ALL GIDs to anongid.
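
A minimal sketch of the requested semantics in C (GlusterFS's implementation language); the struct and function here are illustrative, not the actual protocol/server code:

struct creds {
        unsigned int uid;
        unsigned int gid;
};

/* Apply NFS-style squashing to an incoming request's credentials:
 * root_squash remaps only UID/GID 0, all_squash remaps every caller. */
static void
squash_creds(struct creds *c, unsigned int anonuid, unsigned int anongid,
             int root_squash, int all_squash)
{
        if (all_squash || (root_squash && c->uid == 0))
                c->uid = anonuid;
        if (all_squash || (root_squash && c->gid == 0))
                c->gid = anongid;
}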

Comment 3 Amar Tumballi 2018-10-08 17:28:12 UTC
I recommend marking it as DEFERRED, as we are not planning to fix it any time soon! But I am keeping it here for some more time to see if anyone can pick it up (I have added the EasyFix flag).

Happy to get some help!

Comment 4 Worker Ant 2018-11-09 07:11:46 UTC
REVIEW: https://review.gluster.org/21607 (protocol/server: support server.all-squash) posted (#1) for review on master by Xue Chuanyu

Comment 5 Worker Ant 2018-12-05 21:46:22 UTC
REVIEW: https://review.gluster.org/21607 (protocol/server: support server.all-squash) posted (#9) for review on master by Amar Tumballi

Comment 6 Shyamsundar 2019-03-25 16:30:11 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-6.0, please open a new bug report.

glusterfs-6.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2019-March/000120.html
[2] https://www.gluster.org/pipermail/gluster-users/

