<from ticket 2407>

The issue we're having is with permissions. In our current NAS we connect exclusively via NFS and Samba over NFS. All of the exports force the same UID and GID, and I believe if we can get Gluster to do the same our perms issues will be resolved. Unfortunately I'm a little hazy on what that particular translator should look like and where it should go. My first thought was a new translator on the 10.blahblah.vol:

# force UID/GID
volume storage-server
  type features/fixed-id
  fixed-uid 500
  fixed-gid 501
end-volume
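For reference, this is the behaviour a conventional kernel-NFS NAS usually gets from the squash options in /etc/exports; a minimal sketch of that setup, assuming the existing NAS uses the standard squash options (the ticket does not say so explicitly, and the path is a placeholder):

# /etc/exports -- map every client to one fixed UID/GID pair
/export/data  *(rw,all_squash,anonuid=500,anongid=501)

The request here is for Gluster to provide the equivalent fixed-ID mapping on its exports.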
(In reply to comment #0)
> <from ticket 2407>
>
> The issue we're having is with permissions. In our current NAS we connect
> exclusively via NFS and Samba over NFS. All of the exports force the same UID
> and GID, and I believe if we can get Gluster to do the same our perms issues

There is a legacy xlator called 'features/filter' which was written to support fixed-uid/gid and other things. It was meant to be loaded as a top-level module on the 'GlusterFUSE' mountpoint. You can close this by saying we don't support that anymore.
> The issue we're having is with permissions. In our current NAS we connect
> exclusively via NFS and Samba over NFS. All of the exports force the same UID
> and GID, and I believe if we can get Gluster to do the same our perms issues

Hi Harold,

There is a legacy xlator called 'features/filter' which was written to support fixed-uid/gid and other things. It was meant to be loaded as a top-level module on the 'GlusterFUSE' mountpoint. We do not support this translator anymore.
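For illustration only, a sketch of how such a top-level stanza might have looked in a client volfile. This is unsupported; the option names are inferred from the comment above (fixed-uid/gid) and the volume and subvolume names are placeholders, so do not rely on this in a supported setup:

# unsupported legacy example, names are placeholders
volume uid-filter
  type features/filter
  option fixed-uid 500
  option fixed-gid 501
  subvolumes storage-client
end-volume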
We have a customer who requested this feature. We know how to do it, as we've done it before. I can understand telling them we can't do this for technical or even commercial reasons, but telling the customer "we used to do that, but we won't anymore because it's old" is not really an acceptable answer.

"There is a legacy xlator called 'features/filter' which was written to support fixed-uid/gid and other things. It was meant to be loaded as a top-level module on the 'GlusterFUSE' mountpoint. We do not support this translator anymore."

Can we tell the customer how to use this old module? Or do I just tell them "Engineering said they won't do this again"?
These are all 'GlusterFS-Commercial' bugs, mostly related to customers from a year or so back. It would be good to have a resolution on these issues. Moving the component, considering the visibility of the RHS component :-)
This should be possible with nfs-ganesha. The export that needs to force a particular UID/GID should use these options:

  Anonymous_uid = 500;
  Anonymous_gid = 501;
  Squash = "all";

There is currently no plan to extend Gluster/NFS with this functionality. Contributions from the community to add this feature would be accepted, but they may not make it into the Red Hat Storage product. Please re-open this bug if there is a very strong demand and nfs-ganesha cannot be used.
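For completeness, a minimal sketch of what such an export block could look like in ganesha.conf, assuming the FSAL_GLUSTER backend and a Gluster volume named "storage". The export id, path, and volume name are placeholders, and exact parameter names should be checked against the nfs-ganesha version in use:

EXPORT {
    Export_Id = 2;                # placeholder id
    Path = "/storage";            # placeholder export path
    Pseudo = "/storage";
    Access_Type = RW;
    Squash = "all";               # squash every client to the anonymous ids
    Anonymous_uid = 500;
    Anonymous_gid = 501;
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";   # gluster server to talk to
        Volume = "storage";       # placeholder volume name
    }
}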