Bug 1614168

Summary: [uss]snapshot: posix acl authentication is not working as expected
Product: [Community] GlusterFS
Component: snapshot
Version: mainline
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Mohammed Rafi KC <rkavunga>
Assignee: bugs <bugs>
QA Contact:
Docs Contact:
CC: bugs
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-5.0
Doc Type: If docs needed, set a value
Last Closed: 2018-10-23 15:16:35 UTC
Type: Bug

Description Mohammed Rafi KC 2018-08-09 06:34:26 UTC
Description of problem:

When files are accessed through USS (User Serviceable Snapshots), POSIX ACL permission checks are not honoured.


Version-Release number of selected component (if applicable):

mainline

How reproducible:

100%

Steps to Reproduce:
1. Create a user and add the user to a group, say group1.
2. Create a file and set its group ownership and permissions so that group1 has access.
3. Create a snapshot and access that file through USS (a rough sketch of this access check follows the steps).
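
To make step 3 concrete, here is a minimal sketch of the access check. It assumes a FUSE mount at /mnt/gluster, a snapshot named snap1, a test file group-owned by group1 (gid 1001), and a test user with uid 1001; all of these names and IDs are illustrative, not taken from the report. Run it as root so it can switch to the test identity.

/* Illustrative reproducer for step 3 (paths, uid and gid are assumptions).
 * Drop to the unprivileged test user, then open the file on the live volume
 * and through the USS .snaps path. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <grp.h>

int
main (void)
{
    gid_t groups[] = { 1001 };   /* group1 (assumed gid) */
    int fd;

    /* Order matters: supplementary groups and gid first, uid last. */
    if (setgroups (1, groups) < 0 || setgid (1001) < 0 || setuid (1001) < 0) {
        perror ("drop privileges");
        return 1;
    }

    /* Live volume: succeeds, since the file's group is group1. */
    fd = open ("/mnt/gluster/dir/testfile", O_RDONLY);
    if (fd < 0)
        perror ("open live file");
    else
        close (fd);

    /* Same file through USS: before the fix this fails even though the
     * caller is a member of group1. */
    fd = open ("/mnt/gluster/dir/.snaps/snap1/testfile", O_RDONLY);
    if (fd < 0)
        perror ("open file via .snaps");
    else
        close (fd);

    return 0;
}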

Actual results:

Access to the file through the USS (.snaps) path fails with EPERM.

Expected results:

The access should succeed, since the user has the right group permission on the file.
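
For context, the decision involved is an ordinary group-class permission check; below is a rough illustration of that check (not GlusterFS source, and plain mode bits only; posix-acl extends the same idea to ACL entries). Without the caller's gid and supplementary groups available in the call context, such a check cannot pass.

/* Simplified illustration of a group-class permission check: the caller may
 * read the file if one of its groups matches the file's owning group and the
 * group-read bit is set. */
#include <stdbool.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/stat.h>

static bool
caller_in_group (gid_t caller_gid, const gid_t *groups, size_t ngroups,
                 gid_t file_gid)
{
    if (caller_gid == file_gid)
        return true;
    for (size_t i = 0; i < ngroups; i++)
        if (groups[i] == file_gid)
            return true;
    return false;
}

static bool
group_read_allowed (const struct stat *st, gid_t caller_gid,
                    const gid_t *groups, size_t ngroups)
{
    return caller_in_group (caller_gid, groups, ngroups, st->st_gid) &&
           (st->st_mode & S_IRGRP);
}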

Additional info:

Comment 1 Worker Ant 2018-08-09 06:47:23 UTC
REVIEW: https://review.gluster.org/20684 (snapview/server: Set uid,gid,and groups for gfapi call) posted (#1) for review on master by Mohammed Rafi KC

Comment 2 Worker Ant 2018-08-23 03:46:48 UTC
COMMIT: https://review.gluster.org/20684 committed in master by "Amar Tumballi" <amarts> with a commit message - snapview/server: Set uid,gid,and groups for gfapi call

Before calling gfapi from snapd, we need to set uid, gid
and groups in the context. This is required to do the
validation from posix acl xlator.

Change-Id: I181bea2570a69554ff363bf5a52478ff0363ea47
fixes: bz#1614168
Signed-off-by: Mohammed Rafi KC <rkavunga>
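
For readers following along, here is a minimal sketch of the mechanism the commit describes, using the public gfapi per-thread identity calls glfs_setfsuid, glfs_setfsgid and glfs_setfsgroups. This is not the literal patch; the function name and the way the values are passed in are assumptions, and in snapview-server the uid, gid and groups would come from the incoming client call frame.

/* Sketch only: push the caller's identity into gfapi's per-thread context
 * before snapd issues glfs_* calls on the snapshot volume, so the posix-acl
 * xlator in that graph can validate the access. */
#include <sys/types.h>
#include <glusterfs/api/glfs.h>

static int
set_caller_identity (uid_t uid, gid_t gid, size_t ngroups, const gid_t *groups)
{
    if (glfs_setfsuid (uid) < 0)
        return -1;
    if (glfs_setfsgid (gid) < 0)
        return -1;
    /* Supplementary groups carried with the client request. */
    if (glfs_setfsgroups (ngroups, groups) < 0)
        return -1;
    return 0;
}

snapd would invoke something like this before each gfapi operation it performs on behalf of a client, which is what allows the snapshot graph to apply the same POSIX ACL checks as the live volume.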

Comment 3 Shyamsundar 2018-10-23 15:16:35 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/