Bug 1058797 - GlusterFS replicate translator does not respect the setuid permission flag during volume heal
Summary: GlusterFS replicate translator does not respect the setuid permission flag during volume heal
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.4.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Ravishankar N
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-01-28 14:39 UTC by Anirban Ghoshal
Modified: 2014-04-17 11:52 UTC (History)
2 users

Fixed In Version: glusterfs-3.5.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-04-17 11:52:42 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments

Description Anirban Ghoshal 2014-01-28 14:39:38 UTC
Description of problem:

Consider two servers, Server1 and Server2, each with an XFS brick at /mnt/bricks/testvol. Suppose you have a file with the setuid bit set (say, permissions -rwsr-xr-x). If you have a replicated glusterfs volume built on those two bricks and you copy /bin/su into the volume (accessed, in my case, via the glusterfs native client) on Server1 while Server2 is down/inaccessible, then after Server2 comes back up, heal will create the file in Server2:/mnt/bricks/testvol. However, in doing so it drops the setuid ('s') bit, and the new file ends up with permissions -rwxr-xr-x.
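For concreteness, a rough shell sketch of that sequence (assuming the replicated volume already exists and is mounted at /testvol on Server1; the explicit heal command is optional, since the self-heal daemon will eventually do the same):

# Server2 powered off or otherwise unreachable at this point
Server1$ cp -a /bin/su /testvol
Server1$ ls -l /testvol/su                   # -rwsr-xr-x, as expected

# bring Server2 back online, then trigger (or wait for) self-heal
Server1$ gluster volume heal testvol full
Server2$ ls -l /mnt/bricks/testvol/su        # -rwxr-xr-x: the setuid bit is gone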

Version-Release number of selected component (if applicable):
Tested on 3.4.1 and 3.4.0alpha.

How reproducible:
100%

Steps to Reproduce:
1. Create a distributed glusterfs volume 'testvol' on Server1, and mount it using the glusterfs native client at /testvol on Server1.
2. Copy a file with the setuid permission flag set into /testvol, like so:

Server1$ ls -l /bin/su
-rwsr-xr-x 1 root root 84742 Jan 17  2014 /bin/su

Server1$ cp -a /bin/su /testvol

Server1$ ls -l /testvol/su
-rwsr-xr-x 1 root root 84742 Jan 17  2014 /testvol/su
 
3. Now make testvol a twin-replicated volume by adding a brick on Server2 and setting the replica count to 2. Allow heal to kick in. Mount testvol using the glusterfs native client at /testvol on Server2 (a sketch of the corresponding CLI commands follows below).
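For reference, a sketch of the CLI commands corresponding to steps 1 and 3, assuming the brick paths above and default options (exact syntax may vary slightly between releases):

# step 1: single-brick (distribute) volume, mounted on Server1
Server1$ gluster volume create testvol Server1:/mnt/bricks/testvol
Server1$ gluster volume start testvol
Server1$ mount -t glusterfs Server1:/testvol /testvol

# step 3: convert to replica 2 by adding a brick on Server2, then heal and mount
Server1$ gluster volume add-brick testvol replica 2 Server2:/mnt/bricks/testvol
Server1$ gluster volume heal testvol full
Server2$ mount -t glusterfs Server2:/testvol /testvol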

Actual results:

After heal completes, it can be seen that, on Server1:
Server1$ ls -l /testvol/su
-rwsr-xr-x 1 root root 84742 Jan 17  2014 /testvol/su
Server1$ ls -l /mnt/bricks/testvol/su
-rwsr-xr-x 1 root root 84742 Jan 17  2014 /mnt/bricks/testvol/su

However, on Server2:
Server2$ ls -l /testvol/su
-rwxr-xr-x 1 root root 84742 Jan 17  2014 /testvol/su
Server2$ ls -l /mnt/bricks/testvol/su
-rwxr-xr-x 1 root root 84742 Jan 17  2014 /mnt/bricks/testvol/su

A stat of the file from either mount point then also reflects the permissions of the copy on that server's local brick.

Expected results:

Heal should preserve the setuid flag while creating the file for su on Server2. Additionally, a user stat-ing a file through any mount point of a replicated volume should see an essentially identical view of the file.

Additional info:

If both servers are accessible, this problem is not seen when the file is copied into the volume (at either Server1 or Server2). Something to do with the way AFR splits command execution across the bricks, perhaps?

Comment 1 Anand Avati 2014-01-29 17:37:07 UTC
REVIEW: http://review.gluster.org/6862 (storage/posix: perform chmod after chown) posted (#1) for review on master by Ravishankar N (ravishankar)

Comment 2 Anand Avati 2014-01-30 11:53:35 UTC
REVIEW: http://review.gluster.org/6862 (storage/posix: perform chmod after chown) posted (#2) for review on master by Ravishankar N (ravishankar)

Comment 3 Anand Avati 2014-02-03 07:19:21 UTC
REVIEW: http://review.gluster.org/6862 (storage/posix: perform chmod after chown) posted (#3) for review on master by Ravishankar N (ravishankar)

Comment 4 Anand Avati 2014-02-03 09:25:15 UTC
REVIEW: http://review.gluster.org/6862 (storage/posix: perform chmod after chown) posted (#4) for review on master by Ravishankar N (ravishankar)

Comment 5 Anand Avati 2014-02-03 09:31:10 UTC
REVIEW: http://review.gluster.org/6862 (storage/posix: perform chmod after chown.) posted (#5) for review on master by Ravishankar N (ravishankar)

Comment 6 Anand Avati 2014-02-04 07:18:31 UTC
REVIEW: http://review.gluster.org/6862 (storage/posix: perform chmod after chown.) posted (#6) for review on master by Ravishankar N (ravishankar)

Comment 7 Anand Avati 2014-02-07 10:13:09 UTC
REVIEW: http://review.gluster.org/6862 (storage/posix: perform chmod after chown.) posted (#7) for review on master by Ravishankar N (ravishankar)

Comment 8 Anand Avati 2014-02-10 05:23:55 UTC
REVIEW: http://review.gluster.org/6862 (storage/posix: perform chmod after chown.) posted (#8) for review on master by Ravishankar N (ravishankar)

Comment 9 Anand Avati 2014-02-10 10:39:54 UTC
REVIEW: http://review.gluster.org/6862 (storage/posix: perform chmod after chown.) posted (#9) for review on master by Ravishankar N (ravishankar)

Comment 10 Anand Avati 2014-02-10 11:12:08 UTC
REVIEW: http://review.gluster.org/6862 (storage/posix: perform chmod after chown.) posted (#10) for review on master by Ravishankar N (ravishankar)

Comment 11 Anand Avati 2014-02-12 06:13:41 UTC
COMMIT: http://review.gluster.org/6862 committed in master by Anand Avati (avati) 
------
commit 8148dc2eab154e94d2c9e041cc0abbba9845ce51
Author: Ravishankar N <ranaraya>
Date:   Wed Jan 29 12:09:42 2014 +0000

    storage/posix: perform chmod after chown.
    
    Problem:
    When a replica brick is added to a volume, set-user-ID and set-group-ID
    permission bits of files are not set correctly in the new brick. The issue
    is in the posix_setattr() call where we do a chmod followed by a chown.
    
    But according to the man pages for chown:
    When the owner or group of an executable file are changed by an unprivileged
    user the S_ISUID and S_ISGID mode bits are cleared.  POSIX does not specify
    whether this also  should  happen  when  root does the chown().
    
    Fix:
    Swap the chmod and chown calls in posix_setattr()
    
    Change-Id: I094e47a995c210d2fdbc23ae7a5718286e7a9cf8
    BUG: 1058797
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/6862
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Reviewed-by: Anand Avati <avati>
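The man-page behaviour quoted above is easy to confirm from a shell, which also illustrates why swapping the calls helps. A minimal sketch (run as root on Linux; the scratch path is arbitrary, and the exact behaviour may depend on the kernel):

# old posix_setattr() order: chmod first, chown second -- the 's' bit is lost
touch /tmp/suid-demo
chmod 4755 /tmp/suid-demo                 # -rwsr-xr-x
chown root:root /tmp/suid-demo            # chown clears S_ISUID on regular files
ls -l /tmp/suid-demo                      # -rwxr-xr-x

# fixed order: chown first, chmod second -- the 's' bit survives
chown root:root /tmp/suid-demo
chmod 4755 /tmp/suid-demo
ls -l /tmp/suid-demo                      # -rwsr-xr-x
rm /tmp/suid-demo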

Comment 12 Anand Avati 2014-02-12 11:24:59 UTC
REVIEW: http://review.gluster.org/6987 (storage/posix: perform chmod after chown.) posted (#1) for review on release-3.4 by Ravishankar N (ravishankar)

Comment 13 Anand Avati 2014-02-12 11:34:44 UTC
REVIEW: http://review.gluster.org/6988 (storage/posix: perform chmod after chown.) posted (#1) for review on release-3.5 by Ravishankar N (ravishankar)

Comment 14 Anand Avati 2014-02-17 09:51:20 UTC
COMMIT: http://review.gluster.org/6988 committed in release-3.5 by Vijay Bellur (vbellur) 
------
commit 5b2d308f4b3969e37ceca7648165add1badb0de3
Author: Ravishankar N <ranaraya>
Date:   Wed Jan 29 12:09:42 2014 +0000

    storage/posix: perform chmod after chown.
    
    Problem:
    When a replica brick is added to a volume, set-user-ID and set-group-ID
    permission bits of files are not set correctly in the new brick. The issue
    is in the posix_setattr() call where we do a chmod followed by a chown.
    
    But according to the man pages for chown:
    When the owner or group of an executable file are changed by an unprivileged
    user the S_ISUID and S_ISGID mode bits are cleared.  POSIX does not specify
    whether this also  should  happen  when  root does the chown().
    
    Fix:
    Swap the chmod and chown calls in posix_setattr()
    
    BUG: 1058797
    Change-Id: Id2fbd8394cf6faf669f414775409f20f46009f2b
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/6988
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 15 Niels de Vos 2014-04-17 11:52:42 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

