Description of problem:
Consider two servers, Server1 and Server2, each with an XFS brick at /mnt/bricks/testvol. Suppose you have a file with the setuid bit set (say, permissions -rwsr-xr-x). If you have a replicated glusterfs volume built on the two aforesaid bricks, and you copy /bin/su to the volume (accessed, in my case, using the glusterfs native client) on Server1 while Server2 is down/inaccessible, then after Server2 comes back up, heal will create the file in Server2:/mnt/bricks/testvol. However, in doing so it will drop the setuid ('s') bit, and the new file will have permissions -rwxr-xr-x.

Version-Release number of selected component (if applicable):
Tested on 3.4.1 and 3.4.0alpha.

How reproducible:
100%

Steps to Reproduce:
1. Create a distributed glusterfs volume 'testvol' on Server1, and mount it using the glusterfs native client at /testvol on Server1.
2. Copy a file with the setuid permission flag on into /testvol, like so:

Server1$ ls -l /bin/su
-rwsr-xr-x 1 root root 84742 Jan 17 2014 /bin/su
Server1$ cp -a /bin/su /testvol
Server1$ ls -l /testvol/su
-rwsr-xr-x 1 root root 84742 Jan 17 2014 /testvol/su

3. Now make testvol a twin-replicated volume by adding a brick on Server2 and setting the replica count to 2. Allow heal to kick in. Mount testvol using the glusterfs native client at /testvol on Server2.

Actual results:
After heal completes, on Server1:

Server1$ ls -l /testvol/su
-rwsr-xr-x 1 root root 84742 Jan 17 2014 /testvol/su
Server1$ ls -l /mnt/bricks/testvol/su
-rwsr-xr-x 1 root root 84742 Jan 17 2014 /mnt/bricks/testvol/su

However, on Server2:

Server2$ ls -l /testvol/su
-rwxr-xr-x 1 root root 84742 Jan 17 2014 /testvol/su
Server2$ ls -l /mnt/bricks/testvol/su
-rwxr-xr-x 1 root root 84742 Jan 17 2014 /mnt/bricks/testvol/su

A stat on each mount point likewise reflects the permissions of the copy in that server's local brick.
Expected results:
Heal should preserve the setuid bit when creating the file for su on Server2. Additionally, a user stat-ing a file through any mount point of a replicated volume should see an essentially identical view of the file.

Additional info:
If both servers are accessible, this problem is not seen when the file is copied into the volume (at either Server1 or Server2). Something to do with the split command-execution mode of AFR?
REVIEW: http://review.gluster.org/6862 (storage/posix: perform chmod after chown.) posted (patchsets #1 through #10) for review on master by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/6862 committed in master by Anand Avati (avati)
------
commit 8148dc2eab154e94d2c9e041cc0abbba9845ce51
Author: Ravishankar N <ranaraya>
Date: Wed Jan 29 12:09:42 2014 +0000

    storage/posix: perform chmod after chown.

    Problem: When a replica brick is added to a volume, the set-user-ID and
    set-group-ID permission bits of files are not set correctly on the new
    brick. The issue is in the posix_setattr() call, where we do a chmod
    followed by a chown. But according to the man page for chown: "When the
    owner or group of an executable file are changed by an unprivileged user
    the S_ISUID and S_ISGID mode bits are cleared. POSIX does not specify
    whether this also should happen when root does the chown()."

    Fix: Swap the chmod and chown calls in posix_setattr().

    Change-Id: I094e47a995c210d2fdbc23ae7a5718286e7a9cf8
    BUG: 1058797
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/6862
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>
    Reviewed-by: Anand Avati <avati>
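The commit's rationale can be demonstrated outside GlusterFS: on Linux (since kernel 2.2.13, even when the caller is root), chown() on an executable file clears the S_ISUID/S_ISGID bits, so the mode must be applied after the ownership. A minimal Python sketch of the two orderings (an illustration only, not the actual C code in the posix translator; apply_attrs is a hypothetical stand-in for posix_setattr()):

```python
import os
import stat
import tempfile

def apply_attrs(path, uid, gid, mode, chown_last):
    """Apply mode and ownership in either order, mimicking posix_setattr()."""
    if chown_last:
        os.chmod(path, mode)     # buggy order: the chown below may strip setuid
        os.chown(path, uid, gid)
    else:
        os.chown(path, uid, gid)
        os.chmod(path, mode)     # fixed order: mode (incl. setuid) applied last

results = {}
uid, gid = os.getuid(), os.getgid()
for chown_last, label in [(True, "chmod-then-chown"), (False, "chown-then-chmod")]:
    fd, path = tempfile.mkstemp()
    os.close(fd)
    apply_attrs(path, uid, gid, 0o4755, chown_last)  # target mode: -rwsr-xr-x
    results[label] = stat.S_IMODE(os.stat(path).st_mode)
    os.remove(path)

print({label: oct(mode) for label, mode in results.items()})
# On Linux the buggy order typically ends up as 0o755 (setuid bit dropped),
# while the fixed order keeps 0o4755.
```

This mirrors the heal scenario: the self-heal on the new brick performs a setattr on the freshly created file, and with chmod first the subsequent chown silently discards the setuid bit.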
REVIEW: http://review.gluster.org/6987 (storage/posix: perform chmod after chown.) posted (#1) for review on release-3.4 by Ravishankar N (ravishankar)
REVIEW: http://review.gluster.org/6988 (storage/posix: perform chmod after chown.) posted (#1) for review on release-3.5 by Ravishankar N (ravishankar)
COMMIT: http://review.gluster.org/6988 committed in release-3.5 by Vijay Bellur (vbellur)
------
commit 5b2d308f4b3969e37ceca7648165add1badb0de3
Author: Ravishankar N <ranaraya>
Date: Wed Jan 29 12:09:42 2014 +0000

    storage/posix: perform chmod after chown.

    Problem: When a replica brick is added to a volume, the set-user-ID and
    set-group-ID permission bits of files are not set correctly on the new
    brick. The issue is in the posix_setattr() call, where we do a chmod
    followed by a chown. But according to the man page for chown: "When the
    owner or group of an executable file are changed by an unprivileged user
    the S_ISUID and S_ISGID mode bits are cleared. POSIX does not specify
    whether this also should happen when root does the chown()."

    Fix: Swap the chmod and chown calls in posix_setattr().

    BUG: 1058797
    Change-Id: Id2fbd8394cf6faf669f414775409f20f46009f2b
    Signed-off-by: Ravishankar N <ravishankar>
    Reviewed-on: http://review.gluster.org/6988
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report. glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user