Created attachment 909511 [details]
Script to test [lack of] suid/sgid bit propagation

Description of problem:
When doing add-brick on a volume, the suid and sgid bits are not preserved on the new brick, leading to [seemingly] random values being reported to clients.

Version-Release number of selected component (if applicable):
gluster*-3.5.0-3.fc20.x86_64 (but with 3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff reverted due to a shortage of IPv4 addresses at our site).

How reproducible:
Always

Steps to Reproduce:
1. Create a gluster volume with one brick
2. Add a directory with suid/sgid/sticky bits set
3. Add a new brick to the volume

Actual results (from attached script):

Done
gluster-01
peer probe: success. Probe on localhost not needed
volume create: testvol: success: please start the volume to access data
volume start: testvol: success
Before add-brick
755 /mnt/gluster
7775 /mnt/gluster/test
2755 /mnt/gluster/test/dir1
volume add-brick: success
After add-brick
755 /mnt/gluster
7775 /mnt/gluster/test
755 /mnt/gluster/test/dir1
gluster-01: 7775 /data/disk1/gluster/test
gluster-01: 2755 /data/disk1/gluster/test/dir1
gluster-01: 1775 /data/disk2/gluster/test
gluster-01: 755 /data/disk2/gluster/test/dir1

Expected results:

Done
gluster-01
peer probe: success. Probe on localhost not needed
volume create: testvol: success: please start the volume to access data
volume start: testvol: success
Before add-brick
755 /mnt/gluster
7775 /mnt/gluster/test
2755 /mnt/gluster/test/dir1
volume add-brick: success
After add-brick
755 /mnt/gluster
7775 /mnt/gluster/test
2755 /mnt/gluster/test/dir1
gluster-01: 7775 /data/disk1/gluster/test
gluster-01: 2755 /data/disk1/gluster/test/dir1
gluster-01: 7775 /data/disk2/gluster/test
gluster-01: 2755 /data/disk2/gluster/test/dir1

Additional info:
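For reference, the mode check in the attached script amounts to walking the mount/bricks and printing each directory's octal mode. A minimal local sketch of that check (the helper name report_modes and the temporary-directory layout are illustrative, not taken from the attachment; no gluster setup is required):

```python
import os
import stat
import tempfile
import shutil

def report_modes(root):
    # Return "octal-mode path" lines for root and every directory below it,
    # matching the shape of the stat output quoted in the report above.
    lines = []
    for dirpath, _dirnames, _files in os.walk(root):
        mode = stat.S_IMODE(os.lstat(dirpath).st_mode)
        lines.append("%o %s" % (mode, dirpath))
    return lines

# Local stand-in for the directory layout from the report.
root = tempfile.mkdtemp()
dir1 = os.path.join(root, "test", "dir1")
os.makedirs(dir1)
os.chmod(os.path.join(root, "test"), 0o7775)  # suid+sgid+sticky, as in the report
os.chmod(dir1, 0o2755)                        # the sgid bit that add-brick loses
for line in report_modes(root):
    print(line)
shutil.rmtree(root)
```

Running this prints a "7775 .../test" and a "2755 .../test/dir1" line; after a buggy add-brick, the equivalent check on the new brick shows "755" for dir1 instead.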
Created attachment 912148 [details]
More predictable triggering of anomalies, './bug.sh 02777'

Run as './bug.sh 02777'. I think this points at mkdir_p as one possible offender.
Created attachment 912196 [details]
Yet another test script

Run with ./bug.sh
REVIEW: http://review.gluster.org/8208 (DHT/permissoin: Let setattr consume stat built from lookup in heal path) posted (#1) for review on master by susant palai (spalai)
With refs/changes/08/8208/1 and refs/changes/03/8203/2 applied to the current head, the protection bits seem to be OK. https://bugzilla.redhat.com/show_bug.cgi?id=1113050 is still blocking :-(
REVIEW: http://review.gluster.org/8208 (DHT/permissoin: Let setattr consume stat built from lookup in heal path) posted (#2) for review on master by Susant Palai (spalai)
REVIEW: http://review.gluster.org/8208 (DHT/permissoin: Let setattr consume stat built from lookup in heal path) posted (#3) for review on master by Susant Palai (spalai)
COMMIT: http://review.gluster.org/8208 committed in master by Raghavendra G (rgowdapp)
------
commit 010da8e41edc510c4c0236a4ec23e9e628faebe7
Author: Susant Palai <spalai>
Date:   Mon Jun 30 14:04:34 2014 -0400

    DHT/permissoin: Let setattr consume stat built from lookup in heal path

    setattr call post mkdir (selfheal) ends up using the mode bits
    returned by mkdir, which miss the required suid, sgid and sticky bit.
    Hence, the fix is to use the mode bits from local->stbuf which was
    used to create the missing directories.

    Change-Id: I478708c80e28edc6509b784b0ad83952fc074a5b
    BUG: 1110262
    Signed-off-by: Susant Palai <spalai>
    Reviewed-on: http://review.gluster.org/8208
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
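The mechanism the commit message describes can be sketched outside GlusterFS (this is an illustrative Python stand-in, not the actual DHT code; the directory names and modes mirror the repro above):

```python
import os
import stat
import tempfile
import shutil

# Sketch of the heal-path bug: recreating a missing directory on the new
# brick and then applying the mode that a plain mkdir reports drops the
# suid/sgid/sticky bits; the fix applies the mode obtained from lookup
# of the source directory instead.
old_umask = os.umask(0o022)            # fixed umask so the result is deterministic

src_root = tempfile.mkdtemp()          # stands in for the existing brick
dst_root = tempfile.mkdtemp()          # stands in for the newly added brick

src = os.path.join(src_root, "dir1")
os.mkdir(src)
os.chmod(src, 0o2755)                             # sgid + rwxr-xr-x on the old brick
src_mode = stat.S_IMODE(os.lstat(src).st_mode)    # mode as seen by lookup

dst = os.path.join(dst_root, "dir1")
os.mkdir(dst)                                     # selfheal creates the directory
buggy_mode = stat.S_IMODE(os.lstat(dst).st_mode)  # 0o755: sgid already missing

os.chmod(dst, src_mode)                           # fixed setattr: use looked-up mode
fixed_mode = stat.S_IMODE(os.lstat(dst).st_mode)

print(oct(src_mode), oct(buggy_mode), oct(fixed_mode))  # 0o2755 0o755 0o2755

os.umask(old_umask)
shutil.rmtree(src_root)
shutil.rmtree(dst_root)
```

The middle value shows why a setattr fed from mkdir's own result can never restore the special bits: mkdir simply does not return them.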
The pre-release version is ambiguous and about to be removed as a choice. If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user