Bug 1110262

Summary: suid,sgid,sticky bit on directories not preserved when doing add-brick
Product: [Community] GlusterFS
Reporter: Anders Blomdell <anders.blomdell>
Component: unclassified
Assignee: Susant Kumar Palai <spalai>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: pre-release
CC: gluster-bugs, nsathyan, rgowdapp, spalai, srangana
Target Milestone: ---
Keywords: Reopened
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.8.0
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-06-16 12:38:43 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1113050
Bug Blocks: 1117822
Attachments:
  Script to test [lack of] suid/sgid bit propagation (none)
  More predictable triggering of anomalies './bug.sh 02777' (none)
  Yet another test-script (none)

Description Anders Blomdell 2014-06-17 10:26:35 UTC
Created attachment 909511 [details]
Script to test [lack of] suid/sgid bit propagation

Description of problem:

When doing add-brick on a volume, suid and sgid bits are not preserved 
on the new brick, leading to [seemingly] random values reported to clients.
 
Version-Release number of selected component (if applicable):

gluster*-3.5.0-3.fc20.x86_64 
(but with 3dc56cbd16b1074d7ca1a4fe4c5bf44400eb63ff reverted due to a shortage
of IPv4 addresses at our site).

How reproducible:

Always 

Steps to Reproduce:
1. Create a gluster volume with one brick
2. Add a directory with suid/sgid/sticky bits set
3. Add a new brick to the volume


Actual results (from attached script):

  Done gluster-01
  peer probe: success. Probe on localhost not needed
  volume create: testvol: success: please start the volume to access data
  volume start: testvol: success

  Before add-brick
  755 /mnt/gluster
  7775 /mnt/gluster/test
  2755 /mnt/gluster/test/dir1
  volume add-brick: success

  After add-brick
  755 /mnt/gluster
  7775 /mnt/gluster/test
  755 /mnt/gluster/test/dir1
  gluster-01: 7775 /data/disk1/gluster/test
  gluster-01: 2755 /data/disk1/gluster/test/dir1
  gluster-01: 1775 /data/disk2/gluster/test
  gluster-01: 755 /data/disk2/gluster/test/dir1

Expected results:

  Done gluster-01
  peer probe: success. Probe on localhost not needed
  volume create: testvol: success: please start the volume to access data
  volume start: testvol: success

  Before add-brick
  755 /mnt/gluster
  7775 /mnt/gluster/test
  2755 /mnt/gluster/test/dir1
  volume add-brick: success

  After add-brick
  755 /mnt/gluster
  7775 /mnt/gluster/test
  2755 /mnt/gluster/test/dir1
  gluster-01: 7775 /data/disk1/gluster/test
  gluster-01: 2755 /data/disk1/gluster/test/dir1
  gluster-01: 7775 /data/disk2/gluster/test
  gluster-01: 2755 /data/disk2/gluster/test/dir1
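The octal modes in these listings are what stat(1)'s %a format prints for each directory. A minimal local sketch of that check (temporary paths, no gluster volume involved; directory names mirror the test tree above):

```shell
# Recreate the test tree locally and print each directory's octal mode,
# the way the attached script reports them (paths are illustrative).
tmp=$(mktemp -d)
mkdir -p "$tmp/test/dir1"
chmod 7775 "$tmp/test"        # suid + sgid + sticky on the parent
chmod 2755 "$tmp/test/dir1"   # sgid on the child
mode_test=$(stat -c %a "$tmp/test")
mode_dir1=$(stat -c %a "$tmp/test/dir1")
printf '%s %s\n' "$mode_test" "$tmp/test" "$mode_dir1" "$tmp/test/dir1"
rm -rf "$tmp"
```

Run against a fuse mount and each brick directory in turn, a loop like this reproduces the before/after comparison shown above.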

Additional info:

Comment 1 Anders Blomdell 2014-06-25 16:23:12 UTC
Created attachment 912148 [details]
More predictable triggering of anomalies './bug.sh 02777'

Run as './bug.sh 02777'. I think this points at mkdir_p as one possible offender.

Comment 2 Anders Blomdell 2014-06-25 17:59:40 UTC
Created attachment 912196 [details]
Yet another test-script

Run with ./bug.sh

Comment 3 Anand Avati 2014-07-01 06:02:10 UTC
REVIEW: http://review.gluster.org/8208 (DHT/permissoin: Let setattr consume stat built from lookup in heal path) posted (#1) for review on master by susant palai (spalai)

Comment 4 Anders Blomdell 2014-07-08 15:14:32 UTC
With refs/changes/08/8208/1 and refs/changes/03/8203/2 applied to the current head, the protection bits seem to be OK.

https://bugzilla.redhat.com/show_bug.cgi?id=1113050 is still blocking :-(

Comment 5 Anand Avati 2015-05-21 06:54:31 UTC
REVIEW: http://review.gluster.org/8208 (DHT/permissoin: Let setattr consume stat built from lookup in heal path) posted (#2) for review on master by Susant Palai (spalai)

Comment 6 Anand Avati 2015-05-21 11:11:43 UTC
REVIEW: http://review.gluster.org/8208 (DHT/permissoin: Let setattr consume stat built from lookup in heal path) posted (#3) for review on master by Susant Palai (spalai)

Comment 7 Anand Avati 2015-06-02 05:54:20 UTC
COMMIT: http://review.gluster.org/8208 committed in master by Raghavendra G (rgowdapp) 
------
commit 010da8e41edc510c4c0236a4ec23e9e628faebe7
Author: Susant Palai <spalai>
Date:   Mon Jun 30 14:04:34 2014 -0400

    DHT/permissoin: Let setattr consume stat built from lookup in heal path
    
    setattr call post mkdir(selfheal) ends up using the mode bits
    returned by mkdir,which miss the required suid, sgid and sticky bit.
    Hence, the fix is to use the mode bits from local->stbuf which was used
    to create the missing directories.
    
    Change-Id: I478708c80e28edc6509b784b0ad83952fc074a5b
    BUG: 1110262
    Signed-off-by: Susant Palai <spalai>
    Reviewed-on: http://review.gluster.org/8208
    Tested-by: NetBSD Build System <jenkins.org>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
    Reviewed-by: Raghavendra G <rgowdapp>
    Tested-by: Raghavendra G <rgowdapp>
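The committed fix makes the heal-path setattr use the full mode from local->stbuf (built from the lookup) rather than the mode returned by mkdir. The underlying point is easy to see on any local filesystem: a plain mkdir leaves only the regular permission bits, and an explicit chmod (the setattr analogue here) is what restores the suid/sgid/sticky bits. A hedged local sketch, with made-up paths:

```shell
# The self-heal path creates the missing directory first; without a
# follow-up setattr carrying the full mode, the special bits are absent.
umask 022
tmp=$(mktemp -d)
mkdir "$tmp/dir1"                 # heal-path mkdir: plain permission bits only
before=$(stat -c %a "$tmp/dir1")  # 755 under umask 022, no sgid
chmod 2755 "$tmp/dir1"            # the setattr with the looked-up mode
after=$(stat -c %a "$tmp/dir1")
printf 'before=%s after=%s\n' "$before" "$after"
rm -rf "$tmp"
```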

Comment 8 Kaleb KEITHLEY 2015-10-22 15:40:20 UTC
pre-release version is ambiguous and about to be removed as a choice.

If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.

Comment 9 Niels de Vos 2016-06-16 12:38:43 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user