Bug 1082671 - gluster fails to propagate permissions on the root of a gluster export when adding bricks
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.1
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Susant Kumar Palai
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard: dht-data-loss
Duplicates: 1003966
Depends On:
Blocks: 1294035 1368012 1374573
 
Reported: 2014-03-31 15:19 UTC by Harold Miller
Modified: 2018-12-04 17:51 UTC (History)
6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1294035
Environment:
Last Closed: 2015-12-24 06:46:38 UTC
Embargoed:



Description Harold Miller 2014-03-31 15:19:37 UTC
Description of problem: gluster fails to propagate permissions on the root
of a gluster export when adding bricks. This causes unexpected behavior on the client side.
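The failing invariant is easy to state: every brick's root directory should carry the same mode bits. A minimal check, sketched here with plain local directories standing in for the brick roots (no live cluster assumed; `stat -c` is GNU coreutils):

```shell
# Simulated brick roots: two carry the chmod applied via the client,
# the third (standing in for a newly added brick) keeps its default
# mode, as in this bug.
tmp=$(mktemp -d)
mkdir "$tmp/brick1" "$tmp/brick2" "$tmp/brick3"
chmod 1777 "$tmp/brick1" "$tmp/brick2"   # a+rwx,+t
chmod 0755 "$tmp/brick3"                 # default -- never propagated

# One distinct mode means the roots agree; more than one reproduces the bug.
stat -c '%a' "$tmp"/brick* | sort -u
rm -rf "$tmp"
```

Run against real brick paths on each server, the same one-liner (`stat -c '%a' /bricks/5/brick`) makes the mismatch below immediately visible.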


Version-Release number of selected component (if applicable):


How reproducible: Every time


Steps to Reproduce:
++server config (three nodes with same config):
[root@rhsa3 ~]# lsblk
NAME                        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda1                       202:1    0  100G  0 disk /
xvdb                        202:16   0  150G  0 disk 
└─xvdb1                     202:17   0  150G  0 part 
  ├─vg_rhs-rhslvdemo (dm-0) 253:0    0    1G  0 lvm  /storage
  ├─vg_rhs-brick1 (dm-1)    253:1    0    2G  0 lvm  /bricks/1
  ├─vg_rhs-brick2 (dm-2)    253:2    0    2G  0 lvm  /bricks/2
  ├─vg_rhs-brick3 (dm-3)    253:3    0    2G  0 lvm  /bricks/3
  ├─vg_rhs-brick4 (dm-4)    253:4    0    2G  0 lvm  /bricks/4
  ├─vg_rhs-brick5 (dm-5)    253:5    0    2G  0 lvm  /bricks/5
  ├─vg_rhs-brick6 (dm-6)    253:6    0    2G  0 lvm  /bricks/6
  ├─vg_rhs-brick7 (dm-7)    253:7    0    2G  0 lvm  /bricks/7
  └─vg_rhs-brick8 (dm-8)    253:8    0    2G  0 lvm  /bricks/8

[root@rhsa3 ~]# xfs_info /bricks/5
<snip>

[root@rhsa3 ~]# gluster volume create vol5 rhsa{1,2}:/bricks/5/brick 
volume create: vol5: success: please start the volume to access data
[root@rhsa3 ~]# gluster volume start vol5
volume start: vol5: success

++On client: 
[student@rhsac vol2]$ sudo mkdir -p /mnt/native/vol5
[student@rhsac vol2]$ ls -ld /mnt/native/vol5
drwxr-xr-x. 2 root root 4096 Mar 21 12:52 /mnt/native/vol5
[student@rhsac vol2]$ sudo mount /mnt/native/vol5
[student@rhsac vol2]$ mount | grep vol5
rhsa1:/vol5 on /mnt/native/vol5 type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[student@rhsac vol2]$ sudo chmod a+rwx,+t /mnt/native/vol5
[student@rhsac vol2]$ ls -ld /mnt/native/vol5
drwxrwxrwt. 3 root root 46 Mar 21 12:49 /mnt/native/vol5
[student@rhsac vol2]$ echo 'hello world' > /mnt/native/vol5/x
[student@rhsac vol2]$ ls -l /mnt/native/vol5
total 1
-rw-rw-r--. 1 student student 12 Mar 21 12:55 x

++On server: 
[root@rhsa3 ~]# ls -ld /bricks/5/brick
drwxr-xr-x 2 root root 6 Mar 19 23:52 /bricks/5/brick
[root@rhsa3 ~]# gluster volume add-brick vol5 rhsa3:/bricks/5/brick
volume add-brick: success

++On client:
[student@rhsac vol5]$ for x in `seq 10 22`; do dd if=/dev/zero of=./file_r$x bs=2K count=1; done
1+0 records in
1+0 records out
2048 bytes (2.0 kB) copied, 0.00178166 s, 1.1 MB/s
1+0 records in
1+0 records out
2048 bytes (2.0 kB) copied, 0.00129699 s, 1.6 MB/s
dd: opening `./file_r12': Permission denied
dd: opening `./file_r13': Permission denied
dd: opening `./file_r14': Permission denied
dd: opening `./file_r15': Permission denied
dd: opening `./file_r16': Permission denied
dd: opening `./file_r17': Permission denied
dd: opening `./file_r18': Permission denied
dd: opening `./file_r19': Permission denied
dd: opening `./file_r20': Permission denied
dd: opening `./file_r21': Permission denied
dd: opening `./file_r22': Permission denied

++/bricks/5/brick on each server:
rhsa1: drwxrwxrwt 3 root   root     61 Mar 21 17:00 .
rhsa2: drwxrwxrwt  3 root   root     85 Mar 21 17:00 .
rhsa3: drwxr-xr-x 3 root root 23 Mar 21 16:58 .
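Why only some of the dd writes fail: DHT places each file on one brick by hashing its name, so names that land on rhsa3 hit the 0755 root and are refused for a non-root user, while names landing on rhsa1/rhsa2 (root mode 1777) succeed. A toy illustration of name-based placement, assuming simple modulo assignment -- the real DHT uses per-directory hash ranges stored in the trusted.glusterfs.dht xattr, not cksum:

```shell
# Toy placement only: cksum stands in for the DHT hash function.
bricks=3   # rhsa1, rhsa2, and the newly added rhsa3
for x in $(seq 10 22); do
    name="file_r$x"
    h=$(printf '%s' "$name" | cksum | cut -d' ' -f1)
    echo "$name -> brick$(( h % bricks + 1 ))"
done
```

A file assigned to the brick whose root was never updated fails with EACCES at open time, which matches the intermittent `Permission denied` pattern above.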

++gluster volume rebalance vol5 fix-layout start - no change in dir perms
++gluster volume rebalance vol5 start - no change in perms, files rebalanced with failures
[root@rhsa3 ~]# gluster volume rebalance vol5 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             9             0             0            completed               0.00
                                   rhsa1                0        0Bytes             0             1             0               failed               0.00
                                   rhsa2                0        0Bytes             0             1             0               failed               0.00
volume rebalance: vol5: success: 

++On client: Notice the inconsistency in the affected directory --
[student@rhsac vol5]$ ls -ld .
drwxr-xr-x. 3 root root 200 Mar 21 13:05 .
[student@rhsac vol5]$ ls -ld .
drwxrwxrwt. 3 root root 200 Mar 21 13:05 .
[student@rhsac vol5]$ ls -ld .
drwxrwxrwt. 3 root root 200 Mar 21 13:05 .
[student@rhsac vol5]$ ls -ld .
drwxrwxrwt. 3 root root 200 Mar 21 13:05 .
[student@rhsac vol5]$ ls -ld .
drwxrwxrwt. 3 root root 200 Mar 21 13:05 .
[student@rhsac vol5]$ ls -ld .
drwxrwxrwt. 3 root root 200 Mar 21 13:05 .
[student@rhsac vol5]$ ls -ld .
drwxrwxrwt. 3 root root 200 Mar 21 13:05 .
[student@rhsac vol5]$ ls -ld .
drwxrwxrwt. 3 root root 200 Mar 21 13:05 .
[student@rhsac vol5]$ ls -ld .
drwxrwxrwt. 3 root root 200 Mar 21 13:05 .
[student@rhsac vol5]$ ls -ld .
drwxr-xr-x. 3 root root 200 Mar 21 13:05 .
[student@rhsac vol5]$ ls -ld .
drwxr-xr-x. 3 root root 200 Mar 21 13:05 .
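The flapping above can be quantified instead of eyeballed: sample the mode repeatedly and tally the distinct values observed. A small sketch meant to be run against the mountpoint (`/mnt/native/vol5`), though it works against any directory:

```shell
dir=${1:-.}
# Sample the directory mode 20 times and count distinct values.
# A consistent volume yields a single line; this bug shows two modes
# (drwxr-xr-x and drwxrwxrwt) alternating, presumably as lookups are
# served by different subvolumes.
for i in $(seq 1 20); do
    stat -c '%A' "$dir"
done | sort | uniq -c
```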
[student@rhsac vol5]$ 

++workaround: re-apply the permissions from the client, then force a rebalance

[student@rhsac vol5]$ sudo chmod a+rwx,+t /mnt/native/vol5

[root@rhsa3 ~]# gluster volume rebalance vol5 start force
volume rebalance: vol5: success: Starting rebalance on volume vol5 has been successful.
ID: 12ae2478-7674-4b73-9d0e-ebdf6fbc03dd
[root@rhsa3 ~]# gluster volume rebalance vol5 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             9             0             0            completed               0.00
                                   rhsa1                0        0Bytes            10             0             0            completed               0.00
                                   rhsa2                3         2.0KB            12             0             0            completed               0.00
volume rebalance: vol5: success:

Comment 1 Nagaprasad Sathyanarayana 2014-05-06 11:43:42 UTC
Dev ack to 3.0 RHS BZs

Comment 5 Susant Kumar Palai 2015-04-06 07:25:40 UTC
*** Bug 1003966 has been marked as a duplicate of this bug. ***

Comment 6 Susant Kumar Palai 2015-12-24 06:46:38 UTC
Cloning this to 3.1.

