Bug 1003966 - DHT: while creating files as a non-privileged user, getting error 'Permission denied'
Keywords:
Status: CLOSED DUPLICATE of bug 1082671
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Nithya Balachandran
QA Contact: amainkar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-09-03 15:08 UTC by Rachana Patel
Modified: 2015-05-13 18:15 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-04-06 07:25:40 UTC
Embargoed:


Attachments
log (5.62 MB, application/x-xz), attached 2013-09-03 15:09 UTC by Rachana Patel

Description Rachana Patel 2013-09-03 15:08:15 UTC
Description of problem:
DHT: while creating files as a non-privileged user, getting error 'Permission denied'

Version-Release number of selected component (if applicable):
3.4.0.30rhs-2.el6_4.x86_64

How reproducible:
Not tried.

Steps to Reproduce:
1. Had a DHT volume with 5 bricks; removed 3 bricks, added 1 brick, and deleted all data from the mount point, as below:

server :-
[root@DHT1 ~]# gluster volume create dht 10.70.37.195://rhs/brick1/d1 10.70.37.195://rhs/brick1/d2 10.70.37.195://rhs/brick1/d3 10.70.37.66://rhs/brick1/d1 10.70.37.66://rhs/brick1/d2
volume create: dht: success: please start the volume to access data

[root@DHT1 ~]# gluster volume start dht
volume start: dht: success

[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195://rhs/brick1/d1 start
volume remove-brick start: success
ID: f77c451a-2032-4027-9c29-6e57d1d8b176

[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195://rhs/brick1/d1 status
                                    Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
                               localhost                0        0Bytes             0             0             0      completed             0.00
                             10.70.37.66                0        0Bytes             0             0             0    not started             0.00

[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195://rhs/brick1/d1 stop
                                    Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
                               localhost                0        0Bytes             0             0             0      completed             0.00
                             10.70.37.66                0        0Bytes             0             0             0    not started             0.00
'remove-brick' process may be in the middle of a file migration.
The process will be fully stopped once the migration of the file is complete.
Please check remove-brick process for completion before doing any further brick related tasks on the volume.

[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195://rhs/brick1/d1 status
                                    Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
                               localhost                0        0Bytes             0             0             0      completed             0.00
                             10.70.37.66                0        0Bytes             0             0             0    not started             0.00

[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195://rhs/brick1/d1 status
                                    Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
                               localhost                0        0Bytes             0             0             0      completed             0.00
                             10.70.37.66                0        0Bytes             0             0             0    not started             0.00

[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195://rhs/brick1/d1 
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success

[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195://rhs/brick1/d1 status
                                    Node Rebalanced-files          size       scanned      failures       skipped         status run-time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
                               localhost                0        0Bytes             0             0             0    not started             0.00
                             10.70.37.66                0        0Bytes             0             0             0    not started             0.00
[root@DHT1 ~]# gluster volume status dht
Status of volume: dht
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.37.195:/rhs/brick1/d2			49153	Y	3588
Brick 10.70.37.195:/rhs/brick1/d3			49154	Y	3599
Brick 10.70.37.66:/rhs/brick1/d1			49152	Y	8231
Brick 10.70.37.66:/rhs/brick1/d2			49153	Y	8242
NFS Server on localhost					2049	Y	4753
NFS Server on 10.70.37.66				2049	Y	9321
 
There are no active volume tasks

[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195://rhs/brick1/d3 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success
[root@DHT1 ~]# gluster volume status dht
Status of volume: dht
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.37.195:/rhs/brick1/d2			49153	Y	3588
Brick 10.70.37.66:/rhs/brick1/d1			49152	Y	8231
Brick 10.70.37.66:/rhs/brick1/d2			49153	Y	8242
NFS Server on localhost					2049	Y	4814
NFS Server on 10.70.37.66				2049	Y	9344
 
There are no active volume tasks

[root@DHT1 ~]# gluster v info dht
 
Volume Name: dht
Type: Distribute
Volume ID: 55e30768-af49-4ab9-8f2a-87fd0af87a69
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.37.195:/rhs/brick1/d2
Brick2: 10.70.37.66:/rhs/brick1/d1
Brick3: 10.70.37.66:/rhs/brick1/d2

[root@DHT1 ~]# gluster volume remove-brick dht 10.70.37.195://rhs/brick1/d2 commit
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit: success

[root@DHT1 ~]# gluster v info dht
 
Volume Name: dht
Type: Distribute
Volume ID: 55e30768-af49-4ab9-8f2a-87fd0af87a69
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.37.66:/rhs/brick1/d1
Brick2: 10.70.37.66:/rhs/brick1/d2

[root@DHT1 ~]# gluster volume add-brick  dht 10.70.37.195://rhs/brick1/d4
volume add-brick: success


client:-
[root@rhs-client22 ~]# mkdir /mnt/dht
[root@rhs-client22 ~]# chmod 777 /mnt/dht
[root@rhs-client22 ~]# mount -t glusterfs 10.70.37.66:/dht /mnt/dht
[root@rhs-client22 ~]# su u1
[u1@rhs-client22 root]$ cd /mnt/dht
[u1@rhs-client22 dht]$ touch f{1..20}
[u1@rhs-client22 dht]$ ls
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9
[u1@rhs-client22 dht]$ rm -rf *
[u1@rhs-client22 dht]$ ls
f11  f12  f13  f14  f15  f16  f17  f3  f4  f6  f7  f8  f9

2. Ran rebalance and verified it completed:

[root@DHT1 ~]# gluster volume rebalance dht start
volume rebalance: dht: success: Starting rebalance on volume dht has been successful.
ID: 299f72bb-a1d1-45f3-be2b-a5def7f04eb9
[root@DHT1 ~]# gluster volume rebalance dht status
                                    Node Rebalanced-files          size       scanned      failures       skipped         status run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------   ------------   --------------
                               localhost                0        0Bytes             0             0             0      completed             0.00
                             10.70.37.66                0        0Bytes             0             0             0      completed             0.00
volume rebalance: dht: success: 
[root@DHT1 ~]# gluster volume status dht
Status of volume: dht
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.37.66:/rhs/brick1/d1			49152	Y	8231
Brick 10.70.37.66:/rhs/brick1/d2			49153	Y	8242
Brick 10.70.37.195:/rhs/brick1/d4			49158	Y	5496
NFS Server on localhost					2049	Y	5727
NFS Server on 10.70.37.66				2049	Y	10176
 
           Task                                      ID         Status
           ----                                      --         ------
      Rebalance    299f72bb-a1d1-45f3-be2b-a5def7f04eb9              3


3. Now tried to create files from the mount point as a non-privileged user:


[u1@rhs-client22 dht]$ touch f{1..10}
touch: cannot touch `f2': Permission denied
touch: cannot touch `f3': Permission denied
touch: cannot touch `f4': Permission denied
touch: cannot touch `f5': Permission denied
touch: cannot touch `f6': Permission denied
touch: cannot touch `f7': Permission denied
touch: cannot touch `f8': Permission denied
touch: cannot touch `f9': Permission denied
touch: cannot touch `f10': Permission denied


Actual results:
File creation for f1 succeeds, but creation of every other file fails with 'Permission denied'.

Expected results:
All files should be created successfully by the non-privileged user.

Additional info:


Client log snippet:
[2013-09-03 12:31:17.007455] D [common-utils.c:248:gf_resolve_ip6] 1-resolver: returning ip-10.70.37.195 (port-24007) for hostname: 10.70.37.195 and port: 24007
[2013-09-03 12:31:17.007992] D [client.c:2050:client_rpc_notify] 1-dht-client-1: got RPC_CLNT_CONNECT
[2013-09-03 12:31:17.008114] D [client-handshake.c:185:client_start_ping] 1-dht-client-1: returning as transport is already disconnected OR there are no frames (1 || 1)
[2013-09-03 12:31:17.008424] D [client-handshake.c:1692:server_has_portmap] 1-dht-client-1: detected portmapper on server
[2013-09-03 12:31:17.008516] D [client-handshake.c:185:client_start_ping] 1-dht-client-1: returning as transport is already disconnected OR there are no frames (1 || 1)
[2013-09-03 12:31:17.008934] D [client-handshake.c:1741:client_query_portmap_cbk] 1-dht-client-1: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2013-09-03 12:31:17.009018] D [socket.c:492:__socket_rwv] 1-dht-client-1: EOF on socket
[2013-09-03 12:31:17.009048] D [socket.c:2237:socket_event_handler] 1-transport: disconnecting now
[2013-09-03 12:31:17.009100] D [client.c:2103:client_rpc_notify] 1-dht-client-1: disconnected from 10.70.37.195:24007. Client process will keep trying to connect to glusterd until brick's port is available. 
[2013-09-03 12:31:18.086212] D [dht-common.c:283:dht_discover_cbk] 4-dht-dht: lookup of /f1 on dht-client-0 returned error (No such file or directory)
[2013-09-03 12:31:18.086321] D [dht-common.c:283:dht_discover_cbk] 4-dht-dht: lookup of /f1 on dht-client-2 returned error (No such file or directory)
[2013-09-03 12:31:18.091330] D [fuse-resolve.c:83:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/f2: failed to resolve (No such file or directory)
[2013-09-03 12:31:18.094118] D [fuse-resolve.c:83:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/f2: failed to resolve (No such file or directory)
[2013-09-03 12:31:18.098316] D [fuse-resolve.c:83:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/f5: failed to resolve (No such file or directory)
[2013-09-03 12:31:18.101044] D [fuse-resolve.c:83:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/f5: failed to resolve (No such file or directory)
[2013-09-03 12:31:18.106937] D [fuse-resolve.c:83:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/f10: failed to resolve (No such file or directory)
[2013-09-03 12:31:18.109922] D [fuse-resolve.c:83:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/f10: failed to resolve (No such file or directory)
[2013-09-03 12:31:19.008035] D [name.c:155:client_fill_address_family] 1-dht-client-0: address-family not specified, guessing it to be inet from (remote-host: 10.70.37.195)
[2013-09-03 12:31:19.013690] D [common-utils.c:248:gf_resolve_ip6] 1-resolver: returning ip-10.70.37.195 (port-24007) for hostname: 10.70.37.195 and port: 24007
[2013-09-03 12:31:19.014214] D [client.c:2050:client_rpc_notify] 1-dht-client-0: got RPC_CLNT_CONNECT
[2013-09-03 12:31:19.014330] D [client-handshake.c:185:client_start_ping] 1-dht-client-0: returning as transport is already disconnected OR there are no frames (1 || 1)
[2013-09-03 12:31:19.014642] D [client-handshake.c:1692:server_has_portmap] 1-dht-client-0: detected portmapper on server
[2013-09-03 12:31:19.014703] D [client-handshake.c:185:client_start_ping] 1-dht-client-0: returning as transport is already disconnected OR there are no frames (1 || 1)
[2013-09-03 12:31:19.015282] D [client-handshake.c:1741:client_query_portmap_cbk] 1-dht-client-0: failed to get the port number for remote subvolume. Please run 'gluster volume status' on server to see if brick process is running.
[2013-09-03 12:31:19.015355] D [socket.c:492:__socket_rwv] 1-dht-client-0: EOF on socket
[2013-09-03 12:31:19.015380] D [socket.c:2237:socket_event_handler] 1-transport: disconnecting now
[2013-09-03 12:31:19.015418] D [client.c:2103:client_rpc_notify] 1-dht-client-0: disconnected from 10.70.37.195:24007. Client process will keep trying to connect to glusterd until brick's port is available. 
[2013-09-03 12:31:20.014199] D [name.c:155:client_fill_address_family] 1-dht-client-1: address-family not specified, guessing it to be inet from (remote-host: 10.70.37.195)
[2013-09-03 12:31:20.018427] D [common-utils.c:248:gf_resolve_ip6] 1-resolver: returning ip-10.70.37.195 (port-24007) for hostname: 10.70.37.195 and port: 24007


....
[2013-09-03 13:06:57.915675] D [socket.c:2237:socket_event_handler] 1-transport: disconnecting now
[2013-09-03 13:06:57.915752] D [client.c:2103:client_rpc_notify] 1-dht-client-1: disconnected from 10.70.37.195:24007. Client process will keep trying to connect to glusterd until brick's port is available. 
[2013-09-03 13:06:59.462443] D [fuse-resolve.c:83:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/new: failed to resolve (No such file or directory)
[2013-09-03 13:06:59.465416] D [fuse-resolve.c:83:fuse_resolve_entry_cbk] 0-fuse: 00000000-0000-0000-0000-000000000001/new: failed to resolve (No such file or directory)

Comment 1 Rachana Patel 2013-09-03 15:09:39 UTC
Created attachment 793235 [details]
log

Comment 4 Scott Haines 2013-09-27 17:08:05 UTC
Targeting for 3.0.0 (Denali) release.

Comment 6 Susant Kumar Palai 2014-03-26 03:13:16 UTC
Rachana,
  Changing the permission of the mount point to 777 before mounting the glusterfs volume will have no effect; once the volume is mounted, the mount point's mode is overwritten by the volume root's default permission of 755. Hence, non-privileged users should not be able to create any files in the first place.

But from the bug description it seems you were able to create files, which would itself be the bug in this case. Hence, the summary of this bug needs to be changed.
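
A minimal sketch to observe this masking; the volume name, server address, and mount path are taken from the report above and are illustrative:

# Run as root on a client; assumes the started volume "dht" served from 10.70.37.66.
mkdir -p /mnt/dht
chmod 777 /mnt/dht
stat -c '%a' /mnt/dht     # prints 777 before mounting
mount -t glusterfs 10.70.37.66:/dht /mnt/dht
stat -c '%a' /mnt/dht     # after mounting, prints the volume root's mode (755 by default)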

Comment 7 Rachana Patel 2014-03-26 06:42:56 UTC
Sorry for the inconvenience caused. Please read those steps as:
client:-
[root@rhs-client22 ~]# mkdir /mnt/dht
[root@rhs-client22 ~]# mount -t glusterfs 10.70.37.66:/dht /mnt/dht
[root@rhs-client22 ~]# chmod 777 /mnt/dht
[root@rhs-client22 ~]# su u1

(the change is in steps 2 and 3: chmod is run after mounting)

About the bug:

1) When we change the permission of the mount point (after mounting), it changes the permission of all brick root directories.
e.g., create a volume, change the permission from the mount point, and verify from the backend:
Volume Name: down
Type: Distribute
Volume ID: d2ea90ea-4d81-4ae3-82f5-925a19a5f53e
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.70.35.153:/rhs/brick1/n1
Brick2: 10.70.35.153:/rhs/brick1/n2

backend:-
ls -l /rhs/brick1
drwxrwxrwx 589 root root 16384 Mar 24 17:21 n1
drwxrwxrwx 589 root root 16384 Mar 24 17:20 n2
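
The propagation can be checked directly with a minimal sketch (assuming the "down" volume above is mounted at a hypothetical path /mnt/down):

mount -t glusterfs 10.70.35.153:/down /mnt/down
chmod 777 /mnt/down
stat -c '%a %n' /rhs/brick1/n1 /rhs/brick1/n2    # both brick roots should now report 777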

2) Now, adding a brick and running rebalance does not change/heal the permission of the newly added brick.

e.g., add a brick to that volume, run rebalance, and check the brick permission:

Volume Name: down
Type: Distribute
Volume ID: d2ea90ea-4d81-4ae3-82f5-925a19a5f53e
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.70.35.153:/rhs/brick1/n1
Brick2: 10.70.35.153:/rhs/brick1/n2
Brick3: 10.70.35.153:/rhs/brick1/n3

backend:-
ls -l /rhs/brick1
drwxrwxrwx 589 root root 16384 Mar 24 17:21 n1
drwxrwxrwx 589 root root 16384 Mar 24 17:20 n2
drwxr-xr-x 589 root root 16384 Mar 24 17:21 n3     <-------------------
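
For reference, a sketch of the sequence behind the listing above, using the same illustrative paths (wait for the rebalance status to report "completed" before checking):

gluster volume add-brick down 10.70.35.153:/rhs/brick1/n3
gluster volume rebalance down start
gluster volume rebalance down status
stat -c '%a %n' /rhs/brick1/n*    # n3 stays at the default 755 while n1 and n2 show 777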


---> So files created by non-root users that hash to that brick fail with 'Permission denied', while files that do not hash to that brick are created successfully.
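
To confirm which brick a particular file hashed to, the pathinfo virtual xattr can be queried from the FUSE mount (a diagnostic sketch; requires getfattr from the attr package; the file name f1 is illustrative):

getfattr -n trusted.glusterfs.pathinfo -e text /mnt/dht/f1
# The output names the server-side brick path that holds the file.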

Comment 8 Nagaprasad Sathyanarayana 2014-05-06 11:43:41 UTC
Dev ack to 3.0 RHS BZs

Comment 10 Susant Kumar Palai 2015-04-06 07:25:40 UTC

*** This bug has been marked as a duplicate of bug 1082671 ***

Comment 11 Susant Kumar Palai 2015-04-06 07:26:39 UTC
Marked as a duplicate, as bug 1082671 has the same reproducer as this one.

