Bug 1312421 - glusterfs mount point returns "Permission denied"
Status: CLOSED WORKSFORME
Product: GlusterFS
Classification: Community
Component: fuse
Version: 3.7.8
Hardware: x86_64 Linux
Priority: medium  Severity: medium
Assigned To: Niels de Vos
Keywords: Triaged
Reported: 2016-02-26 11:22 EST by bitchecker
Modified: 2016-03-07 06:45 EST

Last Closed: 2016-03-07 06:45:59 EST
Type: Bug


Description bitchecker 2016-02-26 11:22:26 EST
Description of problem:

I have a GlusterFS replicated volume on two nodes. The volume is started, and commands such as:

# gluster volume info

or

# gluster volume status

report a healthy volume with no errors.

If I mount the volume from a client, the mount succeeds, but every operation on the mount point fails with:

Permission denied

Version-Release number of selected component (if applicable):

glusterfs 3.7.8 built on Feb  9 2016 06:29:54
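
A minimal client-side reproduction of the symptom looks roughly like this (the hostname, volume name, and mount point are taken from this report; this sketch is not part of the original comment and requires a live cluster):

```
# on the client: the mount itself succeeds...
mount -t glusterfs gluster01:/volume /mnt

# ...but any modifying operation on the mount point fails
mkdir /mnt/test
# mkdir: cannot create directory '/mnt/test': Permission denied
```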
Comment 2 Niels de Vos 2016-02-26 12:18:33 EST
Please provide some further details:

 - What version are you using on the client, what version on the server?
 - client-side logs from /var/log/glusterfs/ for the mountpoint
 - server-side logs from /var/log/glusterfs/ for glusterd and affected bricks
 - output of "gluster volume info"
 - Did you try to disable the firewall and SELinux to rule out issues with that?
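
For anyone hitting a similar issue, the requested details map to commands and paths roughly like these (log file names vary with the mount point and distribution; a sketch, not part of the original comment):

```
gluster --version                       # run on both client and server
cat /var/log/glusterfs/mnt.log          # client mount log (named after the mount point, here /mnt)
cat /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # server: glusterd log
cat /var/log/glusterfs/bricks/data-brick-volume.log     # server: affected brick log
gluster volume info
getenforce; systemctl status firewalld  # check SELinux and firewall state
```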
Comment 3 bitchecker 2016-02-29 03:44:02 EST
The version is the same on the server and the clients:

# gluster --version
glusterfs 3.7.8 built on Feb  9 2016 06:29:54
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

This is mnt.log on the client (note: the system locale is Italian; "Permesso negato" = "Permission denied", "Operazione non permessa" = "Operation not permitted"):

[2016-02-29 08:23:06.103035] I [MSGID: 100030] [glusterfsd.c:2318:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.7.8 (args: /usr/sbin/glusterfs --volfile-server=gluster01 --volfile-id=/volume /mnt)
[2016-02-29 08:23:06.128488] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-02-29 08:23:06.146453] I [MSGID: 114020] [client.c:2106:notify] 0-volume-client-0: parent translators are ready, attempting connect on transport
[2016-02-29 08:23:06.150067] I [MSGID: 114020] [client.c:2106:notify] 0-volume-client-1: parent translators are ready, attempting connect on transport
[2016-02-29 08:23:06.152210] I [MSGID: 101190] [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
Final graph:
+------------------------------------------------------------------------------+
  1: volume volume-client-0
  2:     type protocol/client
  3:     option ping-timeout 42
  4:     option remote-host gluster01
  5:     option remote-subvolume /data/brick/volume
  6:     option transport-type socket
  7:     option send-gids true
  8: end-volume
  9:  
 10: volume volume-client-1
 11:     type protocol/client
 12:     option ping-timeout 42
 13:     option remote-host gluster02
 14:     option remote-subvolume /data/brick/volume
 15:     option transport-type socket
 16:     option send-gids true
 17: end-volume
 18:  
 19: volume volume-replicate-0
 20:     type cluster/replicate
 21:     subvolumes volume-client-0 volume-client-1
 22: end-volume
 23:  
 24: volume volume-dht
 25:     type cluster/distribute
 26:     option lookup-optimize on
 27:     option readdir-optimize on
 28:     subvolumes volume-replicate-0
 29: end-volume
 30:  
 31: volume volume-write-behind
 32:     type performance/write-behind
 33:     option cache-size 10MB
 34:     subvolumes volume-dht
 35: end-volume
 36:  
 37: volume volume-read-ahead
 38:     type performance/read-ahead
 39:     subvolumes volume-write-behind
 40: end-volume
 41:  
 42: volume volume-readdir-ahead
 43:     type performance/readdir-ahead
 44:     subvolumes volume-read-ahead
 45: end-volume
 46:  
 47: volume volume-io-cache
 48:     type performance/io-cache
 49:     option cache-size 500MB
 50:     subvolumes volume-readdir-ahead
 51: end-volume
 52:  
 53: volume volume-quick-read
 54:     type performance/quick-read
 55:     option cache-size 500MB
 56:     subvolumes volume-io-cache
 57: end-volume
 58:  
 59: volume volume-open-behind
 60:     type performance/open-behind
 61:     option read-after-open yes
 62:     subvolumes volume-quick-read
 63: end-volume
 64:  
 65: volume volume-md-cache
 66:     type performance/md-cache
 67:     subvolumes volume-open-behind
 68: end-volume
 69:  
 70: volume volume-io-threads
 71:     type performance/io-threads
 72:     option thread-count 64
 73:     subvolumes volume-md-cache
 74: end-volume
 75:  
 76: volume volume
 77:     type debug/io-stats
 78:     option latency-measurement off
 79:     option count-fop-hits off
 80:     subvolumes volume-io-threads
 81: end-volume
 82:  
 83: volume meta-autoload
 84:     type meta
 85:     subvolumes volume
 86: end-volume
 87:  
+------------------------------------------------------------------------------+
[2016-02-29 08:23:06.157794] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-volume-client-0: changing port to 49152 (from 0)
[2016-02-29 08:23:06.161656] I [rpc-clnt.c:1847:rpc_clnt_reconfig] 0-volume-client-1: changing port to 49152 (from 0)
[2016-02-29 08:23:06.165111] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-volume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-02-29 08:23:06.166987] I [MSGID: 114057] [client-handshake.c:1437:select_server_supported_programs] 0-volume-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-02-29 08:23:06.167130] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-volume-client-0: Connected to volume-client-0, attached to remote volume '/data/brick/volume'.
[2016-02-29 08:23:06.167168] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-volume-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-02-29 08:23:06.167270] I [MSGID: 108005] [afr-common.c:3976:afr_notify] 0-volume-replicate-0: Subvolume 'volume-client-0' came back up; going online.
[2016-02-29 08:23:06.169974] I [MSGID: 114046] [client-handshake.c:1213:client_setvolume_cbk] 0-volume-client-1: Connected to volume-client-1, attached to remote volume '/data/brick/volume'.
[2016-02-29 08:23:06.170002] I [MSGID: 114047] [client-handshake.c:1224:client_setvolume_cbk] 0-volume-client-1: Server and Client lk-version numbers are not same, reopening the fds
[2016-02-29 08:23:06.180160] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-volume-client-0: Server lk version = 1
[2016-02-29 08:23:06.181166] I [fuse-bridge.c:5139:fuse_graph_setup] 0-fuse: switched to graph 0
[2016-02-29 08:23:06.181283] I [MSGID: 114035] [client-handshake.c:193:client_set_lk_version_cbk] 0-volume-client-1: Server lk version = 1
[2016-02-29 08:23:06.181507] I [fuse-bridge.c:4060:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22 kernel 7.22
[2016-02-29 08:23:40.009558] I [MSGID: 109063] [dht-layout.c:702:dht_layout_normalize] 0-volume-dht: Found anomalies in /.trashcan (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0
[2016-02-29 08:23:40.010431] W [MSGID: 114031] [client-rpc-fops.c:2325:client3_3_setattr_cbk] 0-volume-client-0: remote operation failed [Operazione non permessa]
[2016-02-29 08:23:40.010505] W [MSGID: 114031] [client-rpc-fops.c:2325:client3_3_setattr_cbk] 0-volume-client-1: remote operation failed [Operazione non permessa]
[2016-02-29 08:23:40.011762] I [MSGID: 109036] [dht-common.c:8043:dht_log_new_layout_for_dir_selfheal] 0-volume-dht: Setting layout of /.trashcan with [Subvol_name: volume-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295 , Hash: 1 ], 
[2016-02-29 08:23:40.012480] W [MSGID: 114031] [client-rpc-fops.c:1088:client3_3_setxattr_cbk] 0-volume-client-0: remote operation failed [Permesso negato]
[2016-02-29 08:23:40.012553] W [MSGID: 114031] [client-rpc-fops.c:1088:client3_3_setxattr_cbk] 0-volume-client-1: remote operation failed [Permesso negato]
[2016-02-29 08:24:05.925338] E [MSGID: 114031] [client-rpc-fops.c:321:client3_3_mkdir_cbk] 0-volume-client-0: remote operation failed. Path: /test [Permesso negato]
[2016-02-29 08:24:05.925412] E [MSGID: 114031] [client-rpc-fops.c:321:client3_3_mkdir_cbk] 0-volume-client-1: remote operation failed. Path: /test [Permesso negato]
[2016-02-29 08:24:05.925680] W [fuse-bridge.c:462:fuse_entry_cbk] 0-glusterfs-fuse: 13: MKDIR() /test => -1 (Permesso negato)

This is glustershd.log on the server:

[2016-02-29 02:36:01.732925] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing


This is data-brick-volume.log on the server:

[2016-02-29 02:36:01.793819] I [glusterfsd-mgmt.c:1596:mgmt_getspec_cbk] 0-glusterfs: No change in volfile, continuing
[2016-02-29 08:22:55.722325] I [MSGID: 115036] [server.c:552:server_rpc_notify] 0-volume-server: disconnecting connection from docker02-19249-2016/02/26-16:02:39:319291-volume-client-0-0-0
[2016-02-29 08:22:55.722485] I [MSGID: 101055] [client_t.c:420:gf_client_unref] 0-volume-server: Shutting down connection docker02-19249-2016/02/26-16:02:39:319291-volume-client-0-0-0
[2016-02-29 08:23:06.114704] I [MSGID: 115029] [server-handshake.c:612:server_setvolume] 0-volume-server: accepted client from docker02-10733-2016/02/29-08:23:06:90500-volume-client-0-0-0 (version: 3.7.8)
[2016-02-29 08:23:39.959252] I [MSGID: 115072] [server-rpc-fops.c:1773:server_setattr_cbk] 0-volume-server: 21: SETATTR /.trashcan (00000000-0000-0000-0000-000000000005) ==> (Operazione non permessa) [Operazione non permessa]
[2016-02-29 08:23:39.961333] I [MSGID: 115060] [server-rpc-fops.c:890:_gf_server_log_setxattr_failure] 0-volume-server: 26: SETXATTR /.trashcan (00000000-0000-0000-0000-000000000005) ==> trusted.glusterfs.dht
[2016-02-29 08:23:39.961365] I [MSGID: 115060] [server-rpc-fops.c:918:server_setxattr_cbk] 0-volume-server: Permesso negato
[2016-02-29 08:24:05.874148] E [MSGID: 115056] [server-rpc-fops.c:509:server_mkdir_cbk] 0-volume-server: 39: MKDIR /test (00000000-0000-0000-0000-000000000001/test) ==> (Permesso negato) [Permesso negato]


This is the output of "gluster volume info":

# gluster volume info 
 
Volume Name: volume
Type: Replicate
Volume ID: 3c6c72bf-f85d-4c4c-a536-19a75e2ae71b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster01:/data/brick/volume
Brick2: gluster02:/data/brick/volume
Options Reconfigured:
performance.readdir-ahead: enable
auth.allow: *
cluster.readdir-optimize: on
cluster.lookup-optimize: on
server.allow-insecure: on
features.trash-max-filesize: 1GB
features.trash: on
performance.write-behind-window-size: 10MB
nfs.disable: on
server.root-squash: ON
performance.io-thread-count: 64
performance.io-cache: on
performance.client-io-threads: on
nfs.export-volumes: off
nfs.addr-namelookup: off
performance.cache-size: 500MB


SELinux and firewalld are disabled.
Comment 4 bitchecker 2016-03-04 05:34:31 EST
Any updates?
My infrastructure is blocked by this problem!
Comment 5 bitchecker 2016-03-07 05:22:08 EST
As suggested by @rastar on the IRC channel, I'm updating with more info:

gluster volume status:

Status of volume: volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/data/brick/volume      49152     0          Y       2108 
Brick gluster02:/data/brick/volume      49152     0          Y       2109 
Self-heal Daemon on localhost               N/A       N/A        Y       2102 
Self-heal Daemon on gluster02               N/A       N/A        Y       2103 
 
Task Status of Volume volume
------------------------------------------------------------------------------
There are no active volume tasks

"df -h" on the first server:

File system              Dim. Usati Dispon. Uso% Montato su
/dev/mapper/centos-root  4,0G  1,2G    2,9G  29% /
devtmpfs                 483M     0    483M   0% /dev
tmpfs                    493M     0    493M   0% /dev/shm
tmpfs                    493M  6,7M    487M   2% /run
tmpfs                    493M     0    493M   0% /sys/fs/cgroup
/dev/sdb1                 10G   33M     10G   1% /data/brick
/dev/sda1                497M  148M    350M  30% /boot
tmpfs                     99M     0     99M   0% /run/user/0

"df -h" on the second server:

File system              Dim. Usati Dispon. Uso% Montato su
/dev/mapper/centos-root  4,0G  1,2G    2,9G  29% /
devtmpfs                 483M     0    483M   0% /dev
tmpfs                    493M     0    493M   0% /dev/shm
tmpfs                    493M  6,7M    487M   2% /run
tmpfs                    493M     0    493M   0% /sys/fs/cgroup
/dev/sda1                497M  148M    350M  30% /boot
/dev/sdb1                 10G   33M     10G   1% /data/brick
tmpfs                     99M     0     99M   0% /run/user/0

"ps aux | grep gluster" on the first server:

avahi      733  0.0  0.1  28116  1508 ?        Ss   mar03   0:00 avahi-daemon: running [glusterfs1.local]
root       970  0.0  2.2 667456 22952 ?        Ssl  mar03   0:11 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root      2102  0.0  2.7 616716 27324 ?        Ssl  mar03   0:09 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/2e8b6abea9d77a38e2540d59717e058c.socket --xlator-option *replicate*.node-uuid=2e213ba9-a512-412e-a7a9-cee0b98abd2f
root      2108  0.0  3.8 901448 38820 ?        Ssl  mar03   0:26 /usr/sbin/glusterfsd -s gluster01 --volfile-id volume.gluster01.data-brick-volume -p /var/lib/glusterd/vols/volume/run/gluster01-data-brick-volume.pid -S /var/run/gluster/4deb782b2475ab253a925df28f26131f.socket --brick-name /data/brick/volume -l /var/log/glusterfs/bricks/data-brick-volume.log --xlator-option *-posix.glusterd-uuid=2e213ba9-a512-412e-a7a9-cee0b98abd2f --brick-port 49152 --xlator-option volume-server.listen-port=49152

"ps aux | grep gluster" on the second server:

avahi      737  0.0  0.1  28112  1688 ?        Ss   mar03   0:00 avahi-daemon: running [glusterfs2.local]
root       970  0.0  2.0 667460 20660 ?        Ssl  mar03   0:11 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
root      2103  0.0  2.2 600320 22260 ?        Ssl  mar03   0:08 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/1e2471cab262fc7e30dfec5b09ccb81e.socket --xlator-option *replicate*.node-uuid=5f4726a7-1d44-4bf2-8755-0a06868e9086
root      2109  0.0  3.8 900416 38836 ?        Ssl  mar03   0:26 /usr/sbin/glusterfsd -s gluster02 --volfile-id volume.gluster02.data-brick-volume -p /var/lib/glusterd/vols/volume/run/gluster02-data-brick-volume.pid -S /var/run/gluster/6342b2fba5b55dee8aefabf8c6b4bc64.socket --brick-name /data/brick/volume -l /var/log/glusterfs/bricks/data-brick-volume.log --xlator-option *-posix.glusterd-uuid=5f4726a7-1d44-4bf2-8755-0a06868e9086 --brick-port 49152 --xlator-option volume-server.listen-port=49152
Comment 6 bitchecker 2016-03-07 06:45:59 EST
Solved by running: gluster vol set volume server.root-squash off
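
For future readers: the volume had server.root-squash enabled (see the "gluster volume info" output above), which maps requests from root on the client to an anonymous uid/gid before permission checks, much like NFS root squashing. That is why root got EPERM for mkdir/setattr on the mount point. A minimal illustration of the mapping follows (not GlusterFS source code; the anonymous uid 65534 is a common default and an assumption here):

```shell
#!/bin/sh
# Illustration of root-squash semantics (a sketch, not GlusterFS code):
# with root-squash on, uid 0 is replaced by the anonymous uid before
# the server performs its permission checks.
ANON_UID=65534   # typical "nobody"/"nfsnobody" uid; assumed default

# squash <uid> <on|off> -> prints the uid used for permission checks
squash() {
    if [ "$2" = "on" ] && [ "$1" -eq 0 ]; then
        echo "$ANON_UID"
    else
        echo "$1"
    fi
}

squash 0 on     # root is squashed    -> prints 65534
squash 0 off    # root stays root     -> prints 0
squash 1000 on  # non-root unaffected -> prints 1000
```

Note that turning root-squash off removes a safety feature; an alternative is to keep it on and work on the mount point as a non-root user, or set ownership/permissions on the volume root so the squashed identity is allowed to write.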
