Bug 1797099 - After upgrade from gluster 7.0 to 7.2 posix-acl.c:262:posix_acl_log_permit_denied
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: posix-acl
Version: 7
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-01-31 21:48 UTC by Strahil Nikolov
Modified: 2020-03-12 12:21 UTC (History)
6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-12 12:21:02 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:


Attachments (Terms of Use)
Trace Logs from gluster2 (choose local on) (13.04 MB, application/gzip)
2020-01-31 21:48 UTC, Strahil Nikolov

Description Strahil Nikolov 2020-01-31 21:48:16 UTC
Created attachment 1656814 [details]
Trace Logs from gluster2 (choose local on)

Description of problem:
After upgrading oVirt from 4.3.8 to 4.3.9 RC1 and Gluster from 7.0 to 7.2, ACL is denying access to some shards.


[2020-01-31 21:14:28.967838] I [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied] 0-data_fast-access-control: client: CTX_ID:3b25391c-1eb3-424d-a1e8-1a2c08ffb556-GRAPH_ID:0-PID:22075-HOST:ovirt2.localdomain-PC_NAME:data_fast-client-1-RECON_NO:-1, gfid: be318638-e8a0-4c6d-977d-7a937aa84806, req(uid:107,gid:107,perm:1,ngrps:4), ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission denied]


Version-Release number of selected component (if applicable):
glusterfs-7.2-1.el7.x86_64
glusterfs-coreutils-0.2.0-1.el7.x86_64
glusterfs-devel-7.2-1.el7.x86_64
python2-gluster-7.2-1.el7.x86_64
glusterfs-libs-7.2-1.el7.x86_64
glusterfs-fuse-7.2-1.el7.x86_64

How reproducible:
Always. The cluster cannot be used at all.

Steps to Reproduce:
1. Upgrade the Engine and reboot
2. Upgrade one of the hosts
3. Upgrade another host
4. Upgrade the last host

Actual results:
The replica volume is not accessible: ACL is denying access, although no ACL option is set on the mount; the only mount option used is 'backup-volfile-servers=gluster2:ovirt3'.

[root@ovirt2 bricks]# gluster volume info data_fast
 
Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
diagnostics.client-log-level: TRACE
diagnostics.brick-log-level: TRACE
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: on
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable


Expected results:
ACL should not prevent qemu from accessing the volume.

Additional info:
1. The cluster was completely powered off and then on -> no result
2. All affected volumes were stopped and started again -> no result
3. Ran a dummy ACL change to reset the cache -> find /rhev/data-center/mnt/glusterSD/ -exec setfacl -m u:root:rwx {} \; -> no result
4. Ran a recursive chown -> chown -R 36:36 /rhev/data-center/mnt/glusterSD/ -> no result

Note: The same has happened when upgrading from 6.5 to 6.6 which led me to upgrade to 7.0 .

Comment 1 Strahil Nikolov 2020-02-02 19:48:10 UTC
Downgrading to Gluster v7.1 (gluster & clients stopped, all nodes rebooted) does not solve the issue.
Downgrading to 7.0 - works.

Packages I'm currently using (and work):
[root@ovirt2 glusterfs]# rpm -qa | grep -E '^gluster|python2-gluster'
glusterfs-coreutils-0.2.0-1.el7.x86_64
glusterfs-extra-xlators-7.0-1.el7.x86_64
glusterfs-api-devel-7.0-1.el7.x86_64
glusterfs-client-xlators-7.0-1.el7.x86_64
glusterfs-geo-replication-7.0-1.el7.x86_64
glusterfs-fuse-7.0-1.el7.x86_64
glusterfs-libs-7.0-1.el7.x86_64
glusterfs-api-7.0-1.el7.x86_64
glusterfs-cli-7.0-1.el7.x86_64
glusterfs-resource-agents-7.0-1.el7.noarch
glusterfs-7.0-1.el7.x86_64
glusterfs-server-7.0-1.el7.x86_64
glusterfs-devel-7.0-1.el7.x86_64
glusterfs-rdma-7.0-1.el7.x86_64
python2-gluster-7.0-1.el7.x86_64
glusterfs-events-7.0-1.el7.x86_64

Comment 2 Ravishankar N 2020-02-17 10:36:13 UTC
Hi Strahil, I'm not an expert on ACLs, but is it possible to give me a test setup to debug on?

Comment 3 Strahil Nikolov 2020-02-21 05:43:50 UTC
Hey Ravi,

Currently I cannot afford to lose the lab.
I will update the ticket once I am able to upgrade to v7.3 (at least one month from now).

Would you recommend enabling the trace logs during the upgrade?
Any other suggestions for the upgrade process?

My setup started as an oVirt lab (4.2.7) 14 months ago with Gluster v3. Due to a bug in Gluster 5.5/5.6, I upgraded to 6.x.
Later, after issues in v6.5, I managed to resolve the ACL issue by upgrading to 7.0.

My data_fast* volumes were affected and the shards of each file were not accessible.
The strange thing is that the engine and data volumes were not affected by the 6.5 issue that forced me to v7.0, and they are also not affected this time.

The only difference is that data_fast consists of 2 NVMe bricks instead of regular ssd (engine) and spinning disks (data).

[root@ovirt1 ~]# gluster volume info all

Volume Name: data
Type: Replicate
Volume ID: ff1b73d2-de13-4b5f-af55-bedda66e8180
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data/data
Brick2: gluster2:/gluster_bricks/data/data
Brick3: ovirt3:/gluster_bricks/data/data (arbiter)
Options Reconfigured:
cluster.choose-local: off
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
storage.owner-uid: 36
storage.owner-gid: 36
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
server.event-threads: 4
client.event-threads: 4
cluster.enable-shared-storage: enable

Volume Name: data_fast
Type: Replicate
Volume ID: 378804bf-2975-44d8-84c2-b541aa87f9ef
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast/data_fast
Brick2: gluster2:/gluster_bricks/data_fast/data_fast
Brick3: ovirt3:/gluster_bricks/data_fast/data_fast (arbiter)
Options Reconfigured:
storage.fips-mode-rchecksum: on
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: on
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable

Volume Name: data_fast2
Type: Replicate
Volume ID: 58a41eab-29a1-4b4d-904f-837eb3d7597e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast2/data_fast2
Brick2: gluster2:/gluster_bricks/data_fast2/data_fast2
Brick3: ovirt3:/gluster_bricks/data_fast2/data_fast2 (arbiter)
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: on
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable

Volume Name: data_fast3
Type: Replicate
Volume ID: 2bef6141-fc50-41fe-8db4-edcddf925f2a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast3/data_fast3
Brick2: gluster2:/gluster_bricks/data_fast3/data_fast3
Brick3: ovirt3:/gluster_bricks/data_fast3/data_fast3 (arbiter)
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: on
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable

Volume Name: data_fast4
Type: Replicate
Volume ID: 6b98de22-1f3c-4e40-a73d-90d425df986f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/gluster_bricks/data_fast4/data_fast4
Brick2: gluster2:/gluster_bricks/data_fast4/data_fast4
Brick3: ovirt3:/gluster_bricks/data_fast4/data_fast4 (arbiter)
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: on
client.event-threads: 4
server.event-threads: 4
storage.owner-uid: 36
storage.owner-gid: 36
performance.strict-o-direct: on
network.ping-timeout: 30
cluster.granular-entry-heal: enable
cluster.enable-shared-storage: enable

Volume Name: engine
Type: Replicate
Volume ID: 30ca1cc2-f2f7-4749-9e2e-cee9d7099ded
Status: Started
Snapshot Count: 2
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/run/gluster/snaps/e3e22cbf22c349df95f8421591fead04/brick1/engine
Brick2: gluster2:/run/gluster/snaps/e3e22cbf22c349df95f8421591fead04/brick2/engine
Brick3: ovirt3:/run/gluster/snaps/e3e22cbf22c349df95f8421591fead04/brick3/engine (arbiter)
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
client.event-threads: 4
server.event-threads: 4
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: on
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
network.ping-timeout: 30
performance.strict-o-direct: on
cluster.granular-entry-heal: enable
cluster.choose-local: on
features.quota: off
features.inode-quota: off
features.quota-deem-statfs: off
features.barrier: disable
cluster.enable-shared-storage: enable

Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: a95052ae-d641-4834-bbc5-6f87898c369b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster2:/var/lib/glusterd/ss_brick
Brick2: ovirt3:/var/lib/glusterd/ss_brick
Brick3: gluster1:/var/lib/glusterd/ss_brick
Options Reconfigured:
cluster.granular-entry-heal: enable
client.event-threads: 4
server.event-threads: 4
network.remote-dio: on
transport.address-family: inet
nfs.disable: on
features.shard: on
user.cifs: off
cluster.choose-local: off
cluster.enable-shared-storage: enable
[root@ovirt1 ~]#

gluster1-> is  the gluster IP on ovirt1
gluster2-> is  the gluster IP on ovirt2
ovirt3  is the arbiter


My mount points in oVirt have only 'backup-volfile-servers=gluster2:ovirt3'; no ACL option is set anywhere.
The pool is also its own client.

Comment 4 Ravishankar N 2020-02-24 06:26:37 UTC
I think we need to find why the FOP is coming with different permissions than what is stored in the inode context:

[2020-01-31 21:14:28.967838] I [MSGID: 139001] [posix-acl.c:262:posix_acl_log_permit_denied] 0-data_fast-access-control: client: CTX_ID:3b25391c-1eb3-424d-a1e8-1a2c08ffb556-GRAPH_ID:0-PID:22075-HOST:ovirt2.localdomain-PC_NAME:data_fast-client-1-RECON_NO:-1, gfid: be318638-e8a0-4c6d-977d-7a937aa84806, req(uid:107,gid:107,perm:1,ngrps:4), ctx(uid:0,gid:0,in-groups:0,perm:000,updated-fop:INVALID, acl:-) [Permission denied]

So for gfid be318638-e8a0-4c6d-977d-7a937aa84806, a file operation came with the credentials "req(uid:107,gid:107,perm:1,ngrps:4)",
whereas what is stored in the inode context in memory is "ctx(uid:0,gid:0,in-groups:0,perm:000)".
Also, the fop is displayed as INVALID when it should have been the actual fop type (LOOKUP, WRITE, etc.).
So I was guessing this could be due to an incorrect inode context. I was hoping that, given a test setup, we could attach gdb to the brick and see what was going on.
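To illustrate the mismatch, here is a simplified model of this kind of access check (a sketch, not the actual posix-acl xlator code): the request's uid/gid are matched against the owner/group permission bits cached in the inode ctx, so with a stale ctx of uid:0/gid:0/perm:000 every non-root request falls through to the all-zero "other" bits and is denied.

```python
# Simplified model of a POSIX mode-bit permission check, mirroring
# the req(...) vs ctx(...) fields in the log above. Not gluster code.
from dataclasses import dataclass

@dataclass
class Ctx:
    uid: int
    gid: int
    perm: int  # mode bits, e.g. 0o660

def permits(ctx: Ctx, req_uid: int, req_gid: int, want: int) -> bool:
    """want is a 3-bit mask: 4=read, 2=write, 1=execute."""
    if req_uid == 0:            # root bypasses mode-bit checks
        return True
    if req_uid == ctx.uid:      # owner bits
        bits = (ctx.perm >> 6) & 7
    elif req_gid == ctx.gid:    # group bits
        bits = (ctx.perm >> 3) & 7
    else:                       # "other" bits
        bits = ctx.perm & 7
    return (bits & want) == want

# The cached context from the log: ctx(uid:0,gid:0,perm:000).
stale = Ctx(uid=0, gid=0, perm=0o000)

# The request from the log: req(uid:107,gid:107,perm:1) is denied,
# because uid 107 falls through to the all-zero "other" bits.
print(permits(stale, 107, 107, 1))                           # False
# With the real on-disk ownership/mode (vdsm:kvm, 0660, uid/gid 36):
print(permits(Ctx(uid=36, gid=36, perm=0o660), 36, 36, 4))   # True
```

This is only a model of why a zeroed-out ctx denies everything non-root; the real translator also evaluates ACL entries and group lists.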

Jiffin, do you see anything obvious in this bug?

Comment 5 Jiffin 2020-02-27 10:07:31 UTC
(In reply to Ravishankar N from comment #4)
> I think we need to find why the FOP is coming with different permissions
> than what is stored in the inode context:
> 
> [2020-01-31 21:14:28.967838] I [MSGID: 139001]
> [posix-acl.c:262:posix_acl_log_permit_denied] 0-data_fast-access-control:
> client: CTX_ID:3b25391c-1eb3-424d-a1e8-1a2c08ffb556-GRAPH_ID:0-PID:2207
> 5-HOST:ovirt2.localdomain-PC_NAME:data_fast-client-1-RECON_NO:-1, gfid:
> be318638-e8a0-4c6d-977d-7a937aa84806, req(uid:107,gid:107,perm:1,ngrps:4),
> ctx(uid:0,gid:0,in-groups:0,perm:000,updated-
> fop:INVALID, acl:-) [Permission denied]
> 
> So for gfid be318638-e8a0-4c6d-977d-7a937aa84806, a file operation came with
> the permissions "req(uid:107,gid:107,perm:1,ngrps:4)"
> whereas the one stored in the inode context in memory is 
> "ctx(uid:0,gid:0,in-groups:0,perm:000,".
> Also, the fop is also displayed as INVALID while it should have been
> something like LOOKUP, WRITE etc. (i.e. whatever the fop type was). 
> So I was guessing this could be due to incorrect inode context. I was hoping
> if we had a test setup we could attach gdb to the brick and see what was
> going on.
> 
> Jiffin, do you see anything obvious in this bug?

I came to know it is a sharded volume. I am not sure how the acl xlator works with all the shards: does it build a ctx for each shard, or only for the head file?

Comment 6 Krutika Dhananjay 2020-02-28 05:42:47 UTC
(In reply to Jiffin from comment #5)
> (In reply to Ravishankar N from comment #4)
> > I think we need to find why the FOP is coming with different permissions
> > than what is stored in the inode context:
> > 
> > [2020-01-31 21:14:28.967838] I [MSGID: 139001]
> > [posix-acl.c:262:posix_acl_log_permit_denied] 0-data_fast-access-control:
> > client: CTX_ID:3b25391c-1eb3-424d-a1e8-1a2c08ffb556-GRAPH_ID:0-PID:2207
> > 5-HOST:ovirt2.localdomain-PC_NAME:data_fast-client-1-RECON_NO:-1, gfid:
> > be318638-e8a0-4c6d-977d-7a937aa84806, req(uid:107,gid:107,perm:1,ngrps:4),
> > ctx(uid:0,gid:0,in-groups:0,perm:000,updated-
> > fop:INVALID, acl:-) [Permission denied]
> > 
> > So for gfid be318638-e8a0-4c6d-977d-7a937aa84806, a file operation came with
> > the permissions "req(uid:107,gid:107,perm:1,ngrps:4)"
> > whereas the one stored in the inode context in memory is 
> > "ctx(uid:0,gid:0,in-groups:0,perm:000,".
> > Also, the fop is also displayed as INVALID while it should have been
> > something like LOOKUP, WRITE etc. (i.e. whatever the fop type was). 
> > So I was guessing this could be due to incorrect inode context. I was hoping
> > if we had a test setup we could attach gdb to the brick and see what was
> > going on.
> > 
> > Jiffin, do you see anything obvious in this bug?
> 
> I came to know it is shard volume, I am not sure how acl xlator works with
> all the shards, do it develops ctx for each shard or only the head.

At the posix acl layer, different shards will be treated as separate inodes. So the special knowledge that they are shards won't exist in the brick stack.

Secondly, the gfid in the log message above - be318638-e8a0-4c6d-977d-7a937aa84806 - suggests it's the "/.shard" directory, not a shard file as such.

-Krutika
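This observation can be cross-checked directly against the log: the shard xlator uses one fixed, well-known gfid for the hidden /.shard directory on every sharded volume (the constant name SHARD_ROOT_GFID below is from memory and may differ in the source tree).

```python
import uuid

# gfid from the posix_acl_log_permit_denied message in this bug
log_gfid = uuid.UUID("be318638-e8a0-4c6d-977d-7a937aa84806")

# Well-known gfid of the hidden /.shard directory; assumed constant
# name SHARD_ROOT_GFID (check the shard xlator source for the exact
# identifier).
shard_root_gfid = uuid.UUID("be318638-e8a0-4c6d-977d-7a937aa84806")

print(log_gfid == shard_root_gfid)  # True -> the denial hit /.shard itself
```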

Comment 7 Strahil Nikolov 2020-03-09 15:46:15 UTC
OK,

I can try to update my oVirt lab and Gluster to 7.3.
Should I enable the trace logs during the upgrade?

Comment 8 Strahil Nikolov 2020-03-10 22:42:39 UTC
The arbiter was upgraded.
1 file never got healed and I removed it from the arbiter - so this one is fixed.

2 volumes show heal is pending for "/" as follows:

[root@ovirt1 /]# gluster volume heal data_fast2 info
Brick gluster1:/gluster_bricks/data_fast2/data_fast2
/ 
Status: Connected
Number of entries: 1

Brick gluster2:/gluster_bricks/data_fast2/data_fast2
/ 
Status: Connected
Number of entries: 1

Brick ovirt3:/gluster_bricks/data_fast2/data_fast2
Status: Connected
Number of entries: 0

[root@ovirt1 /]# gluster volume heal data info summary
Brick gluster1:/gluster_bricks/data/data
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick gluster2:/gluster_bricks/data/data
Status: Connected
Total Number of entries: 1
Number of entries in heal pending: 1
Number of entries in split-brain: 0
Number of entries possibly healing: 0

Brick ovirt3:/gluster_bricks/data/data
Status: Connected
Total Number of entries: 0
Number of entries in heal pending: 0
Number of entries in split-brain: 0
Number of entries possibly healing: 0


Once I resolve this one , I will proceed with the update of the 2 data nodes.

Comment 9 Strahil Nikolov 2020-03-10 22:48:18 UTC
The arbiter is healed, I will update you once the other nodes are upgraded.

Comment 10 Strahil Nikolov 2020-03-10 23:54:50 UTC
All nodes are up and running and everything is healed, but the issue is still present on gluster v7.3.

VMs using "data_fast", "data_fast2", "data_fast3" & "data_fast4" (striped LV in Linux) fail to power up.

Brick log contains:
[2020-03-10 23:44:20.126951] E [MSGID: 115050] [server-rpc-fops_v2.c:157:server4_lookup_cbk] 0-data_fast-server: 44466: LOOKUP /.shard/f476698a-d8d2-4ab2-b9c4-4c276c2eef43.79 (be318638-e8a0-4c6d-977d-7a937aa84806/f476698a-d8d2-4ab2-b9c4-4c276c2eef43.79), client: CTX_ID:c1ae3077-41b1-4e69-ac98-034e3790c2ac-GRAPH_ID:0-PID:431-HOST:ovirt1.localdomain-PC_NAME:data_fast-client-0-RECON_NO:-0, error-xlator: data_fast-access-control [Permission denied]


[root@ovirt1 /]# mount -t glusterfs -o aux-gfid-mount gluster1:/data_fast /mnt
[root@ovirt1 mnt]# getfattr -n trusted.glusterfs.pathinfo -e text /mnt/.gfid/f476698a-d8d2-4ab2-b9c4-4c276c2eef43
getfattr: Removing leading '/' from absolute path names
# file: mnt/.gfid/f476698a-d8d2-4ab2-b9c4-4c276c2eef43
trusted.glusterfs.pathinfo="(<REPLICATE:data_fast-replicate-0> <POSIX(/gluster_bricks/data_fast/data_fast):ovirt3.localdomain:/gluster_bricks/data_fast/data_fast/396604d9-2a9e-49cd-9563-fdc79981f67b/images/7d11479e-1a02-4a74-a9be-14b4e56faaa1/3e9a41f6-652e-4439-bd24-3e7621c27f4a> <POSIX(/gluster_bricks/data_fast/data_fast):ovirt2.localdomain:/gluster_bricks/data_fast/data_fast/396604d9-2a9e-49cd-9563-fdc79981f67b/images/7d11479e-1a02-4a74-a9be-14b4e56faaa1/3e9a41f6-652e-4439-bd24-3e7621c27f4a> <POSIX(/gluster_bricks/data_fast/data_fast):ovirt1.localdomain:/gluster_bricks/data_fast/data_fast/396604d9-2a9e-49cd-9563-fdc79981f67b/images/7d11479e-1a02-4a74-a9be-14b4e56faaa1/3e9a41f6-652e-4439-bd24-3e7621c27f4a>)"


As you can see, the issue is with a shard file and not with the /.shard directory.
[root@ovirt1 mnt]# ls -lZ /gluster_bricks/data_fast/data_fast/396604d9-2a9e-49cd-9563-fdc79981f67b/images/7d11479e-1a02-4a74-a9be-14b4e56faaa1/3e9a41f6-652e-4439-bd24-3e7621c27f4a
-rw-rw----. vdsm kvm system_u:object_r:glusterd_brick_t:s0 /gluster_bricks/data_fast/data_fast/396604d9-2a9e-49cd-9563-fdc79981f67b/images/7d11479e-1a02-4a74-a9be-14b4e56faaa1/3e9a41f6-652e-4439-bd24-3e7621c27f4a


[root@ovirt1 mnt]# ll /rhev/data-center/mnt/glusterSD/gluster1\:_data__fast/396604d9-2a9e-49cd-9563-fdc79981f67b/images/7d11479e-1a02-4a74-a9be-14b4e56faaa1/3e9a41f6-652e-4439-bd24-3e7621c27f4a
-rw-rw----. 1 vdsm kvm 5368709120 Jan 31 01:41 /rhev/data-center/mnt/glusterSD/gluster1:_data__fast/396604d9-2a9e-49cd-9563-fdc79981f67b/images/7d11479e-1a02-4a74-a9be-14b4e56faaa1/3e9a41f6-652e-4439-bd24-3e7621c27f4a

[root@ovirt1 mnt]# dd if=/rhev/data-center/mnt/glusterSD/gluster1\:_data__fast/396604d9-2a9e-49cd-9563-fdc79981f67b/images/7d11479e-1a02-4a74-a9be-14b4e56faaa1/3e9a41f6-652e-4439-bd24-3e7621c27f4a of=/dev/null bs=4M status=progress
5247074304 bytes (5.2 GB) copied, 12.060622 s, 435 MB/s
1280+0 records in
1280+0 records out
5368709120 bytes (5.4 GB) copied, 12.309 s, 436 MB/s
[root@ovirt1 mnt]# 


Previously (v6.5, v7.2), when using dd as the vdsm user, access was denied when the first shard was accessed (approx. 65MB in). When root reads the file and the vdsm user immediately reads it afterwards, everything is fine (temporarily).
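For what it's worth, the numbers in this comment are internally consistent if the volume uses the default features.shard-block-size of 64MB (nothing in the volume info above overrides it):

```python
# Sanity check assuming the default shard block size of 64MB (64 MiB);
# the volume options listed above do not set features.shard-block-size.
SHARD_BLOCK = 64 * 2**20          # 67108864 bytes
SIZE = 5368709120                 # image size from the ll output above

# Shards are named <base-gfid>.<index>, with the base file holding the
# first block, so a 5 GiB image spans the base file plus shards .1-.79.
last_shard = (SIZE - 1) // SHARD_BLOCK
print(last_shard)                 # 79 -> matches the failing LOOKUP on ...eef43.79

# The "approx 65MB" denial point is the first shard boundary: the first
# read past 64 MiB (~67.1 MB) is the first request to leave the base file.
print(SHARD_BLOCK / 10**6)        # ~67.1 MB
```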

Comment 11 Strahil Nikolov 2020-03-11 20:38:51 UTC
The main question is why, at the posix acl layer, shards are treated as separate inodes.
Also, wouldn't it be faster if the posix acl layer skipped the shards -> fewer lookups, faster gluster?

Comment 12 Worker Ant 2020-03-12 12:21:02 UTC
This bug has been moved to https://github.com/gluster/glusterfs/issues/876 and will be tracked there from now on. Visit the GitHub issue URL for further details.

