Bug 1386366 - The FUSE client log is filling up with posix_acl_default and posix_acl_access messages
Summary: The FUSE client log is filling up with posix_acl_default and posix_acl_access...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: rjoseph
QA Contact: Byreddy
URL:
Whiteboard:
Depends On:
Blocks: 1351528 1391387 1392867 1392868 1392869
 
Reported: 2016-10-18 18:49 UTC by Oonkwee Lim
Modified: 2020-02-14 18:02 UTC
CC List: 13 users

Fixed In Version: glusterfs-3.8.4-4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned to: 1391387
Environment:
Last Closed: 2017-03-23 06:12:18 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2705301 0 None None None 2016-11-18 11:45:21 UTC
Red Hat Product Errata RHSA-2017:0486 0 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description Oonkwee Lim 2016-10-18 18:49:26 UTC
Description of problem:

I created a volume and mounted it with the -o acl option.

I then immediately ran setfacl on the mount point:

setfacl -d --set o::rx /mnt-acl

[root@v3136a mnt-acl]# getfacl .
# file: .
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
default:user::rwx
default:group::r-x
default:other::r-x

I created a subdirectory and the default ACL was propagated to it:

[root@v3136a mnt-acl]# getfacl test1.d
# file: test1.d
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
default:user::rwx
default:group::r-x
default:other::r-x

But whenever I opened a file with vi, I would get the following errors without fail:

[2016-10-17 20:15:36.187211] I [dict.c:473:dict_get] (-->/usr/lib64/glusterfs/3.7.9/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x167) [0x7f85be851a97] -->/usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x23a) [0x7f85be63c71a] -->/usr/lib64/libglusterfs.so.0(dict_get+0x63) [0x7f85d18086e3] ) 0-dict: !this || key=system.posix_acl_access [Invalid argument]
[2016-10-17 20:15:36.192142] I [dict.c:473:dict_get] (-->/usr/lib64/glusterfs/3.7.9/xlator/debug/io-stats.so(io_stats_lookup_cbk+0x167) [0x7f85be851a97] -->/usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x28c) [0x7f85be63c76c] -->/usr/lib64/libglusterfs.so.0(dict_get+0x63) [0x7f85d18086e3] ) 0-dict: !this || key=system.posix_acl_default [Invalid argument]

Based on the admin guide, I would have assumed that such errors would not appear:
Effects of a Default ACL
The following are the ways in which the permissions of a directory's default ACLs are passed to the files and subdirectories in it:
A subdirectory inherits the default ACLs of the parent directory both as its default ACLs and as its access ACLs.
A file inherits the default ACLs as its access ACLs.
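As a quick illustration of both rules (hypothetical names; test2.d and file1 are not from the original report, and the exact access entries on the file depend on the creation mode):

mkdir /mnt-acl/test2.d && getfacl /mnt-acl/test2.d    # gets the default entries as both its default and access ACLs
touch /mnt-acl/file1 && getfacl /mnt-acl/file1        # gets the default entries as its access ACL only, masked by the creation mode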

The errors are filling up the client log pretty fast.



Version-Release number of selected component (if applicable):
RHGS 3.1.3 on RHEL 6.8

How reproducible:
Easily

Steps to Reproduce:
1. mount with the -o acl mount option
2. setfacl -d --set o::rx /mnt-acl 
3. vi any file

Actual results:
posix_acl_default and posix_acl_access messages will appear in the log

Expected results:
posix_acl_default and posix_acl_access messages should not appear in the log

Additional info:
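A consolidated reproduction sketch, assuming a test volume named testvol served from host server1 (both names are hypothetical; only the -o acl mount option and the setfacl invocation come from this report, and performance.stat-prefetch is left at its default of on):

mount -t glusterfs -o acl server1:/testvol /mnt-acl
setfacl -d --set o::rx /mnt-acl
echo test > /mnt-acl/afile     # an existing file with no explicit ACL
vi /mnt-acl/afile              # open and close it, then check the client log
grep posix_acl /var/log/glusterfs/mnt-acl.log | tail   # client log name typically follows the <mount-point>.log convention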

Comment 5 rjoseph 2016-10-24 12:41:11 UTC
Yes, you are right: the error is only seen with an existing file and with stat-prefetch on.

The initial analysis shows that if no explicit ACL is set on the file, md-cache will not fetch the ACL-related extended attributes, and thus the posix-acl xlator logs this error. I will do some more analysis and post the findings here.
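A simple way to corroborate that on a test setup (hypothetical volume name testvol; shown only as an illustration, not as a recommended workaround) is to toggle md-cache and watch whether the messages stop:

gluster volume set testvol performance.stat-prefetch off   # open an existing file with no ACL; the messages should stop
gluster volume set testvol performance.stat-prefetch on    # restore the default; the messages come back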

Comment 6 rjoseph 2016-11-04 09:29:59 UTC
As mentioned in comment 5, the problem is seen only when stat-prefetch is on and the cached file does not have any ACL set. The following patch has been posted upstream to prevent the unnecessary log messages:

http://review.gluster.org/15769/

Comment 7 rjoseph 2016-11-08 12:11:16 UTC
Downstream patch: https://code.engineering.redhat.com/gerrit/89419

Comment 10 Atin Mukherjee 2016-11-10 03:59:14 UTC
upstream mainline : http://review.gluster.org/15769
upstream 3.9 : http://review.gluster.org/15801
downstream patch : https://code.engineering.redhat.com/gerrit/#/c/89419/

Comment 11 rjoseph 2016-11-10 04:02:19 UTC
Upstream 3.8 patch: http://review.gluster.org/15799
Upstream 3.7 patch: http://review.gluster.org/15798

Comment 16 Alok 2016-11-17 09:02:49 UTC
acked+ for the accelerated fix. Please ensure that the fix makes it to 3.2 as well.

Comment 18 Byreddy 2016-11-17 10:03:14 UTC
I am able to reproduce this issue on the 3.1.3 build using the steps mentioned in the description section of this bug.

I repeated the same steps using the RHGS 3.2 build (glusterfs-3.8.4-5.el6rhs.x86_64) and did not see the reported issue; the fix is working well.

This testing was done on a RHEL 6 environment.

Comment 20 Byreddy 2016-11-17 11:51:21 UTC
(In reply to Byreddy from comment #18)
> I am able to reproduce this issue on the 3.1.3 build using the steps
> mentioned in the description section of this bug.
> 
> I repeated the same steps using the RHGS 3.2 build
> (glusterfs-3.8.4-5.el6rhs.x86_64) and did not see the reported issue;
> the fix is working well.
> 
> This testing was done on a RHEL 6 environment.

I tested the same thing on RHEL 7 as well, using the glusterfs-3.8.4-5.el7rhgs.x86_64 bits, and it is working fine.

I will test with the hotfix build provided and update the results here.

Comment 34 errata-xmlrpc 2017-03-23 06:12:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

