Bug 1640109 - Default ACL cannot be removed
Summary: Default ACL cannot be removed
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: md-cache
Version: 4.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Vijay Bellur
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-10-17 11:25 UTC by homma
Modified: 2019-06-18 10:02 UTC
CC: 4 users

Fixed In Version: glusterfs-6.x
Clone Of: 1599275
Environment:
Last Closed: 2019-06-18 10:02:12 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1599275 0 unspecified CLOSED Default ACL cannot be removed 2021-02-22 00:41:40 UTC


Description homma 2018-10-17 11:25:48 UTC
+++ This bug was initially created as a clone of Bug #1599275 +++

Description of problem:
Default ACL cannot be removed.

Version-Release number of selected component (if applicable):
glusterfs 3.12.6-1.el7 from centos-gluster312 repository

How reproducible:
Always

Steps to Reproduce:

1. Create a new directory and set default ACL.

$ mkdir test
$ setfacl -m d:g::rwx test
$ getfacl test
# file: test
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
default:user::rwx
default:group::rwx
default:other::r-x

2. Remove the default ACL. The command completes without error, but the default ACL is not removed.

$ setfacl -k test

Actual results:

$ getfacl test
# file: test
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
default:user::rwx
default:group::rwx
default:other::r-x

Expected results:

$ getfacl test
# file: test
# owner: root
# group: root
user::rwx
group::r-x
other::r-x

Additional info:

I have a replicated volume with 2 nodes, and the operations were performed from a FUSE client.

Volume Name: www
Type: Replicate
Volume ID: 797ded04-3a3b-497a-a16b-15a75f7e1550
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: fs21.localdomain:/glusterfs/www/brick1/brick
Brick2: fs22.localdomain:/glusterfs/www/brick1/brick
Options Reconfigured:
changelog.changelog: on
geo-replication.indexing: on
performance.client-io-threads: off
network.ping-timeout: 10
server.manage-gids: on
storage.build-pgfid: on
transport.address-family: inet
nfs.disable: on

Comment 1 homma 2018-10-17 12:29:11 UTC
The same problem exists for version 4.1.5.

After some investigation, I found that the cause of the problem is md-cache.

In md-cache.c, mdc_removexattr() returns early, without passing the REMOVEXATTR fop down, when the xattr key exists in the cache. As a result, keys that are eligible for caching can never be removed.

I also found a minor problem in is_mdc_key_satisfied(), which prints many "doesn't satisfy caching requirements" trace messages even when the key is eligible for caching.

I have confirmed that default ACLs are removed as expected if I disable md-cache by setting performance.md-cache-pass-through=on.
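The md-cache bypass mentioned above can be applied per volume with the gluster CLI. A sketch, not from the report, using the volume name "www" from this report:

```shell
# Workaround sketch: route fops past md-cache until a fixed
# release is installed ("www" is the volume in this report).
gluster volume set www performance.md-cache-pass-through on

# After upgrading to a release containing the fix, restore the default:
gluster volume reset www performance.md-cache-pass-through
```

Note that bypassing md-cache trades the cached-metadata performance benefit for correct xattr removal, so it is a temporary measure rather than a fix.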

Comment 2 homma 2019-03-29 09:10:49 UTC
The problem appears to be resolved by commit 36e2ec3c88eba7a1bcd8aa6f64e4672349ff1d0c on the master branch, but not on the release-4.1 and release-5 branches.
Please consider backporting the fix to release 5.

Comment 3 Amar Tumballi 2019-06-18 10:02:12 UTC
Hi Homma, we are focusing on glusterfs-6.0 and beyond for further validation of bugs, as those releases contain many stability fixes. Please upgrade to glusterfs-6.x and we would be happy to help further.

https://review.gluster.org/#/c/glusterfs/+/21411/

