+++ This bug was initially created as a clone of Bug #1599275 +++

Description of problem:
Default ACL cannot be removed.

Version-Release number of selected component (if applicable):
glusterfs 3.12.6-1.el7 from the centos-gluster312 repository

How reproducible:
Always

Steps to Reproduce:
1. Create a new directory and set a default ACL.

$ mkdir test
$ setfacl -m d:g::rwx test
$ getfacl test
# file: test
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
default:user::rwx
default:group::rwx
default:other::r-x

2. Remove the default ACL. The command completes without error, but the default ACL is not removed.

$ setfacl -k test

Actual results:
$ getfacl test
# file: test
# owner: root
# group: root
user::rwx
group::r-x
other::r-x
default:user::rwx
default:group::rwx
default:other::r-x

Expected results:
$ getfacl test
# file: test
# owner: root
# group: root
user::rwx
group::r-x
other::r-x

Additional info:
I have a replicated volume with 2 nodes, and the operation was done from a fuse client.

Volume Name: www
Type: Replicate
Volume ID: 797ded04-3a3b-497a-a16b-15a75f7e1550
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: fs21.localdomain:/glusterfs/www/brick1/brick
Brick2: fs22.localdomain:/glusterfs/www/brick1/brick
Options Reconfigured:
changelog.changelog: on
geo-replication.indexing: on
performance.client-io-threads: off
network.ping-timeout: 10
server.manage-gids: on
storage.build-pgfid: on
transport.address-family: inet
nfs.disable: on
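To rule out a purely client-side display problem, the ACL can also be inspected directly on one of the bricks after running "setfacl -k" on the fuse client. This is only a sketch: it assumes the brick backend filesystem stores POSIX ACLs natively and that the "test" directory sits directly under the brick root shown in the volume info above.

# On a storage node, e.g. fs21.localdomain (path is an assumption):
$ getfacl /glusterfs/www/brick1/brick/test

If the default ACL entries still appear here, the REMOVEXATTR never reached the brick, which matches the md-cache analysis below.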
The same problem exists in version 4.1.5. After some investigation, I found that the cause of the problem is md-cache. In md-cache.c, mdc_removexattr() returns without processing REMOVEXATTR when the xattr key exists in the cache, so existing keys (those eligible for caching) are never removed. I also found a minor problem in is_mdc_key_satisfied(), which prints many "doesn't satisfy caching requirements" trace messages even when the key is eligible for caching. I have confirmed that default ACLs are removed as expected if I disable md-cache by setting performance.md-cache-pass-through=on, as shown below.
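For reference, one way to apply the workaround mentioned above is via the gluster CLI (volume name taken from the report; the pass-through option is available in GlusterFS 4.0 and later):

# Put md-cache into pass-through mode, effectively disabling it:
$ gluster volume set www performance.md-cache-pass-through on
# To re-enable md-cache later (off is the default):
$ gluster volume set www performance.md-cache-pass-through off

Note that disabling md-cache trades metadata-caching performance for correct xattr removal, so it is only a stopgap until the fix is backported.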
The problem seems to be resolved by commit 36e2ec3c88eba7a1bcd8aa6f64e4672349ff1d0c on the master branch, but not on the release-4.1 and release-5 branches. Please consider backporting the fix to release-5.
Hi Homma, we are focusing on glusterfs-6.0 and beyond for further validation of bugs, as these releases include many stability fixes. Please upgrade to glusterfs-6.x and we would be happy to help further. https://review.gluster.org/#/c/glusterfs/+/21411/