Description of problem:
=======================
While trying to reproduce one of the Goldman issues, hit this; the brick logs are filling up very fast and, in turn, filling /.

[2016-04-06 01:28:19.152827] E [posix-acl.c:359:posix_acl_ctx_get] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f352984a620] (--> /usr/lib64/glusterfs/3.7.1/xlator/features/access-control.so(posix_acl_ctx_get+0xa1)[0x7f35166a22f1] (--> /usr/lib64/glusterfs/3.7.1/xlator/features/access-control.so(__posix_acl_get+0x19)[0x7f35166a2489] (--> /usr/lib64/glusterfs/3.7.1/xlator/features/access-control.so(posix_acl_get+0x5c)[0x7f35166a251c] (--> /usr/lib64/glusterfs/3.7.1/xlator/features/access-control.so(posix_acl_lookup_cbk+0x1cd)[0x7f35166a637d] ))))) 0-vol1-access-control: inode ctx is NULL for 00000000-0000-0000-0000-000000000000
[2016-04-06 01:28:20.535181] E [posix-acl.c:359:posix_acl_ctx_get] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f352984a620] (--> /usr/lib64/glusterfs/3.7.1/xlator/features/access-control.so(posix_acl_ctx_get+0xa1)[0x7f35166a22f1] (--> /usr/lib64/glusterfs/3.7.1/xlator/features/access-control.so(__posix_acl_get+0x19)[0x7f35166a2489] (--> /usr/lib64/glusterfs/3.7.1/xlator/features/access-control.so(posix_acl_get+0x5c)[0x7f35166a251c] (--> /usr/lib64/glusterfs/3.7.1/xlator/features/access-control.so(posix_acl_lookup_cbk+0x1cd)[0x7f35166a637d] ))))) 0-vol1-access-control: inode ctx is NULL for 00000000-0000-0000-0000-000000000000
[2016-04-06 01:28:21.489531] E [posix-acl.c:359:posix_acl_ctx_get] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f352984a620] (--> /usr/lib64/glusterfs/3.7.1/xlator/features/access-control.so(posix_acl_ctx_get+0xa1)[0x7f35166a22f1] (--> /usr/lib64/glusterfs/3.7.1/xlator/features/access-control.so(__posix_acl_get+0x19)[0x7f35166a2489] (--> /usr/lib64/glusterfs/3.7.1/xlator/features/access-control.so(posix_acl_get+0x5c)[0x7f35166a251c] (--> /usr/lib64/glusterfs/3.7.1/xlator/features/access-control.so(posix_acl_lookup_cbk+0x1cd)[0x7f35166a637d] ))))) 0-vol1-access-control: inode ctx is NULL for 00000000-0000-0000-0000-000000000000

[root@dhcp42-134 bricks]# grep "posix-acl.c:359:posix_acl_ctx_get" rhs-brick* | wc -l
11776152

[root@dhcp42-134 bricks]# gluster v info vol1
Volume Name: vol1
Type: Distributed-Replicate
Volume ID: d98c7790-e80f-4479-b79d-d20c430381f3
Status: Started
Number of Bricks: 18 x 2 = 36
Transport-type: tcp
Bricks:
Brick1: dhcp42-134:/rhs/brick1/b1
Brick2: dhcp42-124:/rhs/brick1/b2
Brick3: dhcp42-131:/rhs/brick1/b3
Brick4: dhcp42-141:/rhs/brick1/b4
Brick5: dhcp42-134:/rhs/brick2/b5
Brick6: dhcp42-124:/rhs/brick2/b6
Brick7: dhcp42-131:/rhs/brick2/b7
Brick8: dhcp42-141:/rhs/brick2/b8
Brick9: dhcp42-134:/rhs/brick3/b9
Brick10: dhcp42-124:/rhs/brick3/b10
Brick11: dhcp42-131:/rhs/brick3/b11
Brick12: dhcp42-141:/rhs/brick3/b12
Brick13: dhcp42-134:/rhs/brick1/b13
Brick14: dhcp42-124:/rhs/brick1/b14
Brick15: dhcp42-131:/rhs/brick1/b15
Brick16: dhcp42-141:/rhs/brick1/b16
Brick17: dhcp42-134:/rhs/brick2/b17
Brick18: dhcp42-124:/rhs/brick2/b18
Brick19: dhcp42-131:/rhs/brick2/b19
Brick20: dhcp42-141:/rhs/brick2/b20
Brick21: dhcp42-134:/rhs/brick3/b21
Brick22: dhcp42-124:/rhs/brick3/b22
Brick23: dhcp42-131:/rhs/brick3/b23
Brick24: dhcp42-141:/rhs/brick3/b24
Brick25: dhcp42-134:/rhs/brick1/b25
Brick26: dhcp42-124:/rhs/brick1/b26
Brick27: dhcp42-131:/rhs/brick1/b27
Brick28: dhcp42-141:/rhs/brick1/b28
Brick29: dhcp42-134:/rhs/brick2/b29
Brick30: dhcp42-124:/rhs/brick2/b30
Brick31: dhcp42-131:/rhs/brick2/b31
Brick32: dhcp42-141:/rhs/brick2/b32
Brick33: dhcp42-134:/rhs/brick3/b33
Brick34: dhcp42-124:/rhs/brick3/b34
Brick35: dhcp42-131:/rhs/brick3/b35
Brick36: dhcp42-141:/rhs/brick3/b36
Options Reconfigured:
features.quota-deem-statfs: on
performance.stat-prefetch: off
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on
[root@dhcp42-134 bricks]#

Steps that are done:
====================
1. Created a 6x2 volume and NFS-mounted it on the clients; quota is enabled.
2. Created directories multiple levels deep and wide.
3. Added bricks (making it 18x2).
4. Created the same set of directories under the parent directory from multiple clients.
5. Started rebalance, with lookups running from two of the clients.

Version-Release number of selected component (if applicable):
=============================================================
[root@dhcp42-134 bricks]# gluster --version
glusterfs 3.7.1 built on Apr 2 2016 11:55:18
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@dhcp42-134 bricks]#
[root@dhcp42-134 bricks]# rpm -qa |grep gluster
glusterfs-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.x86_64
glusterfs-debuginfo-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.x86_64
glusterfs-server-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.x86_64
glusterfs-geo-replication-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.x86_64
glusterfs-api-devel-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.x86_64
glusterfs-libs-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.x86_64
glusterfs-rdma-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.x86_64
glusterfs-resource-agents-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.noarch
glusterfs-client-xlators-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.x86_64
glusterfs-cli-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.x86_64
glusterfs-fuse-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.x86_64
glusterfs-devel-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.x86_64
glusterfs-api-3.7.1-17.el6rhs.11.hotfix.sfdc01581565.x86_64
[root@dhcp42-134 bricks]#

How reproducible:
================
100%

Steps to Reproduce:
===================
As in the description.

Actual results:

Expected results:

Additional info:
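The steps above can be sketched as a rough runbook. Hostnames, brick paths, and mount points below are illustrative placeholders (a trimmed 6x2 layout modeled on the brick list in the description), not the exact commands that were run:

```sh
# 1. Create a 6x2 distributed-replicate volume across 4 hosts and enable quota
gluster volume create vol1 replica 2 \
    host1:/rhs/brick1/b1 host2:/rhs/brick1/b2 host3:/rhs/brick1/b3 host4:/rhs/brick1/b4 \
    host1:/rhs/brick2/b5 host2:/rhs/brick2/b6 host3:/rhs/brick2/b7 host4:/rhs/brick2/b8 \
    host1:/rhs/brick3/b9 host2:/rhs/brick3/b10 host3:/rhs/brick3/b11 host4:/rhs/brick3/b12
gluster volume start vol1
gluster volume quota vol1 enable

# 2. NFS-mount on the clients and create a deep and wide directory tree
mount -t nfs -o vers=3 host1:/vol1 /mnt/vol1
mkdir -p /mnt/vol1/parent/level{1..5}/dir{1..10}

# 3. Add bricks (repeat add-brick pairs until the volume is 18x2)
gluster volume add-brick vol1 replica 2 host1:/rhs/brick1/b13 host2:/rhs/brick1/b14

# 4. From multiple clients, recreate the same directory set under the parent
# 5. Start rebalance while lookups run from two of the clients
gluster volume rebalance vol1 start
ls -lR /mnt/vol1 > /dev/null    # lookup load on each client
```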
Bug that was attempted to be reproduced: https://bugzilla.redhat.com/show_bug.cgi?id=1323042
Patch submitted upstream:
mainline: http://review.gluster.org/#/c/13961/
release-3.7: http://review.gluster.org/#/c/13962/
Patch submitted: https://code.engineering.redhat.com/gerrit/71896
I could hit this issue on glusterfs-3.7.9-2 in the geo-replication brick logs. The patch mentioned in comment 5 is cherry-picked ("Change has been successfully cherry-picked as 14abb0c32e9ab9bd43c1476dcc3e2125f80ad08d by Raghavendra Gowdappa Apr 20 9:49 AM"), whereas the gluster version mentions:

[root@dhcp47-24 master]# gluster --version
glusterfs 3.7.9 built on Apr 19 2016 07:02:38
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@dhcp47-24 master]#

Moving this bug back to MODIFIED.

Log Snippet:
============
ssh%3A%2F%2Froot%4010.70.47.5%3Agluster%3A%2F%2F127.0.0.1%3Aslave.%2Fbricks%2Fbrick1%2Fmaster_brick2.gluster.log-20160424:[2016-04-22 10:34:50.660524] E [posix-acl.c:357:posix_acl_ctx_get] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1f2)[0x7f55e0f07902] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_ctx_get+0xa9)[0x7f55d1e632f9] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(__posix_acl_get+0x19)[0x7f55d1e63459] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_get+0x5c)[0x7f55d1e634ec] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x1cd)[0x7f55d1e672dd] ))))) 0-posix-acl-autoload: inode ctx is NULL for 00000000-0000-0000-0000-000000000000
ssh%3A%2F%2Froot%4010.70.47.5%3Agluster%3A%2F%2F127.0.0.1%3Aslave.%2Fbricks%2Fbrick1%2Fmaster_brick2.gluster.log-20160424:[2016-04-22 10:34:50.663416] E [posix-acl.c:357:posix_acl_ctx_get] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1f2)[0x7f55e0f07902] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_ctx_get+0xa9)[0x7f55d1e632f9] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(__posix_acl_get+0x19)[0x7f55d1e63459] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_get+0x5c)[0x7f55d1e634ec] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x1cd)[0x7f55d1e672dd] ))))) 0-posix-acl-autoload: inode ctx is NULL for 00000000-0000-0000-0000-000000000000
ssh%3A%2F%2Froot%4010.70.47.5%3Agluster%3A%2F%2F127.0.0.1%3Aslave.%2Fbricks%2Fbrick1%2Fmaster_brick2.gluster.log-20160424:[2016-04-22 10:34:50.667093] E [posix-acl.c:357:posix_acl_ctx_get] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1f2)[0x7f55e0f07902] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_ctx_get+0xa9)[0x7f55d1e632f9] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(__posix_acl_get+0x19)[0x7f55d1e63459] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_get+0x5c)[0x7f55d1e634ec] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x1cd)[0x7f55d1e672dd] ))))) 0-posix-acl-autoload: inode ctx is NULL for 00000000-0000-0000-0000-000000000000
ssh%3A%2F%2Froot%4010.70.47.5%3Agluster%3A%2F%2F127.0.0.1%3Aslave.%2Fbricks%2Fbrick1%2Fmaster_brick2.gluster.log-20160424:[2016-04-22 10:34:50.672591] E [posix-acl.c:357:posix_acl_ctx_get] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1f2)[0x7f55e0f07902] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_ctx_get+0xa9)[0x7f55d1e632f9] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(__posix_acl_get+0x19)[0x7f55d1e63459] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_get+0x5c)[0x7f55d1e634ec] (--> /usr/lib64/glusterfs/3.7.9/xlator/system/posix-acl.so(posix_acl_lookup_cbk+0x1cd)[0x7f55d1e672dd] ))))) 0-posix-acl-autoload: inode ctx is NULL for 00000000-0000-0000-0000-000000000000
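Given how fast these messages accumulate (over 11 million hits in the grep in the description), a small helper to count the flood per log file can be handy. A minimal sketch; the log directory is passed as an argument, and nothing here is specific to one setup:

```sh
# Count posix_acl_ctx_get "inode ctx is NULL" messages per log file.
# Matches only the function name, since the source line number differs
# across builds (posix-acl.c:359 on 3.7.1, posix-acl.c:357 on 3.7.9).
count_flood() {
    for f in "$1"/*.log*; do
        [ -e "$f" ] || continue
        printf '%s\t%s\n' "$(grep -c 'posix_acl_ctx_get' "$f")" "$f"
    done
}
```

For example, `count_flood /var/log/glusterfs/bricks` prints one line per brick log with its message count, which makes it easy to spot which bricks are flooding.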
These logs are flooded on a tiered volume as well (build: glusterfs-server-3.7.9-2.el7rhgs.x86_64):

[2016-04-29 09:55:12.856299] E [posix-acl.c:357:posix_acl_ctx_get] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f7b13df3b42] (--> /usr/lib64/glusterfs/3.7.9/xlator/features/access-control.so(posix_acl_ctx_get+0x88)[0x7f7b04c143c8] (--> /usr/lib64/glusterfs/3.7.9/xlator/features/access-control.so(__posix_acl_get+0x11)[0x7f7b04c14541] (--> /usr/lib64/glusterfs/3.7.9/xlator/features/access-control.so(posix_acl_get+0x5e)[0x7f7b04c1490e] (--> /usr/lib64/glusterfs/3.7.9/xlator/features/access-control.so(posix_acl_lookup_cbk+0x1e5)[0x7f7b04c18515] ))))) 0-ganesha-tier-access-control: inode ctx is NULL for 00000000-0000-0000-0000-000000000000
[2016-04-29 09:55:13.013036] E [posix-acl.c:357:posix_acl_ctx_get] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f7b13df3b42] (--> /usr/lib64/glusterfs/3.7.9/xlator/features/access-control.so(posix_acl_ctx_get+0x88)[0x7f7b04c143c8] (--> /usr/lib64/glusterfs/3.7.9/xlator/features/access-control.so(__posix_acl_get+0x11)[0x7f7b04c14541] (--> /usr/lib64/glusterfs/3.7.9/xlator/features/access-control.so(posix_acl_get+0x5e)[0x7f7b04c1490e] (--> /usr/lib64/glusterfs/3.7.9/xlator/features/access-control.so(posix_acl_lookup_cbk+0x1e5)[0x7f7b04c18515] ))))) 0-ganesha-tier-access-control: inode ctx is NULL for 00000000-0000-0000-0000-000000000000
The log messages reported in the bug are no longer seen in build glusterfs-3.7.9-4.el7rhgs.x86_64; they were consistently seen in previous builds. Marking the bug as verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1240