Created attachment 1028678 [details]
coredump of the brick

Description of problem:
On one of the RHGS clusters I have seen a coredump of glusterfsd. The crash was seen after the quota limit-usage command was used. The limit-usage command sets a usage limit on a volume or directory.

Version-Release number of selected component (if applicable):
glusterfs-3.7.0-2.el6rhs.x86_64
nfs-ganesha-2.2.0-0.el6.x86_64

How reproducible:
Seen only on one cluster

Steps to Reproduce:
1. Create a volume of 6x2 type and start it
2. Enable quota
3. Execute the command: gluster volume quota <volname> limit-usage / <size>

Actual results:
Step 3 failed and all bricks dumped core.

(gdb) bt
#0  upcall_cache_invalidate (frame=0x7f65dcd150ec, this=0x7f65cc012620, client=0x0, inode=0x7f65b9f8c06c, flags=32, stbuf=0x0, p_stbuf=0x0, oldp_stbuf=0x0) at upcall-internal.c:496
#1  0x00007f65d1b0db42 in up_lookup_cbk (frame=0x7f65dcd150ec, cookie=<value optimized out>, this=0x7f65cc012620, op_ret=0, op_errno=22, inode=0x7f65b9f8c06c, stbuf=0x7f659a5e4a40, xattr=0x7f65dc70fb34, postparent=0x7f659a5e49d0) at upcall.c:767
#2  0x00007f65d1d1ceb3 in pl_lookup_cbk (frame=0x7f65dcd115cc, cookie=<value optimized out>, this=<value optimized out>, op_ret=0, op_errno=22, inode=0x7f65b9f8c06c, buf=0x7f659a5e4a40, xdata=0x7f65dc70fb34, postparent=0x7f659a5e49d0) at posix.c:2036
#3  0x00007f65d1f36d0e in posix_acl_lookup_cbk (frame=0x7f65dcd1712c, cookie=<value optimized out>, this=0x7f65cc00ff30, op_ret=0, op_errno=22, inode=0x7f65b9f8c06c, buf=0x7f659a5e4a40, xattr=0x7f65dc70fb34, postparent=0x7f659a5e49d0) at posix-acl.c:806
#4  0x00007f65d2142cc9 in br_stub_lookup_cbk (frame=0x7f65dcd12948, cookie=<value optimized out>, this=<value optimized out>, op_ret=0, op_errno=22, inode=0x7f65b9f8c06c, stbuf=0x7f659a5e4a40, xattr=0x7f65dc70fb34, postparent=0x7f659a5e49d0) at bit-rot-stub.c:1639
#5  0x00007f65d2a0dac3 in ctr_lookup_cbk (frame=0x7f65dcd1564c, cookie=<value optimized out>, this=<value optimized out>, op_ret=0, op_errno=22, inode=0x7f65b9f8c06c, buf=0x7f659a5e4a40, dict=0x7f65dc70fb34, postparent=0x7f659a5e49d0) at changetimerecorder.c:259
#6  0x00007f65d2e4de68 in posix_lookup (frame=0x7f65dcd11474, this=<value optimized out>, loc=0x7f65dc7a8808, xdata=<value optimized out>) at posix.c:208
#7  0x00007f65dde8266d in default_lookup (frame=0x7f65dcd11474, this=0x7f65cc008ed0, loc=0x7f65dc7a8808, xdata=0x7f65dc70fcd8) at defaults.c:2139
#8  0x00007f65d2a13038 in ctr_lookup (frame=0x7f65dcd1564c, this=0x7f65cc00a4e0, loc=0x7f65dc7a8808, xdata=0x7f65dc70fcd8) at changetimerecorder.c:310
#9  0x00007f65dde8266d in default_lookup (frame=0x7f65dcd1564c, this=0x7f65cc00cbe0, loc=0x7f65dc7a8808, xdata=0x7f65dc70fcd8) at defaults.c:2139
#10 0x00007f65d2141b69 in br_stub_lookup (frame=<value optimized out>, this=0x7f65cc00ead0, loc=0x7f65dc7a8808, xdata=0x7f65dc70fcd8) at bit-rot-stub.c:1690
#11 0x00007f65d1f35832 in posix_acl_lookup (frame=0x7f65dcd1712c, this=0x7f65cc00ff30, loc=0x7f65dc7a8808, xattr=<value optimized out>) at posix-acl.c:858
#12 0x00007f65d1d1c642 in pl_lookup (frame=0x7f65dcd115cc, this=0x7f65cc0112b0, loc=0x7f65dc7a8808, xdata=0x7f65dc70fcd8) at posix.c:2080
#13 0x00007f65d1b0a0b2 in up_lookup (frame=0x7f65dcd150ec, this=0x7f65cc012620, loc=0x7f65dc7a8808, xattr_req=0x7f65dc70fcd8) at upcall.c:793
#14 0x00007f65dde84c9c in default_lookup_resume (frame=0x7f65dcd156f8, this=0x7f65cc013a70, loc=0x7f65dc7a8808, xdata=0x7f65dc70fcd8) at defaults.c:1694
#15 0x00007f65ddea1b60 in call_resume (stub=0x7f65dc7a87c8) at call-stub.c:2576
#16 0x00007f65d1900398 in iot_worker (data=0x7f65cc04e7d0) at io-threads.c:214
#17 0x00000036e2e079d1 in start_thread () from /lib64/libpthread.so.0
#18 0x00000036e2ae88fd in clone () from /lib64/libc.so.6

Expected results:
The limit-usage command should succeed and there should be no coredump.

Additional info:
The bt looks similar to that of bug 1221457. Please confirm whether nfs-ganesha was configured and running with ACLs enabled on your system when you saw this issue. If yes, as stated before, there was an issue with ACLs at the time. The ACLs feature has been merged (with the above issues fixed) into the latest nfs-ganesha upstream. Please check whether you re-hit this issue with those changes in. Thanks!
No, ACLs were disabled.
This issue is always reproducible. In 'upcall_cache_invalidate', we refer to 'frame->root->client' to get the client_uid and process the cache-invalidation. But it looks like a few server-side internally generated fops, such as 'quota/marker', will not have any client associated with the frame. Hence we need to check that the client is valid before processing the upcall cache-invalidation.
Enabled quota on a ganesha setup and set limit-usage. Didn't see any glusterfsd crashes. Hence marking this bug verified on build glusterfs-3.7.1-6.el6rhs.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1495.html