Bug 1224183 - quota: glusterfsd crash once quota limit-usage is executed
Summary: quota: glusterfsd crash once quota limit-usage is executed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: nfs-ganesha
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Soumya Koduri
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On:
Blocks: 1202842
 
Reported: 2015-05-22 10:08 UTC by Saurabh
Modified: 2016-01-19 06:14 UTC
CC List: 9 users

Fixed In Version: glusterfs-3.7.1-2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-07-29 04:49:14 UTC
Embargoed:


Attachments
coredump of the brick (3.69 MB, application/x-xz)
2015-05-22 10:08 UTC, Saurabh


Links
System: Red Hat Product Errata    ID: RHSA-2015:1495    Status: SHIPPED_LIVE
Summary: Important: Red Hat Gluster Storage 3.1 update
Last Updated: 2015-07-29 08:26:26 UTC

Description Saurabh 2015-05-22 10:08:58 UTC
Created attachment 1028678 [details]
coredump of the brick

Description of problem:
On one of the RHGS clusters I have seen a coredump of glusterfsd. The crash was seen once the quota limit-usage command was executed. The limit-usage command is used to set a usage limit on a volume or on a directory within it.

Version-Release number of selected component (if applicable):
glusterfs-3.7.0-2.el6rhs.x86_64
nfs-ganesha-2.2.0-0.el6.x86_64

How reproducible:
Issue seen only on one cluster.

Steps to Reproduce:
1. Create a volume of type 6x2 and start it.
2. Enable quota on the volume.
3. Execute the command: gluster volume quota <volname> limit-usage / <size>
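
For example, on a hypothetical volume named testvol with a 10 GB limit (the volume name and size here are illustrative, not taken from the report):

gluster volume quota testvol limit-usage / 10GB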

Actual results:
Step 3 failed and glusterfsd dumped core on all bricks. Backtrace:
(gdb) 
#0  upcall_cache_invalidate (frame=0x7f65dcd150ec, this=0x7f65cc012620, client=0x0, inode=0x7f65b9f8c06c, flags=32, stbuf=0x0, p_stbuf=0x0, oldp_stbuf=0x0) at upcall-internal.c:496
#1  0x00007f65d1b0db42 in up_lookup_cbk (frame=0x7f65dcd150ec, cookie=<value optimized out>, this=0x7f65cc012620, op_ret=0, op_errno=22, inode=0x7f65b9f8c06c, stbuf=0x7f659a5e4a40, xattr=0x7f65dc70fb34, 
    postparent=0x7f659a5e49d0) at upcall.c:767
#2  0x00007f65d1d1ceb3 in pl_lookup_cbk (frame=0x7f65dcd115cc, cookie=<value optimized out>, this=<value optimized out>, op_ret=0, op_errno=22, inode=0x7f65b9f8c06c, buf=0x7f659a5e4a40, 
    xdata=0x7f65dc70fb34, postparent=0x7f659a5e49d0) at posix.c:2036
#3  0x00007f65d1f36d0e in posix_acl_lookup_cbk (frame=0x7f65dcd1712c, cookie=<value optimized out>, this=0x7f65cc00ff30, op_ret=0, op_errno=22, inode=0x7f65b9f8c06c, buf=0x7f659a5e4a40, 
    xattr=0x7f65dc70fb34, postparent=0x7f659a5e49d0) at posix-acl.c:806
#4  0x00007f65d2142cc9 in br_stub_lookup_cbk (frame=0x7f65dcd12948, cookie=<value optimized out>, this=<value optimized out>, op_ret=0, op_errno=22, inode=0x7f65b9f8c06c, stbuf=0x7f659a5e4a40, 
    xattr=0x7f65dc70fb34, postparent=0x7f659a5e49d0) at bit-rot-stub.c:1639
#5  0x00007f65d2a0dac3 in ctr_lookup_cbk (frame=0x7f65dcd1564c, cookie=<value optimized out>, this=<value optimized out>, op_ret=0, op_errno=22, inode=0x7f65b9f8c06c, buf=0x7f659a5e4a40, 
    dict=0x7f65dc70fb34, postparent=0x7f659a5e49d0) at changetimerecorder.c:259
#6  0x00007f65d2e4de68 in posix_lookup (frame=0x7f65dcd11474, this=<value optimized out>, loc=0x7f65dc7a8808, xdata=<value optimized out>) at posix.c:208
#7  0x00007f65dde8266d in default_lookup (frame=0x7f65dcd11474, this=0x7f65cc008ed0, loc=0x7f65dc7a8808, xdata=0x7f65dc70fcd8) at defaults.c:2139
#8  0x00007f65d2a13038 in ctr_lookup (frame=0x7f65dcd1564c, this=0x7f65cc00a4e0, loc=0x7f65dc7a8808, xdata=0x7f65dc70fcd8) at changetimerecorder.c:310
#9  0x00007f65dde8266d in default_lookup (frame=0x7f65dcd1564c, this=0x7f65cc00cbe0, loc=0x7f65dc7a8808, xdata=0x7f65dc70fcd8) at defaults.c:2139
#10 0x00007f65d2141b69 in br_stub_lookup (frame=<value optimized out>, this=0x7f65cc00ead0, loc=0x7f65dc7a8808, xdata=0x7f65dc70fcd8) at bit-rot-stub.c:1690
#11 0x00007f65d1f35832 in posix_acl_lookup (frame=0x7f65dcd1712c, this=0x7f65cc00ff30, loc=0x7f65dc7a8808, xattr=<value optimized out>) at posix-acl.c:858
#12 0x00007f65d1d1c642 in pl_lookup (frame=0x7f65dcd115cc, this=0x7f65cc0112b0, loc=0x7f65dc7a8808, xdata=0x7f65dc70fcd8) at posix.c:2080
#13 0x00007f65d1b0a0b2 in up_lookup (frame=0x7f65dcd150ec, this=0x7f65cc012620, loc=0x7f65dc7a8808, xattr_req=0x7f65dc70fcd8) at upcall.c:793
#14 0x00007f65dde84c9c in default_lookup_resume (frame=0x7f65dcd156f8, this=0x7f65cc013a70, loc=0x7f65dc7a8808, xdata=0x7f65dc70fcd8) at defaults.c:1694
#15 0x00007f65ddea1b60 in call_resume (stub=0x7f65dc7a87c8) at call-stub.c:2576
#16 0x00007f65d1900398 in iot_worker (data=0x7f65cc04e7d0) at io-threads.c:214
#17 0x00000036e2e079d1 in start_thread () from /lib64/libpthread.so.0
#18 0x00000036e2ae88fd in clone () from /lib64/libc.so.6

Expected results:
The limit-usage command should succeed and glusterfsd should not dump core.

Additional info:

Comment 2 Soumya Koduri 2015-06-01 08:54:48 UTC
The backtrace looks similar to that of bug 1221457.

Please confirm whether nfs-ganesha was configured and running with ACLs enabled on your system when you saw this issue.
If yes: as stated before, there was an issue with ACLs at that time. The ACLs feature has since been merged (with the above issues fixed) into the latest upstream nfs-ganesha. Please check whether you can still reproduce this issue with those changes in place. Thanks!

Comment 3 Saurabh 2015-06-02 04:44:15 UTC
No, ACLs were disabled.

Comment 4 Soumya Koduri 2015-06-02 07:17:41 UTC
This issue is always reproducible. In 'upcall_cache_invalidate', we dereference 'frame->root->client' to get the client_uid and process the cache invalidation. But it looks like some server-side internally generated fops, such as those from quota/marker, do not have any client associated with the frame. Hence we need to check that the client is valid before processing the upcall cache invalidation.
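
A minimal sketch of the kind of NULL-client guard described above (not the verbatim upstream patch; the signature is copied from frame #0 of the backtrace in the description, and the headers are the usual ones from the glusterfs source tree):

#include "xlator.h"        /* xlator_t, call_frame_t, inode_t */
#include "client_t.h"      /* client_t */
#include "iatt.h"          /* struct iatt */

void
upcall_cache_invalidate (call_frame_t *frame, xlator_t *this,
                         client_t *client, inode_t *inode, uint32_t flags,
                         struct iatt *stbuf, struct iatt *p_stbuf,
                         struct iatt *oldp_stbuf)
{
        /* Server-side internally generated fops (e.g. quota/marker
         * lookups) reach here with no client attached to the frame,
         * i.e. client == NULL, as frame #0 of the backtrace shows.
         * Bail out early instead of dereferencing the NULL pointer. */
        if (!client)
                return;

        /* ... existing cache-invalidation processing, which matches
         * client->client_uid against each registered upcall client ... */
}

With such a guard in place, lookups triggered by quota/marker simply skip upcall cache invalidation instead of crashing the brick process.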

Comment 6 Anil Shah 2015-07-02 06:12:28 UTC
Enabled quota on an nfs-ganesha setup and set limit-usage.
Did not see any glusterfsd crashes. Hence marking this bug verified on build glusterfs-3.7.1-6.el6rhs.

Comment 7 errata-xmlrpc 2015-07-29 04:49:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

