Bug 824444

Summary: Crash in glusterd after enabling and disabling quota
Product: [Community] GlusterFS
Reporter: shylesh <shmohan>
Component: glusterd
Assignee: Raghavendra Bhat <rabhat>
Status: CLOSED WORKSFORME
QA Contact:
Severity: high
Docs Contact:
Priority: medium
Version: pre-release
CC: amarts, gluster-bugs
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-07-13 02:18:30 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Attachments: quota glusterd crash (flags: none)

Description shylesh 2012-05-23 09:28:55 EDT
Created attachment 586356 [details]
quota glusterd crash

Description of problem:
Enabled quota, set limit-usage, and ran a rebalance; after some time, disabled quota, and glusterd crashed.

Version-Release number of selected component (if applicable):
3.3.0qa42

How reproducible:


Steps to Reproduce:
1. Created a distributed-replicate volume
2. Enabled quota and filled up some files
3. Added a brick and ran a rebalance
4. Disabled quota (a CLI sketch of these steps follows below)
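
A minimal CLI sketch of the sequence above, assuming hypothetical hosts (server1, server2), brick paths, and volume name; the gluster subcommands shown are standard CLI, but exact syntax may differ slightly on 3.3.0qa42:

# 1. Create and start a 2x2 distributed-replicate volume
gluster volume create testvol replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2
gluster volume start testvol

# 2. Enable quota, set a limit, and write some data through a mount
gluster volume quota testvol enable
gluster volume quota testvol limit-usage / 10GB
mount -t glusterfs server1:/testvol /mnt/testvol
dd if=/dev/zero of=/mnt/testvol/file1 bs=1M count=512

# 3. Expand the volume and rebalance
gluster volume add-brick testvol server1:/bricks/b3 server2:/bricks/b3
gluster volume rebalance testvol start

# 4. Disabling quota at this point is where glusterd crashed
gluster volume quota testvol disable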
  
Actual results:
glusterd crashed

Expected results:


Additional info:
Program terminated with signal 6, Aborted.
#0  0x00000033c7432885 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.47.el6_2.9.x86_64 libgcc-4.4.6-3.el6.x86_64 openssl-1.0.0-20.el6_2.4.x86_64 zlib-1.2.3-27.el6.x86_64
(gdb) bt
#0  0x00000033c7432885 in raise () from /lib64/libc.so.6
#1  0x00000033c7434065 in abort () from /lib64/libc.so.6
#2  0x00000033c746f977 in __libc_message () from /lib64/libc.so.6
#3  0x00000033c7475296 in malloc_printerr () from /lib64/libc.so.6
#4  0x00007f04e04c3fbc in __gf_free (free_ptr=0x19d4db0) at mem-pool.c:258
#5  0x00007f04e0487dd7 in data_destroy (data=0x7f04ded51564) at dict.c:135
#6  0x00007f04e0488af5 in data_unref (this=0x7f04ded51564) at dict.c:470
#7  0x00007f04e04886ef in dict_del (this=0x7f04def30908, key=0x7f04dcd2b127 "features.limit-usage") at dict.c:355
#8  0x00007f04dccf9fb6 in glusterd_quota_disable (volinfo=0x19d8bf0, op_errstr=0x7fff8a3bf660) at glusterd-quota.c:514
#9  0x00007f04dccfad0b in glusterd_op_quota (dict=0x7f04def31094, op_errstr=0x7fff8a3bf660) at glusterd-quota.c:716
#10 0x00007f04dccbfd9e in glusterd_op_commit_perform (op=GD_OP_QUOTA, dict=0x7f04def31094, op_errstr=0x7fff8a3bf660, 
    rsp_dict=0x7f04def318c8) at glusterd-op-sm.c:3009
#11 0x00007f04dccbf4bc in glusterd_op_ac_commit_op (event=0x1a03380, ctx=0x19e7bd0) at glusterd-op-sm.c:2775
#12 0x00007f04dccc42da in glusterd_op_sm () at glusterd-op-sm.c:4594
#13 0x00007f04dccac011 in glusterd_handle_commit_op (req=0x7f04dcc18910) at glusterd-handler.c:656
#14 0x00007f04e02610b7 in rpcsvc_handle_rpc_call (svc=0x1963b80, trans=0x196c8e0, msg=0x197e020) at rpcsvc.c:513
#15 0x00007f04e026146f in rpcsvc_notify (trans=0x196c8e0, mydata=0x1963b80, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x197e020)
    at rpcsvc.c:612
#16 0x00007f04e0266f30 in rpc_transport_notify (this=0x196c8e0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x197e020)
    at rpc-transport.c:489
#17 0x00007f04dca0b28c in socket_event_poll_in (this=0x196c8e0) at socket.c:1677
#18 0x00007f04dca0b810 in socket_event_handler (fd=6, idx=1, data=0x196c8e0, poll_in=1, poll_out=0, poll_err=0) at socket.c:1792
#19 0x00007f04e04c2e4c in event_dispatch_epoll_handler (event_pool=0x195ed20, events=0x196bcc0, i=0) at event.c:785
#20 0x00007f04e04c306f in event_dispatch_epoll (event_pool=0x195ed20) at event.c:847
#21 0x00007f04e04c33fa in event_dispatch (event_pool=0x195ed20) at event.c:947
#22 0x0000000000408426 in main (argc=2, argv=0x7fff8a3bfca8) at glusterfsd.c:1674


Attached the core
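
In case the backtrace needs to be regenerated from the attached core, a rough gdb sketch, assuming the glusterd binary is at /usr/sbin/glusterd and using the debuginfo packages named in the gdb message above (the core file name is a placeholder):

# Install matching debuginfo, as suggested by gdb
debuginfo-install glibc-2.12-1.47.el6_2.9.x86_64 libgcc-4.4.6-3.el6.x86_64 \
    openssl-1.0.0-20.el6_2.4.x86_64 zlib-1.2.3-27.el6.x86_64

# Load the core against the binary and print the stack of the crashed thread
gdb /usr/sbin/glusterd /path/to/core
(gdb) bt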
Comment 1 Amar Tumballi 2012-07-11 06:14:12 EDT
need to check if this happens with 3.3.0
Comment 2 shylesh 2012-07-12 04:52:14 EDT
(In reply to comment #1)
> need to check if this happens with 3.3.0

This issue is not reproducible on latest master