Bug 764289 (GLUSTER-2557) - Quota:limit-usage fails glusterd
Summary: Quota:limit-usage fails glusterd
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-2557
Product: GlusterFS
Classification: Community
Component: quota
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Junaid
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-03-18 10:37 UTC by Saurabh
Modified: 2015-12-01 16:45 UTC
CC: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Saurabh 2011-03-18 10:37:01 UTC
This time I have a distribute volume with two bricks on the same node. The volume is not yet started, quota is enabled, and glusterd crashes while trying to set limit-usage.

Logs:


gluster> volume info dist2

Volume Name: dist2
Type: Distribute
Status: Created
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.1.12.135:/mnt/dist2-a
Brick2: 10.1.12.135:/mnt/dist2-b
Options Reconfigured:
features.limit-usage: /dist1:2MB
monitor.xtime-marker: on
features.quota: on
gluster> 
gluster> volume quota dist1 remove 
Usage: volume quota <VOLNAME> <enable|disable|limit-usage|list|remove> [args] [path]
gluster> volume quota dist1 remove /dist1

gluster> volume quota dist2 limit-usage /dist2 2MB
gluster> volume start dist2
Connection failed. Please check if gluster daemon is operational.
[root@centos-qa-client-3 sbin]# 


##################bt of the core#############


#0  0x0000003a8a630265 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x0000003a8a630265 in raise () from /lib64/libc.so.6
#1  0x0000003a8a631d10 in abort () from /lib64/libc.so.6
#2  0x0000003a8a66a84b in __libc_message () from /lib64/libc.so.6
#3  0x0000003a8a67230f in _int_free () from /lib64/libc.so.6
#4  0x0000003a8a67276b in free () from /lib64/libc.so.6
#5  0x00002b5ca7844a04 in __gf_free (free_ptr=0x78ef5c0) at mem-pool.c:259
#6  0x00002b5ca781ba1b in data_destroy (data=0x78ef5e0) at dict.c:140
#7  0x00002b5ca781c3c1 in _dict_set (this=0x78ee750, key=<value optimized out>, value=0x78f1360) at dict.c:245
#8  dict_set (this=0x78ee750, key=<value optimized out>, value=0x78f1360) at dict.c:300
#9  0x00002aaaaaae2d21 in glusterd_quota_limit_usage (volinfo=0x78eec60, dict=0x78eb9d0, op_errstr=0x7ffff6f1a1f8) at glusterd-op-sm.c:4519
#10 0x00002aaaaaae3b64 in glusterd_op_quota (dict=0x78eb9d0, op_errstr=0x7ffff6f1a1f8) at glusterd-op-sm.c:4626
#11 0x00002aaaaaaed9b3 in glusterd_op_stage_validate (op=<value optimized out>, dict=0x78eb9d0, op_errstr=0x7ffff6f1a1f8, rsp_dict=0xffffffffffffffff)
    at glusterd-op-sm.c:6707
#12 0x00002aaaaaaeed86 in glusterd_op_ac_send_stage_op (event=<value optimized out>, ctx=<value optimized out>) at glusterd-op-sm.c:5869
#13 0x00002aaaaaadb3ef in glusterd_op_sm () at glusterd-op-sm.c:7589
#14 0x00002aaaaaac98a2 in glusterd_handle_quota (req=0x2aaaaad25024) at glusterd-handler.c:1760
#15 0x00002b5ca7a7bfcc in rpcsvc_handle_rpc_call (svc=0x78e2060, trans=<value optimized out>, msg=0x7e68040) at rpcsvc.c:480
#16 0x00002b5ca7a7c1cc in rpcsvc_notify (trans=0x78ecb70, mydata=0x735, event=<value optimized out>, data=0x7e68040) at rpcsvc.c:576
#17 0x00002b5ca7a7d0f7 in rpc_transport_notify (this=0x735, event=RPC_TRANSPORT_MSG_SENT, data=0xffffffffffffffff) at rpc-transport.c:899
#18 0x00002aaaaadd8f7f in socket_event_poll_in (this=0x78ecb70) at socket.c:1641
#19 0x00002aaaaadd9128 in socket_event_handler (fd=<value optimized out>, idx=3, data=0x78ecb70, poll_in=1, poll_out=0, poll_err=0) at socket.c:1756
#20 0x00002b5ca7843a51 in event_dispatch_epoll_handler (event_pool=0x78e0360) at event.c:794
#21 event_dispatch_epoll (event_pool=0x78e0360) at event.c:856
#22 0x0000000000405218 in main (argc=1, argv=0x7ffff6f1aad8) at glusterfsd.c:1476
(gdb)
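
Frames #0 through #7 show glibc aborting inside free() after data_destroy() released the previous value of a key during _dict_set(); the __libc_message()/_int_free() pair is the abort glibc raises for an invalid or double free. The following standalone sketch (plain C, not GlusterFS code; all names are made up for illustration) shows that value-replacement pattern and, in the comments, the ownership mistake that would produce exactly this abort.

/* Minimal standalone sketch of the failure class the backtrace points at:
 * replacing an existing value frees the old pointer, and if that pointer
 * was not obtained from the matching allocator (or was freed already),
 * glibc's _int_free() raises the abort seen in frames #0-#4.
 * The value-ownership assumption here is mine, not taken from the patch. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct entry {
    char *key;
    char *val;   /* invariant: always a heap copy owned by the entry */
};

/* Replace the value for an entry, freeing the previous copy.
 * This mirrors the "destroy old data on overwrite" step in _dict_set(). */
static void entry_set(struct entry *e, const char *val)
{
    char *copy = strdup(val);     /* own our copy, so free() below stays valid */
    if (!copy)
        abort();
    free(e->val);                 /* safe only because val is always heap-owned (or NULL) */
    e->val = copy;
}

int main(void)
{
    struct entry e = { .key = strdup("features.limit-usage"), .val = NULL };

    entry_set(&e, "/dist1:2MB");  /* first limit */
    entry_set(&e, "/dist2:2MB");  /* overwrite: old heap copy is freed cleanly */

    /* If the first value had instead been stored as a pointer the table does
     * not own, e.g.  e.val = (char *)"/dist1:2MB";  (a string literal), the
     * overwrite above would hand a non-heap pointer to free() and glibc
     * would abort exactly as in the core:
     * raise -> abort -> __libc_message -> _int_free.                        */

    printf("%s = %s\n", e.key, e.val);
    free(e.key);
    free(e.val);
    return 0;
}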

Comment 1 Junaid 2011-03-30 09:37:18 UTC
Fixed as part of patch http://patches.gluster.com/patch/6619/.
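
The patch itself is not quoted in this bug, so the sketch below is only an illustration of the kind of ownership handoff that avoids this class of invalid free, not the actual change: the caller builds the combined limit string on the heap and transfers ownership to the store, which is then the only code that ever frees it. All names are hypothetical.

/* Hypothetical ownership-handoff sketch (not the actual GlusterFS patch):
 * the caller builds the combined "path:limit" string on the heap and hands
 * ownership to the store, so a later overwrite never frees memory the
 * store does not own.                                                     */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct limit_store {
    char *limits;   /* heap string owned by the store, e.g. "/dist1:2MB,/dist2:2MB" */
};

/* Take ownership of 'val' (must be heap-allocated by the caller). */
static void store_take(struct limit_store *s, char *val)
{
    free(s->limits);        /* always a pointer we own, or NULL */
    s->limits = val;
}

/* Append one "path:limit" entry, building a fresh heap string. */
static int limit_add(struct limit_store *s, const char *path, const char *limit)
{
    const char *old = s->limits ? s->limits : "";
    const char *sep = s->limits ? "," : "";
    size_t n = strlen(old) + strlen(sep) + strlen(path) + 1 + strlen(limit) + 1;
    char *combined = malloc(n);
    if (!combined)
        return -1;
    snprintf(combined, n, "%s%s%s:%s", old, sep, path, limit);
    store_take(s, combined);   /* store now owns 'combined'; caller must not free it */
    return 0;
}

int main(void)
{
    struct limit_store s = { NULL };
    limit_add(&s, "/dist1", "2MB");
    limit_add(&s, "/dist2", "2MB");
    printf("features.limit-usage: %s\n", s.limits);
    free(s.limits);
    return 0;
}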

Comment 2 Saurabh 2011-03-31 07:22:09 UTC
The crash is no longer seen, and the limit can be set before starting the volume.

