Bug 765400 (GLUSTER-3668) - [glusterfs-3.3.0qa11]: glusterd crashed when gluster volume status command was issued
Summary: [glusterfs-3.3.0qa11]: glusterd crashed when gluster volume status command was issued
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-3668
Product: GlusterFS
Classification: Community
Component: glusterd
Version: pre-release
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Raghavendra Bhat
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-09-29 12:30 UTC by Raghavendra Bhat
Modified: 2011-10-03 11:13 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: glusterfs-3.3.0qa13
Embargoed:


Attachments: none

Description Raghavendra Bhat 2011-09-29 12:30:38 UTC
glusterd segfaulted when gluster volume status <volname> command was issued with the following backtrace.


Core was generated by `glusterd'.
Program terminated with signal 6, Aborted.
#0  0x00000030b8e30265 in raise () from /lib64/libc.so.6
(gdb) bt
#0  0x00000030b8e30265 in raise () from /lib64/libc.so.6
#1  0x00000030b8e31d10 in abort () from /lib64/libc.so.6
#2  0x00000030b8e296e6 in __assert_fail () from /lib64/libc.so.6
#3  0x00002aaaaaafb6c3 in glusterd_volume_rebalance_use_rsp_dict (rsp_dict=0x2aaabde38ae0)
    at ../../../../../xlators/mgmt/glusterd/src/glusterd-rpc-ops.c:1356
#4  0x00002aaaaaafbdf0 in glusterd3_1_commit_op_cbk (req=0x2aaaabd7004c, iov=0x2aaaabd7008c, count=1, myframe=0x2b430ecd70d4)
    at ../../../../../xlators/mgmt/glusterd/src/glusterd-rpc-ops.c:1521
#5  0x00002b430dd9525e in rpc_clnt_handle_reply (clnt=0xeec3460, pollin=0x2aaabc008710) at ../../../../rpc/rpc-lib/src/rpc-clnt.c:789
#6  0x00002b430dd95586 in rpc_clnt_notify (trans=0xeec3780, mydata=0xeec3490, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x2aaabc008710)
    at ../../../../rpc/rpc-lib/src/rpc-clnt.c:902
#7  0x00002b430dd919f3 in rpc_transport_notify (this=0xeec3780, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x2aaabc008710)
    at ../../../../rpc/rpc-lib/src/rpc-transport.c:498
#8  0x00002aaaaae06ea7 in socket_event_poll_in (this=0xeec3780) at ../../../../../rpc/rpc-transport/socket/src/socket.c:1675
#9  0x00002aaaaae073e9 in socket_event_handler (fd=9, idx=2, data=0xeec3780, poll_in=1, poll_out=0, poll_err=0)
    at ../../../../../rpc/rpc-transport/socket/src/socket.c:1790
#10 0x00002b430db3d84c in event_dispatch_epoll_handler (event_pool=0xeeb4960, events=0xeebfc70, i=0) at ../../../libglusterfs/src/event.c:794
#11 0x00002b430db3da51 in event_dispatch_epoll (event_pool=0xeeb4960) at ../../../libglusterfs/src/event.c:856
#12 0x00002b430db3ddab in event_dispatch (event_pool=0xeeb4960) at ../../../libglusterfs/src/event.c:956
#13 0x000000000040784d in main (argc=1, argv=0x7fffb034cda8) at ../../../glusterfsd/src/glusterfsd.c:1592
(gdb) f 3
#3  0x00002aaaaaafb6c3 in glusterd_volume_rebalance_use_rsp_dict (rsp_dict=0x2aaabde38ae0)
    at ../../../../../xlators/mgmt/glusterd/src/glusterd-rpc-ops.c:1356
1356            GF_ASSERT (GD_OP_REBALANCE == op);
(gdb) p op
$1 = GD_OP_STATUS_VOLUME
(gdb) f 4
#4  0x00002aaaaaafbdf0 in glusterd3_1_commit_op_cbk (req=0x2aaaabd7004c, iov=0x2aaaabd7008c, count=1, myframe=0x2b430ecd70d4)
    at ../../../../../xlators/mgmt/glusterd/src/glusterd-rpc-ops.c:1521
1521                            ret = glusterd_volume_rebalance_use_rsp_dict (dict);
(gdb) l
1516                            ret = glusterd_volume_status_use_rsp_dict (dict);
1517                            if (ret)
1518                                    goto out;
1519
1520                    case GD_OP_REBALANCE:
1521                            ret = glusterd_volume_rebalance_use_rsp_dict (dict);
1522                            if (ret)
1523                                    goto out;
1524
1525                    break;
(gdb) l -
1506                                    goto out;
1507                    break;
1508
1509                    case GD_OP_GSYNC_SET:
1510                            ret = glusterd_gsync_use_rsp_dict (dict, rsp.op_errstr);
1511                            if (ret)
1512                                    goto out;
1513                    break;
1514
1515                    case GD_OP_STATUS_VOLUME:
(gdb) l
1516                            ret = glusterd_volume_status_use_rsp_dict (dict);
1517                            if (ret)
1518                                    goto out;
1519
1520                    case GD_OP_REBALANCE:
1521                            ret = glusterd_volume_rebalance_use_rsp_dict (dict);
1522                            if (ret)
1523                                    goto out;
1524
1525                    break;
(gdb) 


The break statement above is missing after the GD_OP_STATUS_VOLUME case. Thus, whenever a volume status command is received, glusterd falls through and also executes the rebalance op after the volume_status op, hitting the GF_ASSERT in glusterd_volume_rebalance_use_rsp_dict (frame #3 above) because op is GD_OP_STATUS_VOLUME rather than GD_OP_REBALANCE.
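
For illustration, here is a minimal, self-contained sketch of the fall-through and the fix of adding the missing break after the status case. The names below (commit_op_cbk, use_status_rsp_dict, use_rebalance_rsp_dict, enum gd_op) are simplified stand-ins, not the actual glusterd code; the change merged in comment 1 presumably does the equivalent in glusterd-rpc-ops.c.

#include <assert.h>
#include <stdio.h>

/* Hypothetical stand-ins for the glusterd op codes; the real enum
 * lives in the glusterd sources. */
enum gd_op { GD_OP_STATUS_VOLUME, GD_OP_REBALANCE };

static int use_status_rsp_dict (void)
{
        return 0;
}

static int use_rebalance_rsp_dict (enum gd_op op)
{
        /* Mirrors the GF_ASSERT that fired in frame #3. */
        assert (GD_OP_REBALANCE == op);
        return 0;
}

static int commit_op_cbk (enum gd_op op)
{
        int ret = -1;

        switch (op) {
        case GD_OP_STATUS_VOLUME:
                ret = use_status_rsp_dict ();
                if (ret)
                        goto out;
                break;  /* the missing break; without it, control falls
                           through and the rebalance handler asserts   */

        case GD_OP_REBALANCE:
                ret = use_rebalance_rsp_dict (op);
                if (ret)
                        goto out;
                break;
        }
out:
        return ret;
}

int main (void)
{
        /* With the break in place this no longer aborts. */
        printf ("ret = %d\n", commit_op_cbk (GD_OP_STATUS_VOLUME));
        return 0;
}

Without the break, commit_op_cbk (GD_OP_STATUS_VOLUME) reaches the rebalance branch and aborts on the assertion, which matches the SIGABRT backtrace above.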

Comment 1 Anand Avati 2011-09-29 13:15:55 UTC
CHANGE: http://review.gluster.com/534 (Change-Id: I70e7d38a5cb3f6b0033ab9cabd7dfed0c68b77b8) merged in master by Vijay Bellur (vijay)

Comment 2 Raghavendra Bhat 2011-10-03 08:13:18 UTC
Tested with glusterfs-3.3.0qa13. volume status command does not crash.

