Bug 770048 - 'gluster peer probe <non-resolvable-hostname>' crashes glusterd
Summary: 'gluster peer probe <non-resolvable-hostname>' crashes glusterd
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Assignee: Kaushal
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-12-23 07:20 UTC by Amar Tumballi
Modified: 2013-12-19 00:07 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-02-22 06:07:21 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Amar Tumballi 2011-12-23 07:20:04 UTC
Description of problem:
The glusterd process segfaults when issued 'gluster peer probe abcd', where 'abcd' is a non-resolvable hostname.

Version-Release number of selected component (if applicable):
mainline

Steps to Reproduce:
# glusterd
# gluster peer probe abcd
# gluster peer status

  
Actual results:
glusterd crashes with a segmentation fault (backtrace below).

Expected results:
The command should give a proper error and exit; glusterd should not crash.

Additional info:

#0  0x00000037cce48212 in vfprintf () from /lib64/libc.so.6
#1  0x00000037cce6fda2 in vsnprintf () from /lib64/libc.so.6
#2  0x00000037cce50cb2 in snprintf () from /lib64/libc.so.6
#3  0x00007ffd8360b899 in glusterd_store_hostname_peerpath_set (peerinfo=0x1a15640, 
    peerfpath=0x7fffa4b559d0 "/etc/glusterd/peers/\377\177", len=4096)
    at ../../../../../xlators/mgmt/glusterd/src/glusterd-store.c:2119
#4  0x00007ffd8360ba28 in glusterd_peerinfo_hostname_shandle_check_destroy (peerinfo=0x1a15640)
    at ../../../../../xlators/mgmt/glusterd/src/glusterd-store.c:2155
#5  0x00007ffd8360bbcc in glusterd_store_create_peer_shandle (peerinfo=0x1a15640)
    at ../../../../../xlators/mgmt/glusterd/src/glusterd-store.c:2177
#6  0x00007ffd8360c0a7 in glusterd_store_peerinfo (peerinfo=0x1a15640)
    at ../../../../../xlators/mgmt/glusterd/src/glusterd-store.c:2245
#7  0x00007ffd835ca460 in glusterd_friend_add (hoststr=0x1a22df0 "abcd1", port=24007, state=GD_FRIEND_STATE_DEFAULT, 
    uuid=0x0, rpc=0x1a162a0, friend=0x7fffa4b56b30, restore=_gf_false, args=0x7fffa4b56b20)
    at ../../../../../xlators/mgmt/glusterd/src/glusterd-handler.c:2107
#8  0x00007ffd835ca7af in glusterd_probe_begin (req=0x7ffd831c104c, hoststr=0x1a22df0 "abcd1", port=24007)
    at ../../../../../xlators/mgmt/glusterd/src/glusterd-handler.c:2145
#9  0x00007ffd835c389f in glusterd_handle_cli_probe (req=0x7ffd831c104c)
    at ../../../../../xlators/mgmt/glusterd/src/glusterd-handler.c:677
#10 0x00007ffd874a9ba8 in rpcsvc_handle_rpc_call (svc=0x1a0faa0, trans=0x1a239f0, msg=0x1a22d70)
    at ../../../../rpc/rpc-lib/src/rpcsvc.c:507
#11 0x00007ffd874aa15a in rpcsvc_notify (trans=0x1a239f0, mydata=0x1a0faa0, event=RPC_TRANSPORT_MSG_RECEIVED, 
    data=0x1a22d70) at ../../../../rpc/rpc-lib/src/rpcsvc.c:603
#12 0x00007ffd874b3b1a in rpc_transport_notify (this=0x1a239f0, event=RPC_TRANSPORT_MSG_RECEIVED, data=0x1a22d70)
    at ../../../../rpc/rpc-lib/src/rpc-transport.c:498
#13 0x00007ffd82fabfae in socket_event_poll_in (this=0x1a239f0) at ../../../../../rpc/rpc-transport/socket/src/socket.c:1675
#14 0x00007ffd82fac9e0 in socket_event_handler (fd=5, idx=1, data=0x1a239f0, poll_in=1, poll_out=0, poll_err=0)
    at ../../../../../rpc/rpc-transport/socket/src/socket.c:1790
#15 0x00007ffd877383d4 in event_dispatch_epoll_handler (event_pool=0x1a062d0, events=0x1a13e70, i=0)
    at ../../../libglusterfs/src/event.c:794
(gdb) fr 3
#3  0x00007ffd8360b899 in glusterd_store_hostname_peerpath_set (peerinfo=0x1a15640, 
    peerfpath=0x7fffa4b559d0 "/etc/glusterd/peers/\377\177", len=4096)
    at ../../../../../xlators/mgmt/glusterd/src/glusterd-store.c:2119
2119	        snprintf (peerfpath, len, "%s/%s", peerdir, peerinfo->hostname);
(gdb) p peerinfo
$1 = (glusterd_peerinfo_t *) 0x1a15640
(gdb) p *peerinfo
$2 = {uuid = "[2011-12-23 12:4", uuid_str = "7:22.079591] I [glusterd-handler.c:2315:glusterd_x", state = {
    state = 1919967081, transition_time = {tv_sec = 3256138116279526770, tv_usec = 7237959101965495399}}, 
  hostname = 0x203a <Address 0x203a out of bounds>, port = 0, uuid_list = {next = 0x0, prev = 0x21}, op_peers_list = {
    next = 0x37cd195248, prev = 0x37cd195248}, rpc = 0x1a162a0, mgmt = 0x20, peer = 0x6c2f343662696c2f, 
  connected = 1667719785, shandle = 0x312e6f73, sm_log = {transitions = 0x4a1, current = 0, size = 27350752, 
    count = 239687780176, state_name_get = 0, event_name_get = 0x1a14f30}}
(gdb)
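
The dump of *peerinfo above shows the struct clobbered with what looks like log text, and hostname pointing out of bounds, so the snprintf in frame #3 is formatting a stale pointer. The real fix presumably has to keep the peerinfo from being freed or overwritten after the failed name resolution; the stand-alone sketch below (simplified stand-in types, not the actual glusterd code) only illustrates the defensive shape of the path-building step:

#include <stdio.h>
#include <stddef.h>

/* Simplified stand-in for glusterd_peerinfo_t; only the field used by
 * frame #3 (peerinfo->hostname) is modeled here. */
typedef struct {
        char *hostname;
} peerinfo_t;

/* Hypothetical guard for the path-building step in frame #3: refuse to
 * build the peer file path when the hostname was never set, instead of
 * handing snprintf a bogus pointer. */
static int
peerpath_set (const peerinfo_t *peerinfo, char *peerfpath, size_t len)
{
        const char *peerdir = "/etc/glusterd/peers"; /* prefix seen in the backtrace */

        if (peerinfo == NULL || peerinfo->hostname == NULL)
                return -1; /* caller can report a proper error and bail out */

        snprintf (peerfpath, len, "%s/%s", peerdir, peerinfo->hostname);
        return 0;
}

int
main (void)
{
        char path[4096];
        peerinfo_t bad = { .hostname = NULL }; /* mimics the trashed peerinfo */

        if (peerpath_set (&bad, path, sizeof (path)) < 0)
                fprintf (stderr, "invalid peerinfo, skipping store\n");
        return 0;
}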

Comment 1 Kaushal 2012-01-05 12:52:01 UTC
Is anyone else getting this crash? I don't. 
Peer probe always fails for me with the message, 
"Probe Unsuccessful
 Probe returned with unknown errno 107"

Amar, are there any other conditions for this to happen?

Comment 2 Amar Tumballi 2012-02-20 07:20:10 UTC
Seems that it's working for me now with today's master branch. Closing it as it works for me.

Comment 3 Shwetha Panduranga 2012-02-21 05:59:43 UTC
I am seeing this issue on mainline

[root@APP-SERVER1 ~]# gluster peer probe APP-SERVER2
Probe unsuccessful
Probe returned with unknown errno 107

Comment 4 Amar Tumballi 2012-02-21 06:16:59 UTC
Shwetha, when this output appears, has the 'glusterd' process crashed? If not, we should open a new bug about the output not being meaningful; this bug is about glusterd crashing with the backtrace above.

Comment 5 Shwetha Panduranga 2012-02-22 05:57:13 UTC
Glusterd didn't crash, but the command gave the "Probe unsuccessful" output.

Comment 6 Amar Tumballi 2012-02-22 06:07:21 UTC
Thanks for clarifying. 'Probe unsuccessful' is valid output (as the probe is indeed not successful), so I will be closing this bug as there is no longer a crash.

But if we think 'Probe returned with unknown errno 107' is not meaningful, then we can file a 'low' severity bug about the wording of the message itself. Go ahead and raise the bug.
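
For reference, errno 107 on Linux is ENOTCONN ("Transport endpoint is not connected"); such a follow-up bug could propose rendering the error through strerror(3) instead of printing the raw number. A trivial illustrative sketch (not the actual CLI code):

#include <stdio.h>
#include <string.h>
#include <errno.h>

/* errno 107 on Linux is ENOTCONN; mapping it through strerror() gives the
 * readable text a friendlier CLI message could carry. */
int
main (void)
{
        printf ("Probe unsuccessful: %s (errno %d)\n",
                strerror (ENOTCONN), ENOTCONN);
        return 0;
}

This prints "Probe unsuccessful: Transport endpoint is not connected (errno 107)".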

