Bug 1032122

Summary: glusterd getting oomkilled
Product: [Community] GlusterFS
Component: glusterd
Version: 3.4.1
Reporter: Lukas Bezdicka <social>
Assignee: krishnan parthasarathi <kparthas>
CC: gluster-bugs, mikko.tiainen, nsathyan, sasundar
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Type: Bug
Doc Type: Bug Fix
Fixed In Version: glusterfs-3.6.1
Last Closed: 2014-11-10 15:13:28 UTC

Attachments:
  - last words from glusterd at debug
  - bt caught shortly before oomkill

Description Lukas Bezdicka 2013-11-19 15:17:02 UTC
Created attachment 826106 [details]
last words from glusterd at debug

Description of problem:
We are seeing random oom-kills of glusterd in production, which we link to geo-replication combined with our monitoring running commands such as gluster volume status and gluster volume heal.

Version-Release number of selected component (if applicable):
3.4.1

How reproducible:
Hard; the closest reproducer I have found is below.

Steps to Reproduce:
1. Set up two nodes holding a replicated volume.
2. On one node, mount the volume and generate heavy traffic (I use bonnie++, but other tools work too).
3. Create a third node and run geo-replication from node 2 to node 3.
4. On node 2, run a while loop issuing simultaneous gluster commands (at least 10 at the same time): gluster volume status, gluster volume info, gluster volume heal info, and gluster volume profile info (see the sketch after these steps).
5. Keep it running for about an hour, then stop the while loop.
6. Watch glusterd memory usage.
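
For concreteness, here is a minimal sketch of the stress loop from step 4, assuming the replicated volume is named gv0 (the volume name and the exact concurrency are illustrative, not taken from this report):

    #!/bin/bash
    # Stress-loop sketch: hammer glusterd with concurrent CLI requests
    # while geo-replication and client I/O are running.
    VOL=gv0            # assumed volume name

    run_batch() {
            gluster volume status              > /dev/null 2>&1
            gluster volume info                > /dev/null 2>&1
            gluster volume heal "$VOL" info    > /dev/null 2>&1
            gluster volume profile "$VOL" info > /dev/null 2>&1
    }

    while true; do
            for i in $(seq 1 10); do
                    run_batch &        # keep at least 10 batches in flight
            done
            wait
    done

For step 6, glusterd memory growth can be watched with something like: watch -n 5 'ps -o rss,vsz,cmd -C glusterd'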

Actual results:
The OOM killer kicks in on glusterd.


Expected results:
No OOM kill of glusterd.

Additional info:
Below are the last log lines from production; see the attachment for the debug logs from my attempted reproducer.

(from host cl-dfs02, /mnt/log/glusterfs/etc-glusterfs-glusterd.vol.log)

[2013-11-19 13:25:14.953780] E [glusterd-op-sm.c:5378:glusterd_op_sm] 0-management: handler returned: -1
[2013-11-19 13:25:14.953730] E [glusterd-utils.c:362:glusterd_unlock] 0-management: Cluster lock not held!
[2013-11-19 13:25:14.946968] E [glusterd-op-sm.c:5378:glusterd_op_sm] 0-management: handler returned: -1

Comment 1 Lukas Bezdicka 2013-11-19 15:17:59 UTC
Wrong copy-paste from the logs; this line from the production logs should be included as well:
[2013-11-19 13:25:14.946912] E [glusterd-utils.c:329:glusterd_lock] 0-management: Unable to get lock for uuid: 389038de-21e5-4063-bce3-a88eb2b5252e, lock held by: 389038de-21e5-4063-bce3-a88eb2b5252e

Comment 2 Lukas Bezdicka 2013-12-04 17:26:24 UTC
==27443== 5,382 bytes in 369 blocks are definitely lost in loss record 242 of 270
==27443==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==27443==    by 0x387B481041: strdup (in /lib64/libc-2.12.so)
==27443==    by 0x82C70E3: get_ip_from_addrinfo (glusterd-utils.c:192)
==27443==    by 0x82C71A4: glusterd_is_local_addr (glusterd-utils.c:289)
==27443==    by 0x82C736A: glusterd_get_brickinfo (glusterd-utils.c:4243)
==27443==    by 0x82E2C25: __gluster_pmap_signin (glusterd-pmap.c:417)
==27443==    by 0x82A1B4E: glusterd_big_locked_handler (glusterd-handler.c:75)
==27443==    by 0x3FD7408F84: rpcsvc_handle_rpc_call (rpcsvc.c:551)
==27443==    by 0x3FD74090D2: rpcsvc_notify (rpcsvc.c:645)
==27443==    by 0x3FD740A4E7: rpc_transport_notify (rpc-transport.c:497)
==27443==    by 0x8EDB1D5: socket_event_poll_in (socket.c:2118)
==27443==    by 0x8EDCBFC: socket_event_handler (socket.c:2230)

==27443== 5,293 bytes in 397 blocks are definitely lost in loss record 241 of 270
==27443==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==27443==    by 0x387B481041: strdup (in /lib64/libc-2.12.so)
==27443==    by 0x82C70E3: get_ip_from_addrinfo (glusterd-utils.c:192)
==27443==    by 0x82C71A4: glusterd_is_local_addr (glusterd-utils.c:289)
==27443==    by 0x82E115F: __server_getspec (glusterd-handshake.c:256)
==27443==    by 0x82A1B4E: glusterd_big_locked_handler (glusterd-handler.c:75)
==27443==    by 0x3FD7408F84: rpcsvc_handle_rpc_call (rpcsvc.c:551)
==27443==    by 0x3FD74090D2: rpcsvc_notify (rpcsvc.c:645)
==27443==    by 0x3FD740A4E7: rpc_transport_notify (rpc-transport.c:497)
==27443==    by 0x8EDB1D5: socket_event_poll_in (socket.c:2118)
==27443==    by 0x8EDCBFC: socket_event_handler (socket.c:2230)
==27443==    by 0x3FD705E206: event_dispatch_epoll (event-epoll.c:384)

==27443== 126 bytes in 9 blocks are definitely lost in loss record 146 of 270
==27443==    at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==27443==    by 0x387B481041: strdup (in /lib64/libc-2.12.so)
==27443==    by 0x82C70E3: get_ip_from_addrinfo (glusterd-utils.c:192)
==27443==    by 0x82C71A4: glusterd_is_local_addr (glusterd-utils.c:289)
==27443==    by 0x82CB9A4: glusterd_hostname_to_uuid (glusterd-utils.c:4913)
==27443==    by 0x82CD390: glusterd_volume_brickinfo_get (glusterd-utils.c:940)
==27443==    by 0x82D2382: glusterd_volume_brickinfo_get_by_brick (glusterd-utils.c:982)
==27443==    by 0x82AEA62: __glusterd_brick_rpc_notify (glusterd-handler.c:3389)
==27443==    by 0x82A1D6F: glusterd_big_locked_notify (glusterd-handler.c:64)
==27443==    by 0x3FD740EC30: rpc_clnt_notify (rpc-clnt.c:923)
==27443==    by 0x3FD740A4E7: rpc_transport_notify (rpc-transport.c:497)
==27443==    by 0x8ED7490: socket_connect_finish (socket.c:2193)


Patch:


    diff --git a/xlators/mgmt/glusterd/src/glusterd-utils.c b/xlators/mgmt/glusterd/src/glusterd-utils.c
    index 282dde0..16cae80 100644
    --- a/xlators/mgmt/glusterd/src/glusterd-utils.c
    +++ b/xlators/mgmt/glusterd/src/glusterd-utils.c
    @@ -291,8 +291,11 @@ glusterd_is_local_addr (char *hostname)
     
                     found = glusterd_is_loopback_localhost (res->ai_addr, hostname)
                             || glusterd_interface_search (ip);
    -                if (found)
    +                if (found) {
    +                        GF_FREE (ip);
                             goto out;
    +                }
    +                GF_FREE (ip);
             }
     
     out:

Comment 3 Lukas Bezdicka 2013-12-04 21:22:28 UTC
==4998== 2,236 bytes in 1 blocks are definitely lost in loss record 254 of 300
==4998==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==4998==    by 0x4C53FF2: __gf_calloc (mem-pool.h:75)
==4998==    by 0x4C54135: mem_get (mem-pool.c:419)
==4998==    by 0x4EA6BD3: rpc_clnt_submit (rpc-clnt.c:1416)
==4998==    by 0x89B2222: gd_syncop_submit_request (glusterd-syncop.c:130)
==4998==    by 0x89B2970: gd_syncop_mgmt_stage_op (glusterd-syncop.c:425)
==4998==    by 0x89B3198: gd_stage_op_phase (glusterd-syncop.c:782)
==4998==    by 0x89B38D1: gd_sync_task_begin (glusterd-syncop.c:1049)
==4998==    by 0x89B3A5A: glusterd_op_begin_synctask (glusterd-syncop.c:1090)
==4998==    by 0x894DB05: __glusterd_handle_status_volume (glusterd-handler.c:3264)
==4998==    by 0x894EB6E: glusterd_big_locked_handler (glusterd-handler.c:75)
==4998==    by 0x4C609E1: synctask_wrap (syncop.c:131)
--
==4998== 2,304 bytes in 8 blocks are definitely lost in loss record 255 of 300
==4998==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==4998==    by 0x4C53FF2: __gf_calloc (mem-pool.h:75)
==4998==    by 0x4C2D7DD: dict_allocate_and_serialize (dict.c:2627)
==4998==    by 0x895A89D: __glusterd_handle_cli_list_volume (glusterd-handler.c:1360)
==4998==    by 0x894EB6E: glusterd_big_locked_handler (glusterd-handler.c:75)
==4998==    by 0x4C609E1: synctask_wrap (syncop.c:131)
==4998==    by 0x3DF5443BEF: ??? (in /lib64/libc-2.12.so)
--
==4998== 9,928 bytes in 73 blocks are definitely lost in loss record 269 of 300
==4998==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==4998==    by 0x4C53FF2: __gf_calloc (mem-pool.h:75)
==4998==    by 0x4C551E4: iobref_new (iobuf.c:771)
==4998==    by 0x937D86B: socket_event_poll_in (socket.c:2022)
==4998==    by 0x937EC3C: socket_event_handler (socket.c:2230)
==4998==    by 0x4C77226: event_dispatch_epoll (event-epoll.c:384)
==4998==    by 0x406817: main (glusterfsd.c:1934)
--
==4998== 42,812 bytes in 7 blocks are definitely lost in loss record 281 of 300
==4998==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==4998==    by 0x4C53FF2: __gf_calloc (mem-pool.h:75)
==4998==    by 0x4C54135: mem_get (mem-pool.c:419)
==4998==    by 0x4EA1AB8: rpcsvc_request_create (rpcsvc.c:363)
==4998==    by 0x4EA1E55: rpcsvc_handle_rpc_call (rpcsvc.c:506)
==4998==    by 0x4EA20D2: rpcsvc_notify (rpcsvc.c:645)
==4998==    by 0x4EA34E7: rpc_transport_notify (rpc-transport.c:497)
==4998==    by 0x937D215: socket_event_poll_in (socket.c:2118)
==4998==    by 0x937EC3C: socket_event_handler (socket.c:2230)
==4998==    by 0x4C77226: event_dispatch_epoll (event-epoll.c:384)
==4998==    by 0x406817: main (glusterfsd.c:1934)
--
==5101== 15 bytes in 1 blocks are definitely lost in loss record 20 of 225
==5101==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==5101==    by 0x4C53FF2: __gf_calloc (mem-pool.h:75)
==5101==    by 0x4C2F62C: _dict_set (dict.c:289)
==5101==    by 0x4C2F7F9: dict_set (dict.c:334)
==5101==    by 0x4EA33BA: rpc_transport_inet_options_build (rpc-transport.c:603)
==5101==    by 0x894EEE3: glusterd_transport_inet_options_build (glusterd-handler.c:2686)
==5101==    by 0x894F21C: glusterd_friend_rpc_create (glusterd-handler.c:2731)
==5101==    by 0x898BA97: glusterd_store_retrieve_peers (glusterd-store.c:2457)
==5101==    by 0x898BD17: glusterd_restore (glusterd-store.c:2519)
==5101==    by 0x894ADA1: init (glusterd.c:1151)
==5101==    by 0x4C313A1: xlator_init (xlator.c:361)
==5101==    by 0x4C5E8C0: glusterfs_graph_init (graph.c:289)

Comment 4 Lukas Bezdicka 2013-12-04 21:32:07 UTC
==4998== 42,812 bytes in 7 blocks are definitely lost in loss record 281 of 300
==4998==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
==4998==    by 0x4C53FF2: __gf_calloc (mem-pool.h:75)
==4998==    by 0x4C54135: mem_get (mem-pool.c:419)
==4998==    by 0x4EA1AB8: rpcsvc_request_create (rpcsvc.c:363)
==4998==    by 0x4EA1E55: rpcsvc_handle_rpc_call (rpcsvc.c:506)
==4998==    by 0x4EA20D2: rpcsvc_notify (rpcsvc.c:645)
==4998==    by 0x4EA34E7: rpc_transport_notify (rpc-transport.c:497)
==4998==    by 0x937D215: socket_event_poll_in (socket.c:2118)
==4998==    by 0x937EC3C: socket_event_handler (socket.c:2230)
==4998==    by 0x4C77226: event_dispatch_epoll (event-epoll.c:384)
==4998==    by 0x406817: main (glusterfsd.c:1934)
--

17:23 < social> hagarth: well I'm hunting some memleak and I got a bit puzzled, do you have 5min? Please checkout release-3.4 in git and look at rpc/rpc-lib/src/rpcsvc.c
17:24 < social> line 363 calls rpcsvc_alloc_request which is wrapped up mem_get to req, in case of error after that req is just set to NULL
17:25 < social> hagarth: shouldn't there be some mem_put call or something? I think it's in rpcsvc_request_destroy but I don't see it in rpcsvc_request_create
17:26 < social> so (now goes only my theory) when I dos gluster with for example gluster volume status request so much it starts callocing memory in mem_get it'll leak the calloced memory here?

Comment 5 Anand Avati 2013-12-05 07:02:53 UTC
REVIEW: http://review.gluster.org/6433 (libglusterfs: Free IP address string in gf_is_local_addr()) posted (#1) for review on master by Vijay Bellur (vbellur)

Comment 6 Lukas Bezdicka 2013-12-05 10:04:04 UTC
Patch works fine and the ip leak disappears from the valgrind output.
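
For reference, leak reports like the ones in the earlier comments can be gathered by running glusterd in the foreground under valgrind; a sketch, assuming your build's glusterd accepts -N to stay in the foreground:

    # Run glusterd under valgrind and collect a per-process leak report.
    valgrind --leak-check=full \
             --log-file=/var/log/glusterfs/glusterd-valgrind.%p.log \
             glusterd -N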

Comment 7 Lukas Bezdicka 2013-12-05 11:01:04 UTC
(In reply to Lukas Bezdicka from comment #4)
> ==4998== 42,812 bytes in 7 blocks are definitely lost in loss record 281 of
> 300
> ==4998==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
> ==4998==    by 0x4C53FF2: __gf_calloc (mem-pool.h:75)
> ==4998==    by 0x4C54135: mem_get (mem-pool.c:419)
> ==4998==    by 0x4EA1AB8: rpcsvc_request_create (rpcsvc.c:363)
> ==4998==    by 0x4EA1E55: rpcsvc_handle_rpc_call (rpcsvc.c:506)
> ==4998==    by 0x4EA20D2: rpcsvc_notify (rpcsvc.c:645)
> ==4998==    by 0x4EA34E7: rpc_transport_notify (rpc-transport.c:497)
> ==4998==    by 0x937D215: socket_event_poll_in (socket.c:2118)
> ==4998==    by 0x937EC3C: socket_event_handler (socket.c:2230)
> ==4998==    by 0x4C77226: event_dispatch_epoll (event-epoll.c:384)
> ==4998==    by 0x406817: main (glusterfsd.c:1934)
> --
> 
> 17:23 < social> hagarth: well I'm hunting some memleak and I got a bit
> puzzled, do you have 5min? Please checkout release-3.4 in git and look at
> rpc/rpc-lib/src/rpcsvc.c
> 17:24 < social> line 363 calls rpcsvc_alloc_request which is wrapped up
> mem_get to req, in case of error after that req is just set to NULL
> 17:25 < social> hagarth: shouldn't there be some mem_put call or something?
> I think it's in rpcsvc_request_destroy but I don't see it in
> rpcsvc_request_create
> 17:26 < social> so (now goes only my theory) when I dos gluster with for
> example gluster volume status request so much it starts callocing memory in
> mem_get it'll leak the calloced memory here?

diff --git a/rpc/rpc-lib/src/rpcsvc.c b/rpc/rpc-lib/src/rpcsvc.c
index d9bbb1c..aae6bfd 100644
--- a/rpc/rpc-lib/src/rpcsvc.c
+++ b/rpc/rpc-lib/src/rpcsvc.c
@@ -431,6 +431,7 @@ err:
                 if (ret)
                         gf_log ("rpcsvc", GF_LOG_WARNING,
                                 "failed to queue error reply");
+               rpcsvc_request_destroy(req);
                 req = NULL;
         }

Comment 8 Anand Avati 2013-12-05 12:50:34 UTC
REVIEW: http://review.gluster.org/6440 (rpcsvc: destroy request before returning NULL on error in rpcsvc_request_create) posted (#1) for review on master by Lukáš Bezdička (lukas.bezdicka)

Comment 9 Anand Avati 2013-12-05 21:36:26 UTC
COMMIT: http://review.gluster.org/6433 committed in master by Anand Avati (avati) 
------
commit 39dd3b21c59380fb5f4dcae59ebd4f8e000cfa98
Author: Vijay Bellur <vbellur>
Date:   Thu Dec 5 12:31:34 2013 +0530

    libglusterfs: Free IP address string in gf_is_local_addr()
    
    Change-Id: Ib113de269134c907aa2f35459e2764c142b94477
    BUG: 1032122
    Signed-off-by: Vijay Bellur <vbellur>
    Reviewed-on: http://review.gluster.org/6433
    Tested-by: Lukáš Bezdička <lukas.bezdicka>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Anand Avati <avati>

Comment 10 Lukas Bezdicka 2013-12-06 09:43:02 UTC
Caught on production shortly before oomkill:

    #0  0x000000387b4e54b7 in mprotect () from /lib64/libc.so.6
    #1  0x000000387b47a36b in _int_malloc () from /lib64/libc.so.6
    #2  0x000000387b47a5a6 in calloc () from /lib64/libc.so.6
    #3  0x0000003fd703afd3 in __gf_default_calloc (nmemb=<value optimized out>, size=2236, type=48) at mem-pool.h:75
    #4  __gf_calloc (nmemb=<value optimized out>, size=2236, type=48) at mem-pool.c:104
    #5  0x0000003fd703b116 in mem_get (mem_pool=0x10ddac0) at mem-pool.c:419
    #6  0x0000003fd740dbd4 in rpc_clnt_submit (rpc=0x10dd9a0, prog=<value optimized out>, procnum=<value optimized out>, cbkfn=0x7fbb00a6ff60 <gd_syncop_stage_op_cbk>,
    #7  0x00007fbb00a701e3 in gd_syncop_submit_request (rpc=0x10dd9a0, req=0x7fbab8001750, cookie=<value optimized out>, prog=<value optimized out>, procnum=3,
    #8  0x00007fbb00a70931 in gd_syncop_mgmt_stage_op (rpc=0x10dd9a0, args=0x1500820, my_uuid=<value optimized out>, recv_uuid=<value optimized out>, op=15,
    #9  0x00007fbb00a71159 in gd_stage_op_phase (peers=0x10936c0, op=GD_OP_GSYNC_SET, op_ctx=0x7fbb02c900f8, req_dict=0x7fbb02c8ff54, op_errstr=0x1501018, npeers=22024080)
    #10 0x00007fbb00a71892 in gd_sync_task_begin (op_ctx=0x7fbb02c900f8, req=0x7fbb0003fa44) at glusterd-syncop.c:1049
    #11 0x00007fbb00a71a1b in glusterd_op_begin_synctask (req=0x7fbb0003fa44, op=<value optimized out>, dict=0x7fbb02c900f8) at glusterd-syncop.c:1090
    #12 0x00007fbb00a5e6db in __glusterd_handle_gsync_set (req=0x7fbb0003fa44) at glusterd-geo-rep.c:143
    #13 0x00007fbb00a0cb4f in glusterd_big_locked_handler (req=0x7fbb0003fa44, actor_fn=0x7fbb00a5e570 <__glusterd_handle_gsync_set>) at glusterd-handler.c:75
    #14 0x0000003fd70479c2 in synctask_wrap (old_task=<value optimized out>) at syncop.c:131
    #15 0x000000387b443b70 in ?? () from /lib64/libc.so.6
    #16 0x0000000000000000 in ?? ()
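
As a side note, a backtrace like the one above can be captured from a live glusterd before the OOM killer fires with something along these lines (a sketch; gdb and the matching debuginfo packages are assumed to be installed):

    # Attach to the running glusterd and dump backtraces of all threads.
    gdb -p "$(pidof glusterd)" -batch -ex "thread apply all bt"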

Comment 11 Lukas Bezdicka 2013-12-06 09:47:13 UTC
Created attachment 833514 [details]
bt caught shortly before oomkill

Comment 12 Anand Avati 2013-12-08 12:45:02 UTC
REVIEW: http://review.gluster.org/6462 (libglusterfs: Free IP address string in gf_is_local_addr()) posted (#1) for review on release-3.5 by Vijay Bellur (vbellur)

Comment 13 Lukas Bezdicka 2013-12-10 13:42:12 UTC
(In reply to Lukas Bezdicka from comment #7)
> (In reply to Lukas Bezdicka from comment #4)
> > ==4998== 42,812 bytes in 7 blocks are definitely lost in loss record 281 of
> > 300
> > ==4998==    at 0x4A0577B: calloc (vg_replace_malloc.c:593)
> > ==4998==    by 0x4C53FF2: __gf_calloc (mem-pool.h:75)
> > ==4998==    by 0x4C54135: mem_get (mem-pool.c:419)
> > ==4998==    by 0x4EA1AB8: rpcsvc_request_create (rpcsvc.c:363)
> > ==4998==    by 0x4EA1E55: rpcsvc_handle_rpc_call (rpcsvc.c:506)
> > ==4998==    by 0x4EA20D2: rpcsvc_notify (rpcsvc.c:645)
> > ==4998==    by 0x4EA34E7: rpc_transport_notify (rpc-transport.c:497)
> > ==4998==    by 0x937D215: socket_event_poll_in (socket.c:2118)
> > ==4998==    by 0x937EC3C: socket_event_handler (socket.c:2230)
> > ==4998==    by 0x4C77226: event_dispatch_epoll (event-epoll.c:384)
> > ==4998==    by 0x406817: main (glusterfsd.c:1934)
> > --
> > 
> > 17:23 < social> hagarth: well I'm hunting some memleak and I got a bit
> > puzzled, do you have 5min? Please checkout release-3.4 in git and look at
> > rpc/rpc-lib/src/rpcsvc.c
> > 17:24 < social> line 363 calls rpcsvc_alloc_request which is wrapped up
> > mem_get to req, in case of error after that req is just set to NULL
> > 17:25 < social> hagarth: shouldn't there be some mem_put call or something?
> > I think it's in rpcsvc_request_destroy but I don't see it in
> > rpcsvc_request_create
> > 17:26 < social> so (now goes only my theory) when I dos gluster with for
> > example gluster volume status request so much it starts callocing memory in
> > mem_get it'll leak the calloced memory here?
> 
> diff --git a/rpc/rpc-lib/src/rpcsvc.c b/rpc/rpc-lib/src/rpcsvc.c
> index d9bbb1c..aae6bfd 100644
> --- a/rpc/rpc-lib/src/rpcsvc.c
> +++ b/rpc/rpc-lib/src/rpcsvc.c
> @@ -431,6 +431,7 @@ err:
>                  if (ret)
>                          gf_log ("rpcsvc", GF_LOG_WARNING,
>                                  "failed to queue error reply");
> +               rpcsvc_request_destroy(req);
>                  req = NULL;
>          }

I'm unable to reproduce that one now :/

Comment 14 Anand Avati 2013-12-16 18:10:18 UTC
REVIEW: http://review.gluster.org/6518 (mgmt/glusterd: Fix a memory leak in glusterd_is_local_addr()) posted (#1) for review on release-3.4 by Vijay Bellur (vbellur)

Comment 15 Anand Avati 2013-12-17 07:15:52 UTC
COMMIT: http://review.gluster.org/6518 committed in release-3.4 by Vijay Bellur (vbellur) 
------
commit 1832dbf0ba3d5153415c7e7f7eab935007cc8209
Author: Vijay Bellur <vbellur>
Date:   Mon Dec 16 23:37:27 2013 +0530

    mgmt/glusterd: Fix a memory leak in glusterd_is_local_addr()
    
    Change-Id: Id41d828e1cc56005f5e2a1e75b6d858703dd79c9
    BUG: 1032122
    Signed-off-by: Vijay Bellur <vbellur>
    Reviewed-on: http://review.gluster.org/6518
    Reviewed-by: Lukáš Bezdička <lukas.bezdicka>
    Tested-by: Gluster Build System <jenkins.com>

Comment 16 Anand Avati 2013-12-17 17:02:39 UTC
COMMIT: http://review.gluster.org/6462 committed in release-3.5 by Vijay Bellur (vbellur) 
------
commit 20d53b39197871f34dd7dcb00bdf6193add35ef5
Author: Vijay Bellur <vbellur>
Date:   Thu Dec 5 12:31:34 2013 +0530

    libglusterfs: Free IP address string in gf_is_local_addr()
    
    Change-Id: Ib113de269134c907aa2f35459e2764c142b94477
    BUG: 1032122
    Signed-off-by: Vijay Bellur <vbellur>
    Reviewed-on: http://review.gluster.org/6433
    Tested-by: Lukáš Bezdička <lukas.bezdicka>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Anand Avati <avati>
    Reviewed-on: http://review.gluster.org/6462
    Reviewed-by: Pranith Kumar Karampuri <pkarampu>

Comment 17 Niels de Vos 2014-11-10 15:13:28 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.1, please reopen this bug report.

glusterfs-3.6.1 has been announced [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019410.html
[2] http://supercolony.gluster.org/mailman/listinfo/gluster-users

Comment 18 Mikko Tiainen 2014-12-23 07:25:24 UTC
Hi,
it seems that this bug is present again in glusterfs 3.6.1. I have filed an additional ticket for the same kind of glusterd oomkill issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1175617