Bug 763170 (GLUSTER-1438)

Summary: memory leaks
Product: [Community] GlusterFS
Reporter: Lakshmipathi G <lakshmipathi>
Component: unclassified
Assignee: Raghavendra G <raghavendra>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: medium
Docs Contact:
Priority: low
Version: 3.1-alpha
CC: anush, gluster-bugs, vijay
Target Milestone: ---
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: ---
Regression: RTP
Mount Type: fuse
Documentation: DNR
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Attachments:
vol files (flags: none)

Description Lakshmipathi G 2010-08-25 09:58:37 UTC
A few 'definitely lost' leaks:
---------------------------
==12844==
==12844== 854,859 bytes in 9,531 blocks are definitely lost in loss record 21 of 22
==12844==    at 0x4C1ED1F: calloc (vg_replace_malloc.c:279)
==12844==    by 0x4E5EB16: __gf_calloc (mem-pool.c:135)
==12844==    by 0x8689E62: __socket_read_reply (socket.c:1201)
==12844==    by 0x868A31D: __socket_read_frag (socket.c:1278)
==12844==    by 0x868A719: __socket_proto_state_machine (socket.c:1403)
==12844==    by 0x868AA3C: socket_proto_state_machine (socket.c:1513)
==12844==    by 0x868AA7C: socket_event_poll_in (socket.c:1528)
==12844==    by 0x868ADD8: socket_event_handler (socket.c:1645)
==12844==    by 0x4E5E318: event_dispatch_epoll_handler (event.c:812)
==12844==    by 0x4E5E4EB: event_dispatch_epoll (event.c:876)
==12844==    by 0x4E5E7BB: event_dispatch (event.c:984)
==12844==    by 0x4050C9: main (glusterfsd.c:1297)


==12844== 
==12844== 88 bytes in 6 blocks are definitely lost in loss record 4 of 22
==12844==    at 0x4C1F9F6: malloc (vg_replace_malloc.c:149)
==12844==    by 0x59C8153: xdr_string (in /lib64/libc-2.7.so)
==12844==    by 0x508F38D: xdr_gf_prog_detail (rpc-common.c:82)
==12844==    by 0x59C947D: xdr_reference (in /lib64/libc-2.7.so)
==12844==    by 0x59C9430: xdr_pointer (in /lib64/libc-2.7.so)
==12844==    by 0x508F49B: xdr_gf_dump_rsp (rpc-common.c:102)
==12844==    by 0x508F301: xdr_to_generic (rpc-common.c:60)
==12844==    by 0x508F58E: xdr_to_dump_rsp (rpc-common.c:133)
==12844==    by 0x75858E0: client_dump_version_cbk (client-handshake.c:792)
==12844==    by 0x508D692: rpc_clnt_handle_reply (rpc-clnt.c:690)
==12844==    by 0x508D9A3: rpc_clnt_notify (rpc-clnt.c:799)
==12844==    by 0x508B6AF: rpc_transport_notify (rpc-transport.c:1123)
==12844== 
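
The xdr_string allocations above are made while decoding the DUMP reply in
client_dump_version_cbk. A common remedy for this pattern (a sketch only, not
necessarily the eventual patch; it assumes the decoded object is a gf_dump_rsp,
as rpc-common.c suggests) is to release the decoded object with xdr_free()
once the callback has copied what it needs:

    #include <rpc/xdr.h>        /* xdr_free(), xdrproc_t */

    gf_dump_rsp rsp = {0,};

    /* ... xdr_to_dump_rsp() fills rsp, the callback uses it ... */

    /* release the strings that xdr_string() allocated during decoding */
    xdr_free ((xdrproc_t) xdr_gf_dump_rsp, (char *) &rsp);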

==12844== 214 bytes in 2 blocks are definitely lost in loss record 7 of 22
==12844==    at 0x4C1F9F6: malloc (vg_replace_malloc.c:149)
==12844==    by 0x4E5EBB5: __gf_malloc (mem-pool.c:159)
==12844==    by 0x4E5EE53: gf_asprintf (mem-pool.c:213)
==12844==    by 0x508ACCB: rpc_transport_load (rpc-transport.c:887)
==12844==    by 0x508DB32: rpc_clnt_connection_init (rpc-clnt.c:866)
==12844==    by 0x508DDC3: rpc_clnt_init (rpc-clnt.c:938)
==12844==    by 0x757123E: client_init_rpc (client.c:1721)
==12844==    by 0x75713FF: init (client.c:1783)
==12844==    by 0x4E3BF10: __xlator_init (xlator.c:828)
==12844==    by 0x4E3BFEE: xlator_init (xlator.c:856)
==12844==    by 0x4E68A08: glusterfs_graph_init (graph.c:307)
==12844==    by 0x4E68EC6: glusterfs_graph_activate (graph.c:470)
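
The third trace is a small one-shot leak: the string formatted with
gf_asprintf() while loading the transport in rpc_transport_load() is never
released. The general pattern (a sketch with hypothetical variable names, not
the actual fix) is to pair every gf_asprintf() with a GF_FREE() once the
string has served its purpose:

    char *name = NULL;  /* hypothetical local, standing in for the string
                           built at rpc-transport.c:887 */

    if (gf_asprintf (&name, "%s.so", type) < 0)
            goto out;
    /* ... use name to locate and load the transport ... */
out:
    if (name)
            GF_FREE (name);  /* release the formatted string */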

Comment 1 Lakshmipathi G 2010-08-25 12:43:02 UTC
While running valgrind with the system_light tools, it reports the following messages.
The complete server log can be found under /share/tickets/valgrind/aug24

client
-------
==12844== LEAK SUMMARY:
==12844==    definitely lost: 855,161 bytes in 9,539 blocks.
==12844==      possibly lost: 98,305,625 bytes in 28,728 blocks.
==12844==    still reachable: 114,568 bytes in 1,166 blocks.
==12844==         suppressed: 0 bytes in 0 blocks.

server
-------
==12837== LEAK SUMMARY:
==12837==    definitely lost: 242 bytes in 3 blocks.
==12837==      possibly lost: 48,040,086 bytes in 6,697 blocks.
==12837==    still reachable: 4,195,922 bytes in 1,053 blocks.
==12837==         suppressed: 0 bytes in 0 blocks.
---------------

Comment 2 Anand Avati 2010-08-26 04:09:30 UTC
PATCH: http://patches.gluster.com/patch/4312 in master (transport/socket: free priv->incoming.request_info if not already freed after reading each message.)
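
This targets the __socket_read_reply trace from the description: the
request_info structure allocated there (via __gf_calloc) for every message has
to be released once the message has been handed off. A minimal sketch of the
pattern named in the patch summary (its exact placement inside the socket
state machine is an assumption):

    /* after a complete message has been consumed */
    if (priv->incoming.request_info != NULL) {
            GF_FREE (priv->incoming.request_info);
            priv->incoming.request_info = NULL;
    }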

Comment 3 Lakshmipathi G 2010-08-27 01:56:05 UTC
Another leak, found while running nightly_valgrind with afr:

==21489== 114,157 bytes in 1,510 blocks are definitely lost in loss record 20 of 22
==21489==    at 0x4C1ED1F: calloc (vg_replace_malloc.c:279)
==21489==    by 0x4E5EB16: __gf_calloc (mem-pool.c:135)
==21489==    by 0x402C9C: gf_strdup (mem-pool.h:87)
==21489==    by 0x4035FE: parse_opts (glusterfsd.c:498)
==21489==    by 0x59AA707: argp_parse (in /lib64/libc-2.7.so)
==21489==    by 0x40434F: parse_cmdline (glusterfsd.c:877)
==21489==    by 0x40505B: main (glusterfsd.c:1275)

==21489== LEAK SUMMARY:
==21489==    definitely lost: 114,459 bytes in 1,518 blocks.
==21489==      possibly lost: 253,251,492 bytes in 327,588 blocks.
==21489==    still reachable: 243,155 bytes in 1,988 blocks.
==21489==         suppressed: 0 bytes in 0 blocks.

Comment 4 Lakshmipathi G 2010-08-27 02:07:08 UTC
Created attachment 297

Comment 5 Lakshmipathi G 2010-08-27 02:12:13 UTC
Logs can be found under /share/tickets/valgrind/2010-08-26/

Here are a few large 'possibly lost' records:

==21493== 
==21493== 5,340,668 bytes in 32 blocks are possibly lost in loss record 15 of 15
==21493==    at 0x4C1ED1F: calloc (vg_replace_malloc.c:279)
==21493==    by 0x4E5EB16: __gf_calloc (mem-pool.c:135)
==21493==    by 0x402C9C: gf_strdup (mem-pool.h:87)
==21493==    by 0x403AC7: generate_uuid (glusterfsd.c:651)
==21493==    by 0x403F90: glusterfs_ctx_defaults_init (glusterfsd.c:764)
==21493==    by 0x40503E: main (glusterfsd.c:1271)


==21489== 
==21489== 
==21489== 123,358,416 bytes in 316,588 blocks are possibly lost in loss record 21 of 22
==21489==    at 0x4C1ED1F: calloc (vg_replace_malloc.c:279)
==21489==    by 0x4E5EB16: __gf_calloc (mem-pool.c:135)
==21489==    by 0x4E30D19: get_new_data (dict.c:57)
==21489==    by 0x4E35B0B: dict_unserialize (dict.c:2486)
==21489==    by 0x757955C: client3_1_lookup_cbk (client3_1-fops.c:1935)
==21489==    by 0x508D692: rpc_clnt_handle_reply (rpc-clnt.c:690)
==21489==    by 0x508D9A3: rpc_clnt_notify (rpc-clnt.c:799)
==21489==    by 0x508B6AF: rpc_transport_notify (rpc-transport.c:1123)
==21489==    by 0x868AA9F: socket_event_poll_in (socket.c:1531)
==21489==    by 0x868ADD8: socket_event_handler (socket.c:1645)
==21489==    by 0x4E5E318: event_dispatch_epoll_handler (event.c:812)
==21489==    by 0x4E5E4EB: event_dispatch_epoll (event.c:876)
==21489== 
==21489== 
==21489== 129,892,260 bytes in 10,997 blocks are possibly lost in loss record 22 of 22
==21489==    at 0x4C1F9F6: malloc (vg_replace_malloc.c:149)
==21489==    by 0x4E5EBB5: __gf_malloc (mem-pool.c:159)
==21489==    by 0x7E10777: iov_dup (common-utils.h:169)
==21489==    by 0x7E103E9: ioc_fault_cbk (page.c:404)
==21489==    by 0x7C039A2: ra_frame_unwind (page.c:403)
==21489==    by 0x7C03A75: ra_frame_return (page.c:435)
==21489==    by 0x7C02610: ra_waitq_return (page.c:127)
==21489==    by 0x7C028D0: ra_fault_cbk (page.c:196)
==21489==    by 0x79F5762: wb_readv_cbk (write-behind.c:2095)
==21489==    by 0x77A5D58: afr_readv_cbk (afr-inode-read.c:809)

Comment 6 Lakshmipathi G 2010-08-31 02:25:56 UTC
Last night's afr run shows the following huge 'possibly lost' records in the client logs.

==29494== 
==29494== 140,384,424 bytes in 502,649 blocks are possibly lost in loss record 24 of 24
==29494==    at 0x4C1ED1F: calloc (vg_replace_malloc.c:279)
==29494==    by 0x4E5EB16: __gf_calloc (mem-pool.c:135)
==29494==    by 0x7E123E5: ioc_inode_update (ioc-inode.c:161)
==29494==    by 0x7E0AE5E: ioc_create_cbk (io-cache.c:612)
==29494==    by 0x7BFE45E: ra_create_cbk (read-ahead.c:181)
==29494==    by 0x79F392F: wb_create_cbk (write-behind.c:1417)
==29494==    by 0x7798E5C: afr_create_unwind (afr-dir-write.c:109)
==29494==    by 0x77993C6: afr_create_wind_cbk (afr-dir-write.c:219)
==29494==    by 0x7577CA0: client3_1_create_cbk (client3_1-fops.c:1505)
==29494==    by 0x508D692: rpc_clnt_handle_reply (rpc-clnt.c:690)
==29494==    by 0x508D9A3: rpc_clnt_notify (rpc-clnt.c:799)
==29494==    by 0x508B6AF: rpc_transport_notify (rpc-transport.c:1123)
==29494== 
==29494== 
==29494== 134,639,795 bytes in 17,175 blocks are possibly lost in loss record 23 of 24
==29494==    at 0x4C1F9F6: malloc (vg_replace_malloc.c:149)
==29494==    by 0x4E5EBB5: __gf_malloc (mem-pool.c:159)
==29494==    by 0x4E5EE53: gf_asprintf (mem-pool.c:213)
==29494==    by 0x4E32D36: data_from_double (dict.c:887)
==29494==    by 0x4E34B19: dict_set_double (dict.c:1884)
==29494==    by 0x402B46: create_fuse_mount (glusterfsd.c:229)
==29494==    by 0x405089: main (glusterfsd.c:1285)
==29494== 
==29494==

Comment 7 Lakshmipathi G 2010-08-31 04:42:05 UTC
Process state dump of the dht server and client: a diff taken before ('<') and after ('>') running the QA system_light tests.
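
The memusage.* keys below come from GlusterFS's per-translator memory
accounting: every GF_CALLOC/GF_MALLOC carries a memory-type enum, and the
statedump prints running totals per translator and per type. A rough sketch of
how the counters move (gf_common_mt_char is a real type from mem-types.h; the
numbers are only illustrative):

    char *buf = GF_CALLOC (1, 64, gf_common_mt_char);
    /* memusage.<xlator>.type.<gf_common_mt_char>.num_allocs += 1, .size += 64 */

    GF_FREE (buf);
    /* .num_allocs and .size go back down; .max_size / .max_num_allocs keep
       their high-water marks */

A type whose .size/.num_allocs keep growing across the run, as several do
below, is a leak candidate.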

s1
---
< memusage.protocol/server.server-tcp.type.73.max_size=130
< memusage.protocol/server.server-tcp.type.73.max_num_allocs=1
---
> memusage.protocol/server.server-tcp.type.73.max_size=65566
> memusage.protocol/server.server-tcp.type.73.max_num_allocs=5
373c409
----
< memusage.performance/io-threads.brick4.type.40.max_size=2
< memusage.performance/io-threads.brick4.type.40.max_num_allocs=1
---
> memusage.performance/io-threads.brick4.type.40.max_size=7700
> memusage.performance/io-threads.brick4.type.40.max_num_allocs=10
----
< memusage.storage/posix.posix4.type.40.size=26
< memusage.storage/posix.posix4.type.40.num_allocs=1
< memusage.storage/posix.posix4.type.40.max_size=26
< memusage.storage/posix.posix4.type.40.max_num_allocs=1
---
> memusage.storage/posix.posix4.type.40.size=408239
> memusage.storage/posix.posix4.type.40.num_allocs=979
> memusage.storage/posix.posix4.type.40.max_size=410257
> memusage.storage/posix.posix4.type.40.max_num_allocs=981

==========
< memusage.protocol/server.server-tcp.type.9.size=2048
---
> memusage.protocol/server.server-tcp.type.9.size=16384
=========
< memusage.protocol/server.server-tcp.type.9.max_size=2048
< memusage.protocol/server.server-tcp.type.9.max_num_allocs=1
---
> memusage.protocol/server.server-tcp.type.9.max_size=24576
> memusage.protocol/server.server-tcp.type.9.max_num_allocs=4
==========
< memusage.protocol/server.server-tcp.type.17.size=96
< memusage.protocol/server.server-tcp.type.17.num_allocs=1
< memusage.protocol/server.server-tcp.type.17.max_size=96
< memusage.protocol/server.server-tcp.type.17.max_num_allocs=1
---
> memusage.protocol/server.server-tcp.type.17.size=64224
> memusage.protocol/server.server-tcp.type.17.num_allocs=669
> memusage.protocol/server.server-tcp.type.17.max_size=197952
> memusage.protocol/server.server-tcp.type.17.max_num_allocs=2062
===============

< memusage.protocol/server.server-tcp.type.2.max_size=912
< memusage.protocol/server.server-tcp.type.2.max_num_allocs=26
---
> memusage.protocol/server.server-tcp.type.2.max_size=1072
> memusage.protocol/server.server-tcp.type.2.max_num_allocs=34
==========
< memusage.protocol/server.server-tcp.type.3.max_size=512
< memusage.protocol/server.server-tcp.type.3.max_num_allocs=16
---
> memusage.protocol/server.server-tcp.type.3.max_size=832
> memusage.protocol/server.server-tcp.type.3.max_num_allocs=26
----
< memusage.protocol/server.server-tcp.type.4.max_size=240
< memusage.protocol/server.server-tcp.type.4.max_num_allocs=5
---
> memusage.protocol/server.server-tcp.type.4.max_size=480
> memusage.protocol/server.server-tcp.type.4.max_num_allocs=10

============
client
++++++++
< memusage.performance/stat-prefetch.statprefetch.type.40.size=2
< memusage.performance/stat-prefetch.statprefetch.type.40.num_allocs=1
< memusage.performance/stat-prefetch.statprefetch.type.40.max_size=4
< memusage.performance/stat-prefetch.statprefetch.type.40.max_num_allocs=2
---
> memusage.performance/stat-prefetch.statprefetch.type.40.size=49296
> memusage.performance/stat-prefetch.statprefetch.type.40.num_allocs=978
> memusage.performance/stat-prefetch.statprefetch.type.40.max_size=56982
> memusage.performance/stat-prefetch.statprefetch.type.40.max_num_allocs=986
> 


< memusage.performance/quick-read.quickread.type.68.size=56
< memusage.performance/quick-read.quickread.type.68.num_allocs=1
< memusage.performance/quick-read.quickread.type.68.max_size=56
< memusage.performance/quick-read.quickread.type.68.max_num_allocs=1
---
> memusage.performance/quick-read.quickread.type.68.size=1736
> memusage.performance/quick-read.quickread.type.68.num_allocs=31
> memusage.performance/quick-read.quickread.type.68.max_size=1960
> memusage.performance/quick-read.quickread.type.68.max_num_allocs=35


< memusage.performance/io-cache.iocache.type.40.size=0
< memusage.performance/io-cache.iocache.type.40.num_allocs=0
< memusage.performance/io-cache.iocache.type.40.max_size=2
< memusage.performance/io-cache.iocache.type.40.max_num_allocs=1
---
> memusage.performance/io-cache.iocache.type.40.size=20116
> memusage.performance/io-cache.iocache.type.40.num_allocs=769
> memusage.performance/io-cache.iocache.type.40.max_size=23448
> memusage.performance/io-cache.iocache.type.40.max_num_allocs=773

< memusage.performance/io-cache.iocache.type.74.size=160
< memusage.performance/io-cache.iocache.type.74.num_allocs=1
< memusage.performance/io-cache.iocache.type.74.max_size=160
< memusage.performance/io-cache.iocache.type.74.max_num_allocs=1
---
> memusage.performance/io-cache.iocache.type.74.size=5843360
> memusage.performance/io-cache.iocache.type.74.num_allocs=36521
> memusage.performance/io-cache.iocache.type.74.max_size=21842880
> memusage.performance/io-cache.iocache.type.74.max_num_allocs=136518
< memusage.cluster/distribute.distribute.type.40.max_size=2
< memusage.cluster/distribute.distribute.type.40.max_num_allocs=1
---
> memusage.cluster/distribute.distribute.type.40.max_size=7700
> memusage.cluster/distribute.distribute.type.40.max_num_allocs=18
< memusage.cluster/distribute.distribute.type.45.max_size=51
< memusage.cluster/distribute.distribute.type.45.max_num_allocs=2
---
> memusage.cluster/distribute.distribute.type.45.max_size=510
> memusage.cluster/distribute.distribute.type.45.max_num_allocs=20
714,715c922,923
< memusage.cluster/distribute.distribute.type.70.max_size=2480
< memusage.cluster/distribute.distribute.type.70.max_num_allocs=2
---
> memusage.cluster/distribute.distribute.type.70.max_size=48360
> memusage.cluster/distribute.distribute.type.70.max_num_allocs=39
< memusage.cluster/distribute.distribute.type.72.size=344
< memusage.cluster/distribute.distribute.type.72.num_allocs=6
< memusage.cluster/distribute.distribute.type.72.max_size=464
< memusage.cluster/distribute.distribute.type.72.max_num_allocs=7
---
> memusage.cluster/distribute.distribute.type.72.size=499544
> memusage.cluster/distribute.distribute.type.72.num_allocs=4166
> memusage.cluster/distribute.distribute.type.72.max_size=3458024
> memusage.cluster/distribute.distribute.type.72.max_num_allocs=28820


< memusage.protocol/client.ip-10-244-167-207-4.type.62.size=0
< memusage.protocol/client.ip-10-244-167-207-4.type.62.num_allocs=0
< memusage.protocol/client.ip-10-244-167-207-4.type.62.max_size=64
< memusage.protocol/client.ip-10-244-167-207-4.type.62.max_num_allocs=1
---
> memusage.protocol/client.ip-10-244-167-207-4.type.62.size=1179264
> memusage.protocol/client.ip-10-244-167-207-4.type.62.num_allocs=18426
> memusage.protocol/client.ip-10-244-167-207-4.type.62.max_size=1179328
> memusage.protocol/client.ip-10-244-167-207-4.type.62.max_num_allocs=18427


< memusage.protocol/client.ip-10-244-167-207-1.type.62.size=0
< memusage.protocol/client.ip-10-244-167-207-1.type.62.num_allocs=0
< memusage.protocol/client.ip-10-244-167-207-1.type.62.max_size=64
< memusage.protocol/client.ip-10-244-167-207-1.type.62.max_num_allocs=1
---
> memusage.protocol/client.ip-10-244-167-207-1.type.62.size=22272
> memusage.protocol/client.ip-10-244-167-207-1.type.62.num_allocs=348
> memusage.protocol/client.ip-10-244-167-207-1.type.62.max_size=22336
> memusage.protocol/client.ip-10-244-167-207-1.type.62.max_num_allocs=349

Comment 8 Lakshmipathi G 2010-09-01 03:03:36 UTC
< memusage.protocol/client.ip-10-202-98-15-1.type.38.size=0
< memusage.protocol/client.ip-10-202-98-15-1.type.38.num_allocs=0
< memusage.protocol/client.ip-10-202-98-15-1.type.38.max_size=206
< memusage.protocol/client.ip-10-202-98-15-1.type.38.max_num_allocs=1
---
> memusage.protocol/client.ip-10-202-98-15-1.type.38.size=129472897
> memusage.protocol/client.ip-10-202-98-15-1.type.38.num_allocs=16252
> memusage.protocol/client.ip-10-202-98-15-1.type.38.max_size=136827595
> memusage.protocol/client.ip-10-202-98-15-1.type.38.max_num_allocs=27002

Comment 9 Raghavendra G 2010-09-03 02:02:56 UTC
Many of the valgrind reports (at least the ones showing large memory leaks) are false positives. The code handles the cases that valgrind has reported; the cases in the 'possibly lost' sections correspond to memory occupied by inodes, their contexts, etc. I think this bug can be closed.

Comment 10 Vijay Bellur 2010-09-04 08:17:12 UTC
PATCH: http://patches.gluster.com/patch/4547 in master (rpc-transport/socket: fix memory leaks.)

Comment 11 Raghavendra G 2010-09-09 01:05:05 UTC
*** Bug 1452 has been marked as a duplicate of this bug. ***

Comment 12 Lakshmipathi G 2010-09-10 05:17:36 UTC
Report from last night's afr run:
---------
==18983== 6,850,550 bytes in 185,150 blocks are definitely lost in loss record 22 of 23
==18983==    at 0x4C1F9F6: malloc (vg_replace_malloc.c:149)
==18983==    by 0x59D2268: xdr_bytes (in /lib64/libc-2.7.so)
==18983==    by 0x52A8A93: xdr_gfs3_create_req (glusterfs3-xdr.c:1376)
==18983==    by 0x5097E71: xdr_to_generic (rpc-common.c:60)
==18983==    by 0x52AAC8E: xdr_to_create_req (glusterfs3.c:355)
==18983==    by 0x79B3A0C: server_create (server3_1-fops.c:2850)
==18983==    by 0x508D968: rpcsvc_handle_rpc_call (rpcsvc.c:990)
==18983==    by 0x508DCFB: rpcsvc_notify (rpcsvc.c:1085)
==18983==    by 0x5093A44: rpc_transport_notify (rpc-transport.c:1124)
==18983==    by 0x7DCD313: socket_event_poll_in (socket.c:1577)
==18983==    by 0x7DCD684: socket_event_handler (socket.c:1691)
==18983==    by 0x4E62EF3: event_dispatch_epoll_handler (event.c:812)
---------

Comment 13 Vijay Bellur 2010-09-14 07:49:04 UTC
PATCH: http://patches.gluster.com/patch/4768 in master (performance/io-cache: fix memory leak in ioc_mknod.)

Comment 14 Vijay Bellur 2010-09-15 04:06:40 UTC
PATCH: http://patches.gluster.com/patch/4787 in master (memory leak fixes.)

Comment 15 Vijay Bellur 2010-09-22 06:08:50 UTC
PATCH: http://patches.gluster.com/patch/4910 in master (performance/quick-read: fix memory leaks.)

Comment 16 Vijay Bellur 2010-09-28 13:20:05 UTC
Moving this to major as the critical leaks are fixed.

Comment 17 Lakshmipathi G 2010-10-11 08:30:31 UTC
Possibly lost:

==7273== 1,904 bytes in 7 blocks are possibly lost in loss record 12 of 17
==7273==    at 0x4C1ED1F: calloc (vg_replace_malloc.c:279)
==7273==    by 0x4010422: _dl_allocate_tls (in /lib64/ld-2.7.so)
==7273==    by 0x56C8B52: pthread_create@@GLIBC_2.2.5 (in /lib64/libpthread-2.7.so)
==7273==    by 0x4059E7: glusterfs_signals_setup (glusterfsd.c:1231)
==7273==    by 0x405B06: daemonize (glusterfsd.c:1277)
==7273==    by 0x405E2A: main (glusterfsd.c:1402)

---------------
==7286== 816 bytes in 3 blocks are possibly lost in loss record 12 of 20
==7286==    at 0x4C1ED1F: calloc (vg_replace_malloc.c:279)
==7286==    by 0x4010422: _dl_allocate_tls (in /lib64/ld-2.7.so)
==7286==    by 0x56C8B52: pthread_create@@GLIBC_2.2.5 (in /lib64/libpthread-2.7.so)
==7286==    by 0x6985F10: notify (fuse-bridge.c:3335)
==7286==    by 0x4E3F5A6: xlator_notify (xlator.c:1048)
==7286==    by 0x4E4F086: default_notify (defaults.c:1226)
==7286==    by 0x4E3F5A6: xlator_notify (xlator.c:1048)
==7286==    by 0x4E4F0C3: default_notify (defaults.c:1230)
==7286==    by 0x4E3F5A6: xlator_notify (xlator.c:1048)
==7286==    by 0x4E4F0C3: default_notify (defaults.c:1230)
==7286==    by 0x4E3F5A6: xlator_notify (xlator.c:1048)
==7286==    by 0x4E4F0C3: default_notify (defaults.c:1230)

Comment 18 Vijay Bellur 2010-10-11 11:31:23 UTC
PATCH: http://patches.gluster.com/patch/5424 in master (features/locks: free fdctx in release.)

Comment 19 Anand Avati 2010-11-15 09:03:49 UTC
PATCH: http://patches.gluster.com/patch/5695 in master (cluster/replicate: Fix memory leak in afr_fd_ctx_cleanup.)