Description of problem: There is a memory leak in the glusterfs server, reported by valgrind, whenever the volume status command is issued.

==15021== 456,634 (5,376 direct, 451,258 indirect) bytes in 56 blocks are definitely lost in loss record 489 of 498
==15021==    at 0x4A04A28: calloc (vg_replace_malloc.c:467)
==15021==    by 0x4C59910: __gf_calloc (mem-pool.c:145)
==15021==    by 0x4C1EF82: get_new_dict_full (dict.c:67)
==15021==    by 0x4C1F025: dict_new (dict.c:98)
==15021==    by 0x40A926: glusterfs_handle_brick_status (glusterfsd-mgmt.c:884)
==15021==    by 0x40ACF8: glusterfs_handle_rpc_msg (glusterfsd-mgmt.c:978)
==15021==    by 0x4EA60A8: rpcsvc_handle_rpc_call (rpcsvc.c:514)
==15021==    by 0x4EA644B: rpcsvc_notify (rpcsvc.c:610)
==15021==    by 0x4EABDA7: rpc_transport_notify (rpc-transport.c:498)
==15021==    by 0x999626F: socket_event_poll_in (socket.c:1686)
==15021==    by 0x99967F3: socket_event_handler (socket.c:1801)
==15021==    by 0x4C58D9F: event_dispatch_epoll_handler (event.c:794)

==15021== 142,662 (560 direct, 142,102 indirect) bytes in 7 blocks are definitely lost in loss record 484 of 498
==15021==    at 0x4A04A28: calloc (vg_replace_malloc.c:467)
==15021==    by 0x4C59910: __gf_calloc (mem-pool.c:145)
==15021==    by 0x4C1F730: _dict_set (dict.c:275)
==15021==    by 0x4C1F925: dict_set (dict.c:324)
==15021==    by 0x4C233CD: dict_set_str (dict.c:2116)
==15021==    by 0x4C5FDE4: gf_proc_dump_mempool_info_to_dict (statedump.c:325)
==15021==    by 0x40A988: glusterfs_handle_brick_status (glusterfsd-mgmt.c:889)
==15021==    by 0x40ACF8: glusterfs_handle_rpc_msg (glusterfsd-mgmt.c:978)
==15021==    by 0x4EA60A8: rpcsvc_handle_rpc_call (rpcsvc.c:514)
==15021==    by 0x4EA644B: rpcsvc_notify (rpcsvc.c:610)
==15021==    by 0x4EABDA7: rpc_transport_notify (rpc-transport.c:498)
==15021==    by 0x999626F: socket_event_poll_in (socket.c:1686)

==15021== 101,876 (400 direct, 101,476 indirect) bytes in 5 blocks are definitely lost in loss record 482 of 498
==15021==    at 0x4A04A28: calloc (vg_replace_malloc.c:467)
==15021==    by 0x4C59910: __gf_calloc (mem-pool.c:145)
==15021==    by 0x4C1F730: _dict_set (dict.c:275)
==15021==    by 0x4C1F925: dict_set (dict.c:324)
==15021==    by 0x4C22B9B: dict_set_int32 (dict.c:1743)
==15021==    by 0x4C5FE4D: gf_proc_dump_mempool_info_to_dict (statedump.c:331)
==15021==    by 0x40A988: glusterfs_handle_brick_status (glusterfsd-mgmt.c:889)
==15021==    by 0x40ACF8: glusterfs_handle_rpc_msg (glusterfsd-mgmt.c:978)
==15021==    by 0x4EA60A8: rpcsvc_handle_rpc_call (rpcsvc.c:514)
==15021==    by 0x4EA644B: rpcsvc_notify (rpcsvc.c:610)
==15021==    by 0x4EABDA7: rpc_transport_notify (rpc-transport.c:498)
==15021==    by 0x999626F: socket_event_poll_in (socket.c:1686)

==15021== 61,132 (240 direct, 60,892 indirect) bytes in 3 blocks are definitely lost in loss record 465 of 498
==15021==    at 0x4A04A28: calloc (vg_replace_malloc.c:467)
==15021==    by 0x4C59910: __gf_calloc (mem-pool.c:145)
==15021==    by 0x4C1F730: _dict_set (dict.c:275)
==15021==    by 0x4C1F925: dict_set (dict.c:324)
==15021==    by 0x4C22B9B: dict_set_int32 (dict.c:1743)
==15021==    by 0x4C5FA85: gf_proc_dump_mem_info_to_dict (statedump.c:253)
==15021==    by 0x40A975: glusterfs_handle_brick_status (glusterfsd-mgmt.c:888)
==15021==    by 0x40ACF8: glusterfs_handle_rpc_msg (glusterfsd-mgmt.c:978)
==15021==    by 0x4EA60A8: rpcsvc_handle_rpc_call (rpcsvc.c:514)
==15021==    by 0x4EA644B: rpcsvc_notify (rpcsvc.c:610)
==15021==    by 0x4EABDA7: rpc_transport_notify (rpc-transport.c:498)
==15021==    by 0x999626F: socket_event_poll_in (socket.c:1686)

==15021== 9,497 (80 direct, 9,417 indirect) bytes in 1 blocks are definitely lost in loss record 408 of 498
==15021==    at 0x4A04A28: calloc (vg_replace_malloc.c:467)
==15021==    by 0x4C59910: __gf_calloc (mem-pool.c:145)
==15021==    by 0x4C1F730: _dict_set (dict.c:275)
==15021==    by 0x4C1F925: dict_set (dict.c:324)
==15021==    by 0x4C22B9B: dict_set_int32 (dict.c:1743)
==15021==    by 0x4C56938: fdtable_dump_to_dict (fd.c:1129)
==15021==    by 0xBF41228: server_fd_to_dict (server.c:209)
==15021==    by 0x40A9EB: glusterfs_handle_brick_status (glusterfsd-mgmt.c:901)
==15021==    by 0x40ACF8: glusterfs_handle_rpc_msg (glusterfsd-mgmt.c:978)
==15021==    by 0x4EA60A8: rpcsvc_handle_rpc_call (rpcsvc.c:514)
==15021==    by 0x4EA644B: rpcsvc_notify (rpcsvc.c:610)
==15021==    by 0x4EABDA7: rpc_transport_notify (rpc-transport.c:498)

Version-Release number of selected component (if applicable):

How reproducible:
Always.

Steps to Reproduce:
1. Create and start a volume.
2. Run some tests on the volume.
3. Run the volume status command.

Actual results:
The glusterfs process leaks memory.

Expected results:
The glusterfs process should not leak memory.

Additional info:
CHANGE: http://review.gluster.com/2799 (glusterfsd: unref the dict and use dict_set_dynstr to avoid memleak) merged in master by Vijay Bellur (vijay)
CHANGE: http://review.gluster.com/2808 (glusterfsd: unref the dict and free the memory to avoid memleak) merged in master by Vijay Bellur (vijay)
CHANGE: http://review.gluster.com/2886 (protocol/client: Free readdirp xdr leak) merged in master by Vijay Bellur (vijay)
Verified with glusterfs-3.3.0qa40: these memory leaks are no longer seen.