Bug 796186 - [glusterfs-3.30qa23]: memleak in glusterfs server whenever volume status command is issued
Summary: [glusterfs-3.30qa23]: memleak in glusterfs server whenever volume status command is issued
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: core
Version: mainline
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Raghavendra Bhat
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 817967
 
Reported: 2012-02-22 12:54 UTC by Raghavendra Bhat
Modified: 2013-07-24 17:36 UTC

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-24 17:36:26 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: glusterfs-3.3.0qa40
Embargoed:



Description Raghavendra Bhat 2012-02-22 12:54:50 UTC
Description of problem:
Valgrind reports memory leaks in the glusterfs server (brick) process whenever the volume status command is issued.

==15021== 456,634 (5,376 direct, 451,258 indirect) bytes in 56 blocks are definitely lost in loss record 489 of 498
==15021==    at 0x4A04A28: calloc (vg_replace_malloc.c:467)
==15021==    by 0x4C59910: __gf_calloc (mem-pool.c:145)
==15021==    by 0x4C1EF82: get_new_dict_full (dict.c:67)
==15021==    by 0x4C1F025: dict_new (dict.c:98)
==15021==    by 0x40A926: glusterfs_handle_brick_status (glusterfsd-mgmt.c:884)
==15021==    by 0x40ACF8: glusterfs_handle_rpc_msg (glusterfsd-mgmt.c:978)
==15021==    by 0x4EA60A8: rpcsvc_handle_rpc_call (rpcsvc.c:514)
==15021==    by 0x4EA644B: rpcsvc_notify (rpcsvc.c:610)
==15021==    by 0x4EABDA7: rpc_transport_notify (rpc-transport.c:498)
==15021==    by 0x999626F: socket_event_poll_in (socket.c:1686)
==15021==    by 0x99967F3: socket_event_handler (socket.c:1801)
==15021==    by 0x4C58D9F: event_dispatch_epoll_handler (event.c:794)

==15021== 142,662 (560 direct, 142,102 indirect) bytes in 7 blocks are definitely lost in loss record 484 of 498
==15021==    at 0x4A04A28: calloc (vg_replace_malloc.c:467)
==15021==    by 0x4C59910: __gf_calloc (mem-pool.c:145)
==15021==    by 0x4C1F730: _dict_set (dict.c:275)
==15021==    by 0x4C1F925: dict_set (dict.c:324)
==15021==    by 0x4C233CD: dict_set_str (dict.c:2116)
==15021==    by 0x4C5FDE4: gf_proc_dump_mempool_info_to_dict (statedump.c:325)
==15021==    by 0x40A988: glusterfs_handle_brick_status (glusterfsd-mgmt.c:889)
==15021==    by 0x40ACF8: glusterfs_handle_rpc_msg (glusterfsd-mgmt.c:978)
==15021==    by 0x4EA60A8: rpcsvc_handle_rpc_call (rpcsvc.c:514)
==15021==    by 0x4EA644B: rpcsvc_notify (rpcsvc.c:610)
==15021==    by 0x4EABDA7: rpc_transport_notify (rpc-transport.c:498)
==15021==    by 0x999626F: socket_event_poll_in (socket.c:1686)

==15021== 101,876 (400 direct, 101,476 indirect) bytes in 5 blocks are definitely lost in loss record 482 of 498
==15021==    at 0x4A04A28: calloc (vg_replace_malloc.c:467)
==15021==    by 0x4C59910: __gf_calloc (mem-pool.c:145)
==15021==    by 0x4C1F730: _dict_set (dict.c:275)
==15021==    by 0x4C1F925: dict_set (dict.c:324)
==15021==    by 0x4C22B9B: dict_set_int32 (dict.c:1743)
==15021==    by 0x4C5FE4D: gf_proc_dump_mempool_info_to_dict (statedump.c:331)
==15021==    by 0x40A988: glusterfs_handle_brick_status (glusterfsd-mgmt.c:889)
==15021==    by 0x40ACF8: glusterfs_handle_rpc_msg (glusterfsd-mgmt.c:978)
==15021==    by 0x4EA60A8: rpcsvc_handle_rpc_call (rpcsvc.c:514)
==15021==    by 0x4EA644B: rpcsvc_notify (rpcsvc.c:610)
==15021==    by 0x4EABDA7: rpc_transport_notify (rpc-transport.c:498)
==15021==    by 0x999626F: socket_event_poll_in (socket.c:1686)

==15021== 61,132 (240 direct, 60,892 indirect) bytes in 3 blocks are definitely lost in loss record 465 of 498
==15021==    at 0x4A04A28: calloc (vg_replace_malloc.c:467)
==15021==    by 0x4C59910: __gf_calloc (mem-pool.c:145)
==15021==    by 0x4C1F730: _dict_set (dict.c:275)
==15021==    by 0x4C1F925: dict_set (dict.c:324)
==15021==    by 0x4C22B9B: dict_set_int32 (dict.c:1743)
==15021==    by 0x4C5FA85: gf_proc_dump_mem_info_to_dict (statedump.c:253)
==15021==    by 0x40A975: glusterfs_handle_brick_status (glusterfsd-mgmt.c:888)
==15021==    by 0x40ACF8: glusterfs_handle_rpc_msg (glusterfsd-mgmt.c:978)
==15021==    by 0x4EA60A8: rpcsvc_handle_rpc_call (rpcsvc.c:514)
==15021==    by 0x4EA644B: rpcsvc_notify (rpcsvc.c:610)
==15021==    by 0x4EABDA7: rpc_transport_notify (rpc-transport.c:498)
==15021==    by 0x999626F: socket_event_poll_in (socket.c:1686)

==15021== 9,497 (80 direct, 9,417 indirect) bytes in 1 blocks are definitely lost in loss record 408 of 498
==15021==    at 0x4A04A28: calloc (vg_replace_malloc.c:467)
==15021==    by 0x4C59910: __gf_calloc (mem-pool.c:145)
==15021==    by 0x4C1F730: _dict_set (dict.c:275)
==15021==    by 0x4C1F925: dict_set (dict.c:324)
==15021==    by 0x4C22B9B: dict_set_int32 (dict.c:1743)
==15021==    by 0x4C56938: fdtable_dump_to_dict (fd.c:1129)
==15021==    by 0xBF41228: server_fd_to_dict (server.c:209)
==15021==    by 0x40A9EB: glusterfs_handle_brick_status (glusterfsd-mgmt.c:901)
==15021==    by 0x40ACF8: glusterfs_handle_rpc_msg (glusterfsd-mgmt.c:978)
==15021==    by 0x4EA60A8: rpcsvc_handle_rpc_call (rpcsvc.c:514)
==15021==    by 0x4EA644B: rpcsvc_notify (rpcsvc.c:610)
==15021==    by 0x4EABDA7: rpc_transport_notify (rpc-transport.c:498)



Version-Release number of selected component (if applicable):


How reproducible:

always

Steps to Reproduce:
1. Create and start a volume.
2. Run some tests on the volume.
3. Issue the volume status command.
  
Actual results:

There is a memory leak in the glusterfs server process on every volume status request (see the valgrind output above).

Expected results:

The glusterfs process should not leak memory.


Additional info:
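
For reference, the pattern behind the largest loss record can be sketched as below. This is only an illustration of the leak, not the actual glusterfsd-mgmt.c code; fill_status_reply() and submit_status_reply() are hypothetical stand-ins, while dict_new()/dict_unref() are the real libglusterfs dict API (header path depends on the build tree).

        /* Sketch: the dict allocated for the brick-status reply is filled and
         * serialized, but never released, so its refcount stays at 1 and every
         * "volume status" request leaks the dict plus everything stored in it. */
        int
        handle_brick_status_sketch (void)
        {
                dict_t *output = dict_new ();    /* refcount == 1 */
                if (!output)
                        return -1;

                fill_status_reply (output);      /* dict_set_* calls add more allocations */
                submit_status_reply (output);    /* reply sent, but the dict is still referenced */

                dict_unref (output);             /* the missing step: drop our reference so the
                                                    dict and its members can be freed */
                return 0;
        }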

Comment 1 Anand Avati 2012-02-23 05:21:35 UTC
CHANGE: http://review.gluster.com/2799 (glusterfsd: unref the dict and use dict_set_dynstr to avoid memleak) merged in master by Vijay Bellur (vijay)
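
To illustrate the dict_set_dynstr part of the change (a sketch only, not the merged patch; the key name is made up and "output" stands for the reply dict): dict_set_str() stores the caller's pointer without taking ownership, so a gf_strdup()'d value passed to it is never freed, whereas dict_set_dynstr() hands ownership of the allocated string to the dict, which frees it when the dict is destroyed.

        char *pool_name = gf_strdup ("some-mempool-name");

        /* Leaky: the dict keeps the pointer but never frees it. */
        dict_set_str (output, "pool.name", pool_name);

        /* Fixed: the dict takes ownership and frees the string when it is
         * destroyed (e.g. on the final dict_unref). */
        dict_set_dynstr (output, "pool.name", pool_name);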

Comment 2 Anand Avati 2012-02-27 10:17:39 UTC
CHANGE: http://review.gluster.com/2808 (glusterfsd: unref the dict and free the memory to avoid memleak) merged in master by Vijay Bellur (vijay)

Comment 3 Anand Avati 2012-03-07 17:20:12 UTC
CHANGE: http://review.gluster.com/2886 (protocol/client: Free readdirp xdr leak) merged in master by Vijay Bellur (vijay)

Comment 4 Raghavendra Bhat 2012-05-09 10:32:45 UTC
Verified with glusterfs-3.3.0qa40; these memory leaks are no longer seen.

