Red Hat Bugzilla – Bug 518633
snmpd leaks memory
Last modified: 2014-02-10 18:04:00 EST
Description of problem:
snmpd cyclically leaks memory
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Run snmpd inside valgrind and walk the whole OID tree.
Actual results:
There is a memory leak in the valgrind output that grows over time.
Expected results:
No growing memory leaks.
Additional info:
See the valgrind output in the attachment.
Created attachment 358228 [details]
I have a cure for the leaks in ipAddressPrefixTable_container_load, ipv6ScopeZoneIndexTable_container_load, udpEndpointTable_container_load, ipDefaultRouterTable_container_load, ipIfStatsTable_container_load and tcpListenerTable_container_load.
The others seem to occur only during initialization of the agent and do not grow over time.
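For illustration, a minimal self-contained C sketch of the pattern behind these leaks and the cure (all names here are stand-ins, not the actual net-snmp code): each *_container_load() routine rebuilds a temporary entry list on every cache reload, and the fix is simply to free that list once its rows have been copied into the table container.

#include <stdio.h>
#include <stdlib.h>

/* Illustrative stand-in for one raw entry produced by the data-access
 * layer; the real net-snmp structures are more involved. */
struct entry {
    struct entry *next;
    int           row;
};

/* Stand-in for the data-access load: allocates a fresh entry list on
 * every cache refresh, like the kernel/proc scans the real loaders do. */
static struct entry *access_load(void)
{
    struct entry *head = NULL;
    for (int i = 0; i < 3; i++) {
        struct entry *e = calloc(1, sizeof(*e));
        if (e == NULL)
            break;
        e->row = i;
        e->next = head;
        head = e;
    }
    return head;
}

/* The cure applied to the leaking loaders: release the temporary list
 * after its rows have been copied.  Dropping this step leaks every
 * entry on every reload, which is the cyclic growth valgrind shows. */
static void access_free(struct entry *head)
{
    while (head != NULL) {
        struct entry *next = head->next;
        free(head);
        head = next;
    }
}

static void container_load(void)
{
    struct entry *tmp = access_load();
    for (struct entry *e = tmp; e != NULL; e = e->next)
        printf("copying row %d into the table container\n", e->row);
    access_free(tmp);   /* without this call, memory grows per walk */
}

int main(void)
{
    for (int i = 0; i < 18; i++)   /* ~18 snmpwalk iterations, as in the log */
        container_load();
    return 0;
}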
Created attachment 382479 [details]
valgrind log of snmpd from net-snmp-5.3.2.2-8.el5
Memory leaks are also present in net-snmp-5.3.2.2-8.el5. See the attachment for the valgrind log of snmpd after approximately 18 iterations of "snmpwalk -v 1 -c public localhost .1".
==9056== LEAK SUMMARY:
==9056==    definitely lost: 5,866 bytes in 47 blocks.
==9056==    indirectly lost: 80 bytes in 1 blocks.
==9056==      possibly lost: 3,496 bytes in 35 blocks.
==9056==    still reachable: 1,455,269 bytes in 24,094 blocks.
==9056==         suppressed: 0 bytes in 0 blocks.
Created attachment 382480 [details]
valgrind log of snmpd from net-snmp-5.3.2.2-8.el5 on x86_64
Memory leaks are also present in net-snmp-5.3.2.2-8.el5 on x86_64. See the attachment for the valgrind log of snmpd after approximately 70 iterations of "snmpwalk -v 1 -c public localhost .1".
This one has been fixed upstream in SVN rev. #17795
==9158== 15,750 bytes in 70 blocks are definitely lost in loss record 166 of 186
   snmp_clone_mem (in /usr/lib64/libnetsnmp.so.10.0.3)
   netsnmp_table_build_oid_from_index (in /usr/lib64/libnetsnmphelpers.so.10.0.3)
   netsnmp_call_handler (in /usr/lib64/libnetsnmpagent.so.10.0.3)
   table_helper_handler (in /usr/lib64/libnetsnmphelpers.so.10.0.3)
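For illustration, a minimal self-contained C sketch of this class of leak (the names, sizes and the stand-in clone_mem() are modeled on the trace above, not on the actual r17795 diff): memory duplicated with snmp_clone_mem() belongs to the caller, and if the table helper never releases the clone after building the response OID, every GETNEXT of a walk leaks one block.

#include <stdlib.h>
#include <string.h>

/* Stand-in with the same contract as snmplib's snmp_clone_mem():
 * it allocates a copy that the caller then owns. */
static int clone_mem(void **dst, const void *src, size_t len)
{
    *dst = malloc(len);
    if (*dst == NULL)
        return 1;
    memcpy(*dst, src, len);
    return 0;
}

/* Shaped like the trace: table_helper_handler ->
 * netsnmp_table_build_oid_from_index -> snmp_clone_mem.  The index
 * data is cloned while building the response OID; if nothing frees
 * the clone after the response is sent, 70 walk iterations leave
 * "70 blocks definitely lost", as in the valgrind record above. */
static void handle_one_request(const unsigned char *index_data, size_t len)
{
    void *clone = NULL;
    if (clone_mem(&clone, index_data, len) != 0)
        return;
    /* ... encode the response OID from the cloned index ... */
    free(clone);    /* the upstream fix ensures this release happens */
}

int main(void)
{
    const unsigned char idx[225] = { 0 };   /* 15,750 bytes / 70 blocks = 225 */
    for (int i = 0; i < 70; i++)
        handle_one_request(idx, sizeof(idx));
    return 0;
}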
This one remains a mystery; I can't reproduce it locally:
9,472 bytes in 148 blocks are definitely lost in loss record 164 of 186
   calloc (vg_replace_malloc.c:279)
   net_snmp_create_prefix_info (in /usr/lib64/libnetsnmpmibs.so.10.0.3)
   netsnmp_prefix_listen (in /usr/lib64/libnetsnmpmibs.so.10.0.3)
   start_thread (in /lib64/libpthread-2.5.so)
   clone (in /lib64/libc-2.5.so)
(In reply to comment #10)
> This one remains a mystery; I can't reproduce it locally:
> 9,472 bytes in 148 blocks are definitely lost in loss record 164 of 186
> calloc (vg_replace_malloc.c:279)
> net_snmp_create_prefix_info (in /usr/lib64/libnetsnmpmibs.so.10.0.3)
> netsnmp_prefix_listen (in /usr/lib64/libnetsnmpmibs.so.10.0.3)
> start_thread (in /lib64/libpthread-2.5.so)
> clone (in /lib64/libc-2.5.so)
Finally I am able to reproduce it - snmpd receives information about incoming ICMPv6 router advertisements from the kernel (via netlink) and sometimes leaks memory while processing them. You therefore just need a machine on a network segment served by an IPv6 router - our lab network seems to be fine. Check that the machine under test periodically (roughly once a minute) receives ICMPv6 router advertisements (tshark ip6) and let snmpd run for a few minutes to process several such messages. The leak then shows up in the valgrind output.
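For illustration, a minimal self-contained C sketch of the suspected pattern (the list handling is a stand-in for the real netlink prefix code, not the actual net-snmp source): a record is allocated for every advertisement, and when an already-tracked prefix merely needs an update, the freshly created record has to be freed or one block leaks per advertisement.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative stand-in for the agent's prefix list; the real
 * ipAddressPrefixTable netlink handler keeps more state. */
struct prefix_info {
    struct prefix_info *next;
    char                prefix[64];
};

static struct prefix_info *prefix_list;

/* Called for every ICMPv6 router advertisement delivered over
 * netlink.  Routers re-announce the same prefix about once a minute,
 * so the "already known" branch runs constantly. */
static void handle_router_advertisement(const char *prefix)
{
    struct prefix_info *fresh = calloc(1, sizeof(*fresh));
    if (fresh == NULL)
        return;
    snprintf(fresh->prefix, sizeof(fresh->prefix), "%s", prefix);

    for (struct prefix_info *p = prefix_list; p != NULL; p = p->next) {
        if (strcmp(p->prefix, prefix) == 0) {
            /* Prefix already tracked: only an update is needed.  The
             * leak was dropping `fresh` here without freeing it. */
            free(fresh);
            return;
        }
    }
    fresh->next = prefix_list;    /* genuinely new prefix: keep it */
    prefix_list = fresh;
}

int main(void)
{
    for (int i = 0; i < 148; i++)   /* ~148 advertisements, as in the log */
        handle_router_advertisement("2001:db8:1::/64");
    return 0;
}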
I'm seeing a leak as well in the udpEndpointTable (.1.3.6.1.2.1.7.7); walking it consistently grows the number of file descriptors. I found a patch at http://sourceforge.net/tracker/index.php?func=detail&aid=2822355&group_id=12694&atid=112694
(In reply to comment #14)
> I'm seeing a leak as well in the udpEndpointTable (.1.3.6.1.2.1.7.7); walking
> it consistently grows the number of file descriptors. I found a patch at
I believe I've fixed this leak too. You can look forward to RHEL 5.5.
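A minimal self-contained C sketch of the descriptor-leak symptom (the /proc path and the parsing here are illustrative assumptions; the linked SourceForge patch is the authoritative fix): if the table loader opens a /proc stream on every walk and any exit path skips fclose(), both the FILE object and its descriptor leak once per walk, matching the steadily growing fd count.

#include <stdio.h>

static int load_udp_endpoints(void)
{
    char line[256];
    FILE *in = fopen("/proc/net/udp", "r");
    if (in == NULL)
        return -1;

    while (fgets(line, sizeof(line), in) != NULL) {
        /* ... parse one endpoint row ... */
        if (line[0] == '\0') {      /* stand-in for a parse error */
            fclose(in);             /* the fix: close on early return too */
            return -1;
        }
    }
    fclose(in);                     /* normal path */
    return 0;
}

int main(void)
{
    for (int i = 0; i < 10; i++)    /* ten walks; the fd count stays flat */
        load_udp_endpoints();
    return 0;
}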
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.