Description of problem:
xmlrpc-c library version 1.16.24 has a memory leak.
This leak is evident when running a simple XML-RPC client under valgrind.
Version-Release number of selected component (if applicable):
xmlrpc-c 1.16.24

How reproducible:
Write a simple XML-RPC client and server with a method that takes an array of structures as a parameter. Start the server and perform an XML-RPC call from the client under valgrind; valgrind will report the memory leaked by the library.
Steps to Reproduce:
1. Write an XML-RPC server with a method that takes an array of structures as a parameter.
2. Write an XML-RPC client, using xmlrpc-c-client, that calls the defined method.
3. Run the client under valgrind with the --leak-check=yes option.
4. Inspect the valgrind output.
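The steps above might be sketched as the following client. This is a sketch only: the server URL (http://localhost:8080/RPC2), the method name (sample.method), and the struct members ("a", "b") are placeholders, not taken from the report, and error handling is reduced to a single check.

```c
/* Hypothetical reproducer sketch for the leak report above.
 * Build (library names are the usual ones; paths may differ):
 *   gcc client.c -o client -lxmlrpc_client -lxmlrpc -lxmlrpc_util
 * Run:
 *   valgrind --leak-check=yes ./client
 */
#include <stdio.h>
#include <stdlib.h>
#include <xmlrpc-c/base.h>
#include <xmlrpc-c/client.h>

int main(void) {
    xmlrpc_env env;
    xmlrpc_value * resultP;

    xmlrpc_env_init(&env);
    xmlrpc_client_init2(&env, XMLRPC_CLIENT_NO_FLAGS,
                        "leak-test", "1.0", NULL, 0);

    /* The format string "(({s:i}{s:i}))" builds one parameter that is an
     * array of two structures, matching the "array of structures" case
     * described in the report. */
    resultP = xmlrpc_client_call(&env, "http://localhost:8080/RPC2",
                                 "sample.method", "(({s:i}{s:i}))",
                                 "a", (xmlrpc_int32) 1,
                                 "b", (xmlrpc_int32) 2);
    if (env.fault_occurred)
        fprintf(stderr, "XML-RPC fault: %s (%d)\n",
                env.fault_string, env.fault_code);
    else
        xmlrpc_DECREF(resultP);   /* release the caller's reference */

    xmlrpc_env_clean(&env);
    xmlrpc_client_cleanup();
    return 0;
}
```

Note that the client itself releases its result with xmlrpc_DECREF, so any "definitely lost" blocks valgrind still reports should originate inside the library.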
Actual results:
valgrind reports the loss of memory allocated by the xmlrpc_createXmlrpcValue function. Here is a sample output:
==11383== 1,512 (96 direct, 1,416 indirect) bytes in 2 blocks are definitely lost in loss record 1,916 of 2,075
==11383== at 0x4A0646F: malloc (vg_replace_malloc.c:236)
==11383== by 0x4C1C142: xmlrpc_createXmlrpcValue (xmlrpc_data.c:320)
==11383== by 0x4C1EB71: xmlrpc_array_new (xmlrpc_array.c:191)
==11383== by 0x4C1FAAD: getValue (xmlrpc_build.c:109)
==11383== by 0x4C2010E: xmlrpc_build_value_va (xmlrpc_build.c:376)
==11383== by 0x38ED2041BD: xmlrpc_client_call_asynch (in /usr/lib64/libxmlrpc_client.so.3.16)
Expected results:
valgrind does not report any memory leakage.
Additional info:
Performing this test with the latest stable xmlrpc-c version, 1.25.10, does not show the memory leak.
Thanks for the report, Michele.
Unfortunately, I couldn't reproduce the issue yet (no leaks). Could you please provide a reproducer?
diff --git a/src/xmlrpc_client_global.c b/src/xmlrpc_client_global.c
index 7beba14..9725ae6 100644
@@ -319,6 +319,7 @@ xmlrpc_client_call_asynch(const char * const serverUrl,
         serverUrl, methodName, responseHandler, userData,
+        (*responseHandler)(serverUrl, methodName, NULL, userData, &env, NULL);
I can't confirm this patch without the reproducer.
I am not able to reproduce this memory leak either.
Because this bug has been in NEEDINFO for over 6 months with no reply from the reporter, I am closing it as CANTFIX.
Michele: If you post a working reproducer then I may be able to reopen the bug.