Description of problem:

Ugh. I can't believe how long it took me to track this one down. Especially since it turned out to be just as simple as the other ones I've filed on this recently. Went down a rat-hole I had no business even looking in, much less climbing down into and walking around and around and around in.

OK, here it is: in gp_accept_sec_context(), in src/gp_rpc_accept_sec_context.c, we're using the local variable "ach" with pointers that get set to memory allocated via gp_add_krb5_creds(), and I can't see where any of that memory is ever freed. I think that's what's responsible for this:

==24113== 39,450,828 (148,416 direct, 39,302,412 indirect) bytes in 4,638 blocks are definitely lost in loss record 85 of 85
==24113==    at 0x4C2B974: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==24113==    by 0x65DEF0F: gss_acquire_cred_from (g_acquire_cred.c:170)
==24113==    by 0x408476: gp_add_krb5_creds (gp_creds.c:469)
==24113==    by 0x40DD15: gp_accept_sec_context (gp_rpc_accept_sec_context.c:79)
==24113==    by 0x40ADC0: gp_rpc_execute (gp_rpc_process.c:343)
==24113==    by 0x40ADC0: gp_rpc_process_call (gp_rpc_process.c:400)
==24113==    by 0x4073CB: gp_handle_query (gp_workers.c:447)
==24113==    by 0x4073CB: gp_worker_main (gp_workers.c:401)
==24113==    by 0x6822DC4: start_thread (pthread_create.c:308)
==24113==    by 0x6B2DCEC: clone (clone.S:113)

That's from the valgrind report I've been working my way through, and it is by far the biggest of these leaks. Please note this was after only about a 3-hour run, so this thing is losing over 10M an hour on this one leak alone. This would be a great one to try to slip in through whatever back door you can for this poor guy to try out, if such a thing is possible.

Version-Release number of selected component (if applicable):
The same code is in the latest version I picked up from brew (0.4.1-13.el7).

How reproducible:
100%; just look there, and there it will be (until someone changes it, of course).

Steps to Reproduce:
1. Look at the code.
2. See the bug.
3. Slap self in forehead. (Banging head on table, weeping, and wailing are all still optional.)

Actual results:
HUMONGOUS memory leak.

Expected results:
No memory leak at all. Not even a trickle. OK, maybe a trickle might be acceptable... :-)

Additional info:
There are tons of little ones in this report. I'm debating whether I should track some of them down or not. Just because they didn't give this fellow a big problem with his usage model doesn't mean they won't cause a huge problem for someone else who happens to be using it slightly differently.
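For what it's worth, here is a minimal, self-contained sketch of the cleanup pattern this report is asking for; it is not the actual gssproxy patch. It assumes the leaked handle behind "ach" is a gss_cred_id_t filled in via gss_acquire_cred_from() (as the valgrind stack suggests), and the helper name do_accept_step() is just a stand-in for the real gp_accept_sec_context() logic. The point is simply that every exit path needs a matching gss_release_cred():

/*
 * Sketch only: acquire an acceptor credential, then make sure the
 * handle is released on every path out of the function.  Names here
 * (do_accept_step) are hypothetical; gssproxy acquires its credential
 * through gp_add_krb5_creds() -> gss_acquire_cred_from() instead.
 */
#include <gssapi/gssapi.h>
#include <stdio.h>

static int do_accept_step(void)
{
    OM_uint32 maj, min;
    gss_cred_id_t ach = GSS_C_NO_CREDENTIAL;   /* mirrors the local "ach" */
    int ret = -1;

    /* Acquire an acceptor credential for this request. */
    maj = gss_acquire_cred(&min, GSS_C_NO_NAME, GSS_C_INDEFINITE,
                           GSS_C_NO_OID_SET, GSS_C_ACCEPT,
                           &ach, NULL, NULL);
    if (GSS_ERROR(maj)) {
        fprintf(stderr, "gss_acquire_cred failed: %u/%u\n",
                (unsigned int)maj, (unsigned int)min);
        goto done;
    }

    /* ... use the credential with gss_accept_sec_context() here ... */

    ret = 0;

done:
    /* The missing step in the leaky code: release the per-request
     * credential before returning, or the allocation is lost for good. */
    if (ach != GSS_C_NO_CREDENTIAL) {
        gss_release_cred(&min, &ach);
    }
    return ret;
}

int main(void)
{
    return do_accept_step();
}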
Verified using GSSProxy :: gssproxy-0.7.0-3.el7.x86_64. Marking BZ as verified as SanityOnly.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:2033