Bug 1380490 - gssproxy memory leak (ach) in gp_accept_sec_context, in src/gp_rpc_accept_sec_context.c
Summary: gssproxy memory leak (ach) in gp_accept_sec_context, in src/gp_rpc_accept_sec...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: gssproxy
Version: 7.4
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Robbie Harwood
QA Contact: Abhijeet Kasurde
URL: https://pagure.io/gssproxy/pull-reque...
Whiteboard:
Depends On:
Blocks: 1298243 1399979
 
Reported: 2016-09-29 18:45 UTC by Thomas Gardner
Modified: 2020-09-10 09:49 UTC
CC: 8 users

Fixed In Version: gssproxy-0.6.2-4.el7
Doc Type: No Doc Update
Doc Text:
Fixed several memory leaks in gssproxy. (Group 1379005, 1379482, 1379616, 1380490 together as a single line item.)
Clone Of:
Environment:
Last Closed: 2017-08-01 20:55:26 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:2033 0 normal SHIPPED_LIVE gssproxy bug fix update 2017-08-01 18:34:35 UTC

Description Thomas Gardner 2016-09-29 18:45:15 UTC
Description of problem:

Ugh.  I can't believe how long it took me to track this one down.
Especially since it turned out to be just as simple as the other
ones I've filed on this recently.  Went down a rat-hole I had no
business even looking in, much less climbing down into and walking
around in.

OK, here it is:  In gp_accept_sec_context, in
src/gp_rpc_accept_sec_context.c, the local variable "ach" holds
pointers that get set to memory allocated via gp_add_krb5_creds,
and I can't see where any of that memory is freed.  I think that's
probably what's responsible for this:

==24113== 39,450,828 (148,416 direct, 39,302,412 indirect) bytes in 4,638 blocks are definitely lost in loss record 85 of 85
==24113==    at 0x4C2B974: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==24113==    by 0x65DEF0F: gss_acquire_cred_from (g_acquire_cred.c:170)
==24113==    by 0x408476: gp_add_krb5_creds (gp_creds.c:469)
==24113==    by 0x40DD15: gp_accept_sec_context (gp_rpc_accept_sec_context.c:79)
==24113==    by 0x40ADC0: gp_rpc_execute (gp_rpc_process.c:343)
==24113==    by 0x40ADC0: gp_rpc_process_call (gp_rpc_process.c:400)
==24113==    by 0x4073CB: gp_handle_query (gp_workers.c:447)
==24113==    by 0x4073CB: gp_worker_main (gp_workers.c:401)
==24113==    by 0x6822DC4: start_thread (pthread_create.c:308)
==24113==    by 0x6B2DCEC: clone (clone.S:113)

part of this valgrind report that I've been working my way through.
This is by far the biggest of these leaks.  Please note that this
was after only about a 3 hour run, so the process is losing over
10 MB an hour on this one leak alone.  This would be a great one
to try to slip in through whatever back door you can for this poor
guy to try out, if such a thing is possible.
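The shape of the leak can be sketched in a few lines of standalone C.  This is a mock, not gssproxy code: the struct, the helper, and the handler names here are hypothetical stand-ins for the real gss_cred_id_t handle that gp_add_krb5_creds acquires (via gss_acquire_cred_from, per the valgrind stack above), which in gssproxy proper would be released with gss_release_cred rather than free.

```c
#include <stdlib.h>

/* Hypothetical stand-in for the credential handle that "ach" points at. */
struct mock_cred_handle {
    void *cred;   /* stands in for the gss_cred_id_t acquired in gp_add_krb5_creds */
};

/* Mock of gp_add_krb5_creds(): allocates storage the caller must release. */
int mock_add_krb5_creds(struct mock_cred_handle *ach)
{
    ach->cred = calloc(1, 32768);   /* the calloc valgrind flags in g_acquire_cred.c */
    return ach->cred ? 0 : -1;
}

/* Leaky shape: the local handle goes out of scope without a release,
 * so every call loses the allocation ("definitely lost" in valgrind). */
int handler_leaky(void)
{
    struct mock_cred_handle ach = { 0 };
    if (mock_add_krb5_creds(&ach) != 0)
        return -1;
    /* ... accept-sec-context work would happen here ... */
    return 0;   /* ach.cred is never freed */
}

/* Fixed shape: a single cleanup path releases whatever was acquired,
 * on success and error alike. */
int handler_fixed(void)
{
    struct mock_cred_handle ach = { 0 };
    int ret = mock_add_krb5_creds(&ach);
    if (ret == 0) {
        /* ... accept-sec-context work would happen here ... */
    }
    free(ach.cred);   /* in gssproxy this would be gss_release_cred() */
    ach.cred = NULL;
    return ret;
}
```

Run the leaky handler in a loop under valgrind --leak-check=full and the lost bytes grow with every iteration; the fixed handler reports nothing, because the one cleanup path runs regardless of how the function exits.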

Version-Release number of selected component (if applicable):

Found same code in latest version I picked up from brew (0.4.1-13.el7).

How reproducible:

100%, just look there, and there it will be (until someone changes it,
of course).

Steps to Reproduce:
1.  Look at the code.
2.  See the bug.
3.  Slap self in forehead.  Banging head on table, weeping and wailing
are all still optional.

Actual results:

HUMONGOUS memory leak.

Expected results:

No memory leak at all.  Not even a trickle.  OK, maybe a trickle might
be acceptable...  :-)

Additional info:

There are tons of little ones in this report.  I'm debating whether
I should track some of them down or not.  Just because they didn't
give this fellow a big problem with his usage model doesn't mean they
won't cause a huge problem for someone else who happens to be using
it slightly differently.

Comment 9 Abhijeet Kasurde 2017-05-22 14:05:14 UTC
Verified using GSSProxy :: gssproxy-0.7.0-3.el7.x86_64

Marking BZ as VERIFIED (SanityOnly).

Comment 10 errata-xmlrpc 2017-08-01 20:55:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2033

