Bug 1077897 - Memory leak with proxy auth control
Summary: Memory leak with proxy auth control
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: 389-ds-base
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Noriko Hosoi
QA Contact: Viktor Ashirov
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-03-18 19:31 UTC by Noriko Hosoi
Modified: 2020-09-13 21:00 UTC
CC List: 4 users

Fixed In Version: 389-ds-base-1.3.3.1-1.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-05 09:34:06 UTC
Target Upstream Version:
Embargoed:


Attachments
valgrind output (61.77 KB, text/plain), 2015-01-09 16:50 UTC, Viktor Ashirov


Links
Github: 389ds/389-ds-base issue 1075 (last updated 2020-09-13 21:00:06 UTC)
Red Hat Product Errata: RHSA-2015:0416 (SHIPPED_LIVE) Important: 389-ds-base security, bug fix, and enhancement update (2015-03-05 14:26:33 UTC)

Description Noriko Hosoi 2014-03-18 19:31:12 UTC
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/389/ticket/47743

I've been running some load tests against version 389-ds-base-1.2.11.15-31.el6_5.x86_64 on CentOS 6.5. When looking at the VmSize of the ns-slapd process, it appears to grow rather than reach a plateau. After some modifications to the load test, it appears that memory usage stays flat if we remove the portion that performs the proxied authorization control.

I've created a simplified version of the data that exhibits the problem. The backup LDIF is attached to the case. It consists of 3 users and 2 ACIs.

The load is the following pseudocode (a rough Python sketch follows below):
create a simple connection, binding as uid=AUser,o=Test.com
In 30 threads, repeat 1000 times each (i.e. 30000 total searches):
    using this single connection, proxy as uid=PUser,o=Test.com, and search for uid=SUser
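
A rough reconstruction of this load in Python with python-ldap is sketched below. It is only an illustration of the pseudocode above, not the original load tool: the server URL, the bind password, and the use of python-ldap itself are assumptions.

# Hypothetical sketch of the load described above; the DNs and suffix come from
# this report, while the URL and password are placeholders.
import threading
import ldap
from ldap.controls.simple import ProxyAuthzControl

conn = ldap.initialize("ldap://localhost:389")           # assumed local test instance
conn.simple_bind_s("uid=AUser,o=Test.com", "password")   # placeholder password

# Proxied authorization control; the authorization identity uses the "dn:" form.
proxy = ProxyAuthzControl(criticality=True, authzId="dn:uid=PUser,o=Test.com")

def worker():
    # 1000 proxied searches per thread over the single shared connection
    for _ in range(1000):
        conn.search_ext_s("o=Test.com", ldap.SCOPE_SUBTREE, "(uid=SUser)",
                          serverctrls=[proxy])

threads = [threading.Thread(target=worker) for _ in range(30)]
for t in threads:
    t.start()
for t in threads:
    t.join()
conn.unbind_s()

In practice python-ldap serializes operations issued on a shared connection object, so the 30 threads mainly multiply the operation count (30000 searches in total) rather than run truly in parallel, which should still exercise the proxied-authorization code path the same number of times.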

In the end, the access log would look something like:
[13/Mar/2014:16:32:57 +0000] conn=5 op=29999 SRCH base="o=test.com" scope=2 filter="(uid=SUser)" attrs=ALL authzid="uid=puser,o=test.com"
[13/Mar/2014:16:32:57 +0000] conn=5 op=29999 RESULT err=0 tag=101 nentries=1 etime=0.001000
[13/Mar/2014:16:32:57 +0000] conn=5 op=30000 SRCH base="o=test.com" scope=2 filter="(uid=SUser)" attrs=ALL authzid="uid=puser,o=test.com"
[13/Mar/2014:16:32:57 +0000] conn=5 op=30000 RESULT err=0 tag=101 nentries=1 etime=0.001000
[13/Mar/2014:16:34:04 +0000] conn=5 op=30002 UNBIND
[13/Mar/2014:16:34:04 +0000] conn=5 op=30002 fd=66 closed - U1

You can see the memory grow with this command:
while :; do echo "$(date) $(grep VmSize /proc/$(pidof ns-slapd)/status | grep -o '[0-9]*')"; sleep 1 || break; done

Note that when the connection/search finishes, the memory is not freed. If you run the test again it will grow again from the new baseline. The original (more complex and slow) load test caused the ns-slapd process to be aborted due to out-of-memory after around a week.

Comment 1 mreynolds 2014-03-18 20:24:42 UTC
Fixed upstream.

Comment 3 Viktor Ashirov 2015-01-09 16:50:25 UTC
Created attachment 978294 [details]
valgrind output

Verification steps: https://bugzilla.redhat.com/show_bug.cgi?id=1077895#c5

$ rpm -qa | grep 389-ds
389-ds-base-debuginfo-1.3.3.1-11.el7.x86_64
389-ds-base-1.3.3.1-11.el7.x86_64
389-ds-base-libs-1.3.3.1-11.el7.x86_64

Database was initialized with Setup.ldif from https://bugzilla.redhat.com/attachment.cgi?id=878275

$ date; java -cp ldapbp-repackaged-4.1.jar:. LoadTest; date
Fri Jan  9 17:26:17 CET 2015
Fri Jan  9 17:41:49 CET 2015

$ sudo /usr/sbin/stop-dirsrv 
Stopping instance "rhel7ds"

$ grep proxyauth_get_dn /tmp/valgrind-20150109-172454-rhel7ds.out  | wc -l 
0

Marking as VERIFIED.

Comment 5 errata-xmlrpc 2015-03-05 09:34:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0416.html

