Bug 436830
| Summary: | Memory leak in ns-slapd's Class Of Service | | |
|---|---|---|---|
| Product: | [Retired] 389 | Reporter: | Tamas Bagyal <keef> |
| Component: | Directory Server | Assignee: | Noriko Hosoi <nhosoi> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Chandrasekar Kannan <ckannan> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 1.1.0 | CC: | benl, jgalipea, nhosoi, rmeggins |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | i386 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | 8.1 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2009-04-29 23:02:50 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 249650, 493682 | | |
| Attachments: | | | |
Description
Tamas Bagyal 2008-03-10 18:24:28 UTC
Created attachment 297486 [details]
valgrind's output
I've been trying various ways to reproduce this problem, but I have not been able to get a stack trace like this one:

==28389== 5,927,334 (4,711,269 direct, 1,216,065 indirect) bytes in 392,488 blocks are definitely lost in loss record 54 of 54

1) Can you use valgrind --num-callers=40 to print the full stack traces?
2) Can you post your Class of Service configuration?

1) The new valgrind output is in the attachment.
2) See the attachments. I hope these are what you were asking for.

Created attachment 300399 [details]
valgrind output, run with --num-callers=40
Created attachment 300400 [details]
cos definition entries
Created attachment 300401 [details]
cos template entries
This is excellent. Thank you very much!

I did some testing and found two things:
1) This bug is present in 1.0.4 too (built on Debian etch by the ds-build script, patched with https://www.redhat.com/archives/fedora-directory-commits/2007-September/msg00064.html).
2) When I remove all cos definition/template entries and turn off the per-subtree password policy, memory usage stays at a normal level.

Created attachment 328495 [details]
test ldif files to reproduce the memory leak
How to reproduce the problem:
ldapmodify -p <port> -D 'cn=Directory Manager' -w <pw> -a -f cos-template.ldif
ldapmodify -p <port> -D 'cn=Directory Manager' -w <pw> -a -f cos-definition.ldif
ldapmodify -p <port> -D 'cn=Directory Manager' -w <pw> -a -f cos-data.ldif
ldapmodify -p <port> -D 'cn=Directory Manager' -w <pw> -a -f cos-data0.ldif
ldapmodify -p <port> -D 'cn=Directory Manager' -w <pw> -a -f cos-data1.ldif
ldapmodify -p <port> -D 'cn=Directory Manager' -w <pw> -a -f cos-data2.ldif
ldapmodify -p <port> -D 'cn=Directory Manager' -w <pw> -a -f cos-data3.ldif
Created attachment 328496 [details]
output (snippet) from valgrind when the previous ldif files are added by ldapmodify
Created attachment 328497 [details]
cvs diff ldapserver/ldap/servers/plugins/cos/cos_cache.c
Fix Description: When not all of the values needed for the template cache are available, the memory already allocated must be discarded. One of those values, pCosPriority, was never released.
"cos_cache.c"
1126 static int cos_dn_tmpl_entries_cb (Slapi_Entry* e, void *callback_data) {
[...]
1247 else
1248 {
1249 /*
1250 this template is brain dead - bail
1251 if we have a dn use it to report, if not then *really* bad
1252 things are going on
1253 - of course it might not be a template, so lets
1254 not tell the world unless the world wants to know,
1255 we'll make it a plugin level message
1256 */
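To make the fix concrete, below is a minimal, self-contained model of this bail-out path. The names are hypothetical stand-ins and plain malloc/free is used instead of the Slapi allocator routines the real cos_cache.c uses; the point is only the ownership rule: every value allocated while parsing a candidate template entry must be discarded when the entry turns out to be unusable, and pCosPriority was the one value missing from that cleanup.

```c
/* Minimal model of the leak pattern; not the real Directory Server code.
 * In cos_cache.c the values are read from a Slapi_Entry and released with
 * the Slapi allocator, but the ownership logic is the same. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* pPriority stands in for the real code's pCosPriority. */
static int cache_template(const char *dn, const char *attr, const char *prio)
{
    char *pDn       = dn   ? strdup(dn)   : NULL;
    char *pCosAttr  = attr ? strdup(attr) : NULL;
    char *pPriority = prio ? strdup(prio) : NULL;
    int usable = (pDn != NULL && pCosAttr != NULL);

    if (usable) {
        /* The real callback hands the parsed values to the template cache. */
        printf("caching template %s\n", pDn);
    }
    /* else: "this template is brain dead - bail" */

    /* This model frees on both paths; in the real code this cleanup runs on
     * the bail-out path, and pPriority (pCosPriority) was missing from it,
     * so each rejected template entry leaked its priority value. */
    free(pDn);
    free(pCosAttr);
    free(pPriority);   /* the release the fix adds */
    return usable ? 0 : -1;
}

int main(void)
{
    cache_template("cn=template,o=test", "postalCode", "1"); /* accepted */
    cache_template(NULL, NULL, "1");                         /* rejected */
    return 0;
}
```

Run under valgrind, the rejected call in this model reproduces a small "definitely lost" record of the same shape as the attached output; with the extra free in place the report is clean.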
Created attachment 328507 [details]
cvs commit message
Reviewed by Rich (Thank you!!)
Checked into CVS HEAD.
Created attachment 334974 [details]
valgrind output
Please see the attached valgrind output from running the test ldif files - it looks a lot better than the previously attached output. Noriko - will this suffice?

(In reply to comment #15)
> Please see the attached valgrind output from running the test ldif files - it looks a lot better than the previously attached output. Noriko - will this suffice?

Jenny, I wonder how you ran valgrind with the server. Did you start the server under valgrind, run something like the commands in comment #10, and then shut the server down? I don't see many of the leak reports we usually see in your valgrind output...

I learned how Jenny ran the server with valgrind in the email conversation. That's the right way to do it. I scanned her valgrind output and did not see any cos_cache related leak messages in it, so I'm going to mark this verified.

An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHEA-2009-0455.html