Description of problem:
The requests that the responders send to the Data Providers are allocated on the global context to ensure that even if the client disconnects, there is someone to read the reply. However, we forgot to free the structure that represents the request, which meant that the sssd_nss process grew over time.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. set a very low cache timeout
2. run account requests in parallel
3. observe the sssd_nss process growing
Actual results:
sssd_nss process is growing
Expected results:
the consumption should stay pretty much the same
This is not easily reproducible, but apart from running many requests and watching the consumption grow, a quicker, though more involved, way might be to check with gdb that no tevent_req structures are allocated on top of the rctx after a request finishes. Please let me know which approach is preferable for QE.
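One way to do the gdb check is with talloc's reporting API. `talloc_report_full()` is a real talloc function, but the exact breakpoint and the way the rctx pointer is reached below are illustrative assumptions that depend on the build:

```
# Attach to the running responder.
gdb -p "$(pidof sssd_nss)"

# Inside gdb: break at a point where the responder context (rctx) is in
# scope after a request has finished, then dump the talloc hierarchy
# under it.  No tevent_req allocations should remain parented to rctx.
(gdb) call talloc_report_full(rctx, stderr)
```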
STI IPA automated testing can reproduce this issue and verify the fix.
Did you see this on the last execution? Can this be set to VERIFIED?
This is not a defect I wrote up, but one I was concerned about in the past. I just happened to be collecting this process info for sssd_nss over the last 8 days, with a build containing Jakub's fixes, during my final system test run.
I applied multithreaded ssh/sudo runtime load against the IPA clients over an 8-day period, subjecting each IPA client to 50k attempts per day while cycling through 10k IPA users. Over that period, the ps aux parameters for the sssd_nss process (%CPU, %MEM, VSZ, RSS) were flat for both IPA clients. Samples were taken every 10 minutes and stored in my db for historical analysis.
Build Installed 11/16/2012
Red Hat Enterprise Linux Server release 6.4 Beta (Santiago)
IPA server version 3.0.1. API version 2.46
2 IPA servers
2 IPA clients
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.