Bug 1340155 - 5.5 UI Worker Thread Leak while ordering a service
Summary: 5.5 UI Worker Thread Leak while ordering a service
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat CloudForms Management Engine
Classification: Red Hat
Component: Performance
Version: 5.5.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: GA
Target Release: 5.6.0
Assignee: dmetzger
QA Contact: Alex Krzos
URL:
Whiteboard:
Depends On:
Blocks: 1342277 1342279
 
Reported: 2016-05-26 14:37 UTC by Alex Krzos
Modified: 2019-08-06 20:04 UTC
CC: 5 users

Fixed In Version: 5.6.0.5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1342277 1342279
Environment:
Last Closed: 2016-06-29 16:06:05 UTC
Category: ---
Cloudforms Team: ---
Target Upstream Version:
Embargoed:


Attachments
Graph displaying Thread Count and Memory While ordering single item off service catalog (62.48 KB, image/png)
2016-05-26 14:58 UTC, Alex Krzos
Graph displaying thread count and memory after patch (58.97 KB, image/png)
2016-05-26 15:29 UTC, Alex Krzos


Links
Red Hat Product Errata RHBA-2016:1348 (normal, SHIPPED_LIVE): CFME 5.6.0 bug fixes and enhancement update, last updated 2016-06-29 18:50:04 UTC

Description Alex Krzos 2016-05-26 14:37:33 UTC
Description of problem:
Using a large database to replicate slow User Interface response times while ordering a service, I found that the UI Worker on 5.5 is leaking threads and growing in memory usage.

I ran through a simple scenario, ordering a service 5 times in a row; the memory usage of the UIWorker shot from 355 MiB RSS to 714 MiB, and during that time frame the number of threads for that worker grew from 21 to 111. Prior to that, I had run through the scenario to "prime" any caches, which is most likely why the thread count was already 21 rather than the "typical" 3 threads this worker has when initially started.

Version-Release number of selected component (if applicable):
Tested on 5.5.4.0

How reproducible:
With this large-scale database on 5.5.4.0 it is reproducible.

Steps to Reproduce:
1. Navigate to Services -> Catalog
2. Pick an item to order from the service catalog
3. Observe memory and thread count for the UIWorker (see the monitoring sketch after this list)
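
A minimal sketch of one way to do step 3 on a Linux appliance, assuming the MiqUiWorker process PID is already known (e.g. from ps). The script name watch_worker.rb and the one-second sampling interval are illustrative choices, not part of the product:

#!/usr/bin/env ruby
# watch_worker.rb (hypothetical): sample a process's thread count and RSS
# once per second by reading Linux /proc/<pid>/status.
# Usage: ruby watch_worker.rb <pid-of-MiqUiWorker>

pid = ARGV.fetch(0) { abort("usage: #{$PROGRAM_NAME} <pid>") }

loop do
  status  = File.read("/proc/#{pid}/status")
  threads = status[/^Threads:\s*(\d+)/, 1]
  rss_kb  = status[/^VmRSS:\s*(\d+)/, 1].to_i
  puts format("%s threads=%s rss=%.1f MiB", Time.now.utc, threads, rss_kb / 1024.0)
  sleep 1
end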

Actual results:
Thread count grows by 5 when pressing the Order button, and the worker's memory grew by ~24 MiB during that time frame as well.

Thread count grows again after clicking on the request once the Submit button has been pushed, and grows further on checking the checkbox to approve an order. These steps further grow the worker's memory by around 20 MiB RSS as well.

Approving an order also grows memory by ~1 MiB RSS.

Expected results:
Thread count should only grow temporarily if needed, and memory should be reclaimed.

Additional info:


Memory also appears to climb as text is added to the text dialogs for this service. This does not appear to involve any thread leakage, though.

Comment 3 Alex Krzos 2016-05-26 14:58:28 UTC
Created attachment 1162036 [details]
Graph displaying Thread Count and Memory While ordering single item off service catalog

Comment 4 Alex Krzos 2016-05-26 15:28:40 UTC
This is related to this bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1321934

I tried the patch (8399) and the threads are no longer accumulating.

Furthermore, memory usage levels off after 3-4 runs of ordering an item rather than continuing to grow.
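
For context, here is a minimal, purely illustrative Ruby sketch of the general pattern behind this class of leak (a per-request helper thread that is never reaped) and the usual fix. It is not the actual change in patch 8399; see the bug linked above for that. All names here are made up for the example:

# Stand-in for the real request work; only here to make the sketch runnable.
def process(request)
  sleep 0.1
end

# Leaky pattern: each call spawns a helper thread that never exits, so the
# worker's thread count climbs by a fixed amount per order and never recovers.
def handle_order_leaky(request)
  Thread.new { sleep } # never joined or killed -> one leaked thread per call
  process(request)
end

# Fixed pattern: bound the helper's lifetime and always reap it, so the
# thread count returns to baseline after each request.
def handle_order_fixed(request)
  watchdog = Thread.new { sleep 30 } # stand-in for real timeout/housekeeping work
  process(request)
ensure
  if watchdog
    watchdog.kill
    watchdog.join
  end
end

5.times { |i| handle_order_leaky(i) }
puts "after leaky handler: #{Thread.list.size} threads" # grew by one per call

5.times { |i| handle_order_fixed(i) }
puts "after fixed handler: #{Thread.list.size} threads" # unchanged by these calls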

Comment 5 Alex Krzos 2016-05-26 15:29:23 UTC
Created attachment 1162066 [details]
Graph displaying thread count and memory after patch

Comment 9 errata-xmlrpc 2016-06-29 16:06:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1348

