Bug 1394816 - [Regression] dynflow_executor memory usage continues to grow, causing performance degradation
Summary: [Regression] dynflow_executor memory usage continues to grow, causing performance degradation
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Subscription Management
Version: 6.2.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: Unspecified
Assignee: Ivan Necas
QA Contact: Katello QA List
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-11-14 13:47 UTC by Rick Dixon
Modified: 2021-12-10 14:47 UTC
CC: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-30 14:10:17 UTC
Target Upstream Version:


Links
Red Hat Knowledge Base (Legacy) 2766381 (last updated 2016-11-14 19:43:21 UTC)
Red Hat Knowledge Base (Solution) 2785811 (last updated 2016-11-30 13:57:27 UTC)

Comment 8 Ivan Necas 2016-11-15 10:20:36 UTC
Having the reports grouped by executable this way is not very useful for analysis, as it doesn't show which processes are growing the most. Please provide more granular, per-process information, similar to what's requested in https://bugzilla.redhat.com/show_bug.cgi?id=1368103#c6. From the current report, I can only see the set of ruby processes growing, but there are a lot of different processes in there.
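
For example, per-process resident-set sizes could be captured with something like the following (an illustrative command, not the exact format requested in the linked comment):

# illustrative: list the 20 processes with the largest resident memory
ps -eo pid,user,rss,vsz,args --sort=-rss | head -n 20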

Comment 10 Ivan Necas 2016-11-15 17:37:46 UTC
Thanks, it helps a bit, but I would like to ask for more details, if possible:

1. Restart foreman-tasks.
2. Start collecting the `ps aux` output periodically (let's say every minute, if possible; see the sketch after this list).
3. Perform the reproducing steps (I assume it's easily reproducible).
4. Once the memory has grown significantly, run `foreman-rake foreman_tasks:export_tasks tasks=all days=1`
to share what kinds of tasks were there.
5. I would also be interested in the qpid stats:
qpid-stat --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671 -e
qpid-stat --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671 -g
qpid-stat --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671 -c
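
For step 2, a minimal collection loop could look like this (the output directory and the one-minute interval are illustrative assumptions, not requirements):

# illustrative: snapshot `ps aux` once a minute into timestamped files
mkdir -p /var/tmp/ps-snapshots
while true; do
    ps aux > /var/tmp/ps-snapshots/ps-$(date +%Y%m%d-%H%M%S).txt
    sleep 60
done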

Comment 24 Ivan Necas 2016-11-28 16:57:00 UTC
The investigation so far has led to two issues that caused unnecessarily large tasks to be created on every repo sync:

https://bugzilla.redhat.com/show_bug.cgi?id=1391704
https://bugzilla.redhat.com/show_bug.cgi?id=1398438

Both are related to repository syncing, when the Library is synced to a capsule.

Comment 25 Ivan Necas 2016-11-28 17:13:47 UTC
Here is another bug that came from analysis of further data provided: https://bugzilla.redhat.com/show_bug.cgi?id=1399294

Comment 32 Ivan Necas 2016-11-30 14:10:17 UTC
I'm closing this issue: the original case that prompted opening it has been closed, and additional bugs have been filed identifying the specific usages that led to the memory growth. Closing here avoids the misunderstandings that arise when several independent issues are tracked in one BZ. For future bugs regarding similar memory issues, please see

https://access.redhat.com/site/solutions/2785811

for possible causes of the issues, as well as information on how to collect data for faster analysis. Provide that additional information when none of the existing bugs mentioned there corresponds to the system behavior and a new BZ needs to be filed.

