Bug 1394816

Summary: [Regression] dynflow_executor memory usage continues to grow, causing performance degradation
Product: Red Hat Satellite
Component: Subscription Management
Version: 6.2.2
Hardware: x86_64
OS: Linux
Status: CLOSED INSUFFICIENT_DATA
Severity: high
Priority: high
Reporter: Rick Dixon <rdixon>
Assignee: Ivan Necas <inecas>
QA Contact: Katello QA List <katello-qa-list>
CC: anerurka, aperotti, bbuckingham, erinn.looneytriggs, inecas, jcallaha, kdixon, mmccune, mmello, pdwyer, rdixon, xdmoon
Target Milestone: Unspecified
Target Release: Unused
Keywords: Performance, PrioBumpGSS, Triaged
Type: Bug
Last Closed: 2016-11-30 14:10:17 UTC

Comment 8 Ivan Necas 2016-11-15 10:20:36 UTC
Having the reports grouped by executable this way is not very useful for analysis, as it doesn't actually show which processes are growing the most. Please provide more granular information, similar to what's requested here: https://bugzilla.redhat.com/show_bug.cgi?id=1368103#c6. From the current report, I can only see the ruby set of processes growing, but there are a lot of different processes in there.
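
For example, a single snapshot showing per-process resident set size (a sketch only; the column selection and the ruby filter are suggestions, not something requested in the original report) could be taken with:

# list the largest processes by RSS (in KiB), with PID, uptime and full command line
ps -eo pid,rss,vsz,etime,args --sort=-rss | head -n 20
# or narrowed down to the ruby processes only (the [r] avoids matching grep itself)
ps -eo pid,rss,etime,args --sort=-rss | grep '[r]uby'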

Comment 10 Ivan Necas 2016-11-15 17:37:46 UTC
Thanks, it helps a bit, but I would like to ask for more details if possible:

1. Restart foreman-tasks.
2. Start collecting the `ps aux` output periodically (let's say every minute, if possible); see the sketch at the end of this comment.
3. Perform the reproducing steps (I assume it's easily reproducible).
4. Once the memory has grown significantly, run `foreman-rake foreman_tasks:export_tasks tasks=all days=1` to share what kind of tasks were there.
5. I would also be interested in the qpid stats:
qpid-stat --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671 -e
qpid-stat --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671 -g
qpid-stat --ssl-certificate=/etc/pki/katello/qpid_client_striped.crt -b amqps://localhost:5671 -c
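
A minimal collection loop for step 2 might look like this (a sketch only; the /var/tmp/memwatch output directory is an arbitrary choice made here for illustration):

# snapshot the full process list once a minute, one timestamped file per
# sample, so the growth of individual PIDs can be tracked over time
mkdir -p /var/tmp/memwatch
while true; do
    ps aux --sort=-rss > "/var/tmp/memwatch/ps-$(date +%Y%m%d-%H%M%S).log"
    sleep 60
done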

Comment 24 Ivan Necas 2016-11-28 16:57:00 UTC
The investigation so far has led to two issues that caused unnecessarily large tasks to be created at every repo sync:

https://bugzilla.redhat.com/show_bug.cgi?id=1391704
https://bugzilla.redhat.com/show_bug.cgi?id=1398438

Both are related to repo syncing, specifically when the library is synced to a capsule.

Comment 25 Ivan Necas 2016-11-28 17:13:47 UTC
Here is another bug that came from analysis of the further data provided: https://bugzilla.redhat.com/show_bug.cgi?id=1399294

Comment 32 Ivan Necas 2016-11-30 14:10:17 UTC
I'm closing this issue: the original support case that prompted it has been closed, and additional bugs have been filed identifying the specific usages that led to the memory growth. The reason for closing is to avoid the misunderstandings that arise when dealing with several independent issues in one BZ. For future bugs regarding similar memory issues, please see

https://access.redhat.com/site/solutions/2785811

for possible causes of these issues, as well as information on how to collect data for faster analysis; please provide that additional information when none of the existing bugs mentioned there corresponds to the system behavior and a new BZ needs to be filed.