Red Hat Bugzilla – Bug 1283582
dynflow_executor memory usage continues to grow, causing performance degradation
Last modified: 2018-01-23 22:42:25 EST
Description of problem:
The memory usage of dynflow_executor continues to grow over time.
As a consequence, operations like content view publishes
take longer and longer to complete.
Restarting the Satellite improves the performance, which then
continues to degrade over time.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Use Satellite
dynflow_executor memory usage continues to grow over time
Memory usage should reach a steady state, rising and
falling as work is done.
I used the following script to collect process mem stats
every 15 minutes
cat /proc/$PID/status | grep ^Vm >> $LOGFILE
Then I can review the VmPeak, VmData, etc. sizes.
The only time these figures go down is on a process restart.
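The collection step above can be sketched as a small Ruby helper (a sketch only: the pid and log path are illustrative, not the reporter's actual values, and it assumes a Linux /proc filesystem):

```ruby
# Append one sample of the Vm* lines from /proc/<pid>/status to a log,
# prefixed with a timestamp so samples can be correlated later.
require "time"

def sample_vm_stats(pid, logfile)
  vm_lines = File.readlines("/proc/#{pid}/status").grep(/\AVm/)
  File.open(logfile, "a") do |f|
    f.puts Time.now.iso8601
    f.puts vm_lines
  end
end

# e.g. invoked from cron every 15 minutes (hypothetical path):
# sample_vm_stats(executor_pid, "/var/log/dynflow_executor-memory-usage.log")
```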
Created redmine issue http://projects.theforeman.org/issues/12650 from this bug
Upstream bug component is Tasks Plugin
Upstream bug assigned to email@example.com
Doesn't look like the memory leak has been plugged.
These results are from 6.2 Beta (public)
# grep ^VmData /var/log/dynflow_executor-memory-usage.log | uniq
VmData: 2371532 kB
VmData: 1082756 kB
VmData: 901564 kB
VmData: 1297548 kB
VmData: 1494156 kB
VmData: 1559692 kB
VmData: 1826900 kB
VmData: 1953612 kB
The only downward movement relates to restart(s) of Satellite.
It's an improvement, as this system has been up for 2 weeks,
so the last restart of Satellite was around that time.
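To quantify the growth rate from a log like the one above, a short filter can print the kB deltas between consecutive VmData samples (a sketch, assuming the log format shown; the helper name is mine):

```ruby
# Turn raw VmData samples into consecutive kB deltas, to show how much
# the data segment grew (or shrank, on restart) between samples.
def vmdata_deltas(lines)
  values = lines.grep(/\AVmData:/).map { |l| l[/\d+/].to_i }
  values.each_cons(2).map { |a, b| b - a }
end

log = <<~LOG
  VmData:  2371532 kB
  VmData:  1082756 kB
  VmData:   901564 kB
LOG
p vmdata_deltas(log.lines)  # => [-1288776, -181192]
```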
I've isolated the problem to the listening on Candlepin events.
There is an ::Actions::Candlepin::ListenOnCandlepinEvents action.
I've watched memory consumption while using different kinds of actions, and the only time the memory was rising was when I was doing:
I=0; while subscription-manager register --username admin --password changeme --org 'Summit2016' --environment Library --force; do I=$((I+1)); echo ================== $I; done
Then I commented out the line ::Actions::Candlepin::ListenOnCandlepinEvents.ensure_running(world)
in /opt/theforeman/tfm/root/usr/share/gems/gems/katello-18.104.22.168/lib/katello/engine.rb and restarted foreman-tasks.
After that, the subscription-manager register calls were no longer causing the leak.
I suspect the qpid client library is causing this; I've found https://issues.apache.org/jira/browse/QPID-5872, which seems relevant and unresolved.
This issue also seems relevant: https://issues.apache.org/jira/browse/QPID-3321. After adding `@session.sync`, the problem went away.
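The shape of that fix can be sketched with a stubbed session object. The real listener uses the qpid messaging client, whose Session#sync waits for the broker to complete outstanding commands so client-side state can be released; everything below is a hypothetical stand-in, not Katello's actual listener code:

```ruby
# Hypothetical stand-in for a qpid session: each received message leaves
# some client-side bookkeeping behind, and sync releases it. This mirrors
# the observed symptom, not the real library internals.
class FakeSession
  attr_reader :pending

  def initialize
    @pending = []
  end

  def fetch_event(event)
    @pending << event   # per-message state accumulates until synced
  end

  def sync
    @pending.clear      # analogous to qpid's Session#sync
  end
end

session = FakeSession.new

# Listener loop in the spirit of ListenOnCandlepinEvents: handle each
# event, then sync so per-message state does not accumulate forever.
%w[entitlement.created consumer.updated].each do |event|
  session.fetch_event(event)
  session.sync
end

puts session.pending.size  # prints 0
```

Without the sync call after each message, the pending list in this model (and, by the measurements above, the executor's data segment) only ever grows.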
Proposed fix https://github.com/Katello/katello/pull/6065
Moving to POST since upstream bug http://projects.theforeman.org/issues/12650 has been closed
How I tested:
I used the instructions in https://github.com/Katello/katello/pull/6065#issue-156747193
No memory increase was observed.
Marking as VERIFIED.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.