Description of problem:
The memory usage of dynflow_executor continues to grow over time. As a consequence, operations like content view publishes take longer and longer to complete. Restarting the Satellite improves the performance, which then continues to degrade over time.

Version-Release number of selected component (if applicable):
6.1.3, 6.1.4

How reproducible:
100%

Steps to Reproduce:
1. Use Satellite
2.
3.

Actual results:
dynflow_executor memory usage continues to grow over time.

Expected results:
Memory usage should reach a steady state, rising and falling as work is done.

Additional info:
I used the following script to collect process memory stats every 15 minutes:

#!/bin/bash
LOGFILE=/var/log/dynflow_executor-memory-usage.log
PID=`pidof dynflow_executor`
cat /proc/$PID/status | grep ^Vm >> $LOGFILE

Then I can review the VmPeak, VmData etc. sizes. The only time these figures go down is on a process restart.
Created redmine issue http://projects.theforeman.org/issues/12650 from this bug
Upstream bug component is Tasks Plugin
Upstream bug assigned to inecas
Doesn't look like the memory leak has been plugged. These results are from 6.2 Beta (public):

# grep ^VmData /var/log/dynflow_executor-memory-usage.log | uniq
VmData: 2371532 kB
VmData: 1082756 kB
VmData:  901564 kB
VmData: 1297548 kB
VmData: 1494156 kB
VmData: 1559692 kB
VmData: 1826900 kB
VmData: 1953612 kB

The only downward movement relates to restart(s) of Satellite. It's an improvement, as this system has been up for 2 weeks, so the last restart of Satellite was around that time.
I've isolated the problem to the listening on Candlepin events. There is an ::Actions::Candlepin::ListenOnCandlepinEvents action. I watched memory consumption while running different kinds of actions, and the only time the memory was rising was when I was doing:

I=0; while subscription-manager register --username admin --password changeme --org 'Summit2016' --environment Library --force; do I=$((I+1)); echo ================== $I; done

Then I commented out the line ::Actions::Candlepin::ListenOnCandlepinEvents.ensure_running(world) in /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.0.24/lib/katello/engine.rb and restarted foreman-tasks. After that, the subscription-manager register calls were not causing the leaks.

I suspect the qpid client library to be the cause; I've found https://issues.apache.org/jira/browse/QPID-5872, which seems relevant and is unresolved.
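For reference, a minimal illustration of the change I made for this test (only the relevant line is shown; the surrounding initializer code in engine.rb is omitted and may differ between Katello versions):

# /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.0.24/lib/katello/engine.rb
# Disable the Candlepin event listener so foreman-tasks never starts it:
# ::Actions::Candlepin::ListenOnCandlepinEvents.ensure_running(world)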
It seems this issue is relevant: https://issues.apache.org/jira/browse/QPID-3321. After adding `@session.sync` the problem seems to go away.
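To sketch the idea (this is a minimal illustration, not the actual Katello listener code): as I understand QPID-3321, an asynchronous qpid session keeps accumulating per-command completion state on the client until it synchronizes with the broker, so a long-running receive loop that only fetches and acknowledges messages can grow without bound. Periodically calling sync on the session lets the client discard that state. The sketch below assumes the qpid_messaging gem's Connection/Session/Receiver API; the broker URL, the 'event' address, and the exact method signatures are assumptions, not copied from Katello.

require 'qpid_messaging'

# Connect to the broker and subscribe to the Candlepin event queue.
# 'localhost:5672' and 'event' are placeholder values for this sketch.
connection = Qpid::Messaging::Connection.new(:url => 'localhost:5672')
connection.open
session  = connection.create_session
receiver = session.create_receiver('event')

loop do
  # Block until an event arrives.
  message = receiver.fetch(Qpid::Messaging::Duration::FOREVER)
  # ... handle the Candlepin event here ...
  session.acknowledge
  # Without a periodic sync, the asynchronous session keeps its completion
  # bookkeeping around (QPID-3321), which shows up as unbounded VmData growth
  # in dynflow_executor; syncing lets the client drop it.
  session.sync
end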
Proposed fix https://github.com/Katello/katello/pull/6065
Moving to POST since upstream bug http://projects.theforeman.org/issues/12650 has been closed
Tested with:
tfm-rubygem-katello-3.0.0.57-1.el7sat.noarch

How I tested:
I used the instructions in https://github.com/Katello/katello/pull/6065#issue-156747193

No memory increase was observed. Marking as VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1501