Bug 1283582 - dynflow_executor memory usage continues to grow, causing performance degradation
Status: CLOSED ERRATA
Product: Red Hat Satellite 6
Classification: Red Hat
Component: Subscription Management
Version: 6.1.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: GA
Assigned To: Ivan Necas
QA Contact: Chris Duryee
URL: http://projects.theforeman.org/issues...
Whiteboard: Triaged
Depends On:
Blocks: 1317008
 
Reported: 2015-11-19 06:00 EST by Stuart Auchterlonie
Modified: 2017-08-11 05:20 EDT
CC: 7 users

Fixed In Version: rubygem-katello-3.0.0.38-1
Doc Type: Bug Fix
Last Closed: 2016-07-27 07:04:22 EDT
Type: Bug


External Trackers
Red Hat Knowledge Base (Solution) 2133971    Last Updated: 2016-01-21 07:09 EST
Foreman Issue Tracker 12650                  Last Updated: 2016-04-26 13:10 EDT

Description Stuart Auchterlonie 2015-11-19 06:00:47 EST
Description of problem:

The memory usage of dynflow_executor continues to grow over time.
As a consequence, operations like content view publishes
take longer and longer to complete.

Restarting the Satellite improves the performance, which then
continues to degrade over time.


Version-Release number of selected component (if applicable):

6.1.3, 6.1.4

How reproducible:

100%

Steps to Reproduce:
1. Use Satellite
2.
3.

Actual results:

dynflow_executor memory usage continues to grow over time

Expected results:

Memory usage should reach a steady state, rising and
falling as work is done.

Additional info:

I used the following script to collect process memory stats
every 15 minutes:

#!/bin/bash
# Append the Vm* lines from /proc/<pid>/status for dynflow_executor to a log file.

LOGFILE=/var/log/dynflow_executor-memory-usage.log
PID=$(pidof dynflow_executor)
grep '^Vm' "/proc/$PID/status" >> "$LOGFILE"

Then I can review the VmPeak, VmData, etc. sizes.

The only time these figures go down is on a process restart.
Comment 2 Bryan Kearney 2015-12-01 13:22:04 EST
Created redmine issue http://projects.theforeman.org/issues/12650 from this bug
Comment 3 Bryan Kearney 2015-12-01 14:01:12 EST
Upstream bug component is Tasks Plugin
Comment 5 Bryan Kearney 2016-01-04 16:29:55 EST
Upstream bug assigned to inecas@redhat.com
Comment 9 Stuart Auchterlonie 2016-05-05 06:42:28 EDT
Doesn't look like the memory leak has been plugged.
These results are from 6.2 Beta (public)

# grep ^VmData /var/log/dynflow_executor-memory-usage.log | uniq
VmData:	 2371532 kB
VmData:	 1082756 kB
VmData:	  901564 kB
VmData:	 1297548 kB
VmData:	 1494156 kB
VmData:	 1559692 kB
VmData:	 1826900 kB
VmData:	 1953612 kB

The only downward movement relates to restart(s) of Satellite.

It's an improvement, though: this system has been up for 2 weeks,
so the last restart of Satellite was around that time.
Comment 10 Ivan Necas 2016-05-25 08:07:38 EDT
I've isolated the problem to the listening on Candlepin events.

There is an ::Actions::Candlepin::ListenOnCandlepinEvents action.

I've watched memory consumption while using different kinds of actions, and the only time the memory was rising was when I was doing:

I=0; while subscription-manager register --username admin --password changeme --org 'Summit2016' --environment Library --force; do I=$((I+1)); echo ================== $I; done

Then I commented out the line ::Actions::Candlepin::ListenOnCandlepinEvents.ensure_running(world)
in /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.0.24/lib/katello/engine.rb and restarted foreman-tasks.

After that, the subscription-manager register calls were not causing the leaks.
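
Roughly, the temporary change looks like this; the enclosing on_init block is an assumption about the surrounding code, only the commented-out call is the exact line from engine.rb quoted above:

# lib/katello/engine.rb (sketch; surrounding block assumed)
::ForemanTasks.dynflow.config.on_init do |world|
  # Temporarily disabled to check whether the Candlepin event listener is the leak source:
  # ::Actions::Candlepin::ListenOnCandlepinEvents.ensure_running(world)
end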

I suspect the qpid client library is causing this; I've found https://issues.apache.org/jira/browse/QPID-5872, which seems relevant and unresolved.
Comment 11 Ivan Necas 2016-05-25 08:27:33 EDT
This issue seems relevant: https://issues.apache.org/jira/browse/QPID-3321. After adding `@session.sync`, the problems seem to have gone away.
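
For illustration, a minimal sketch of where such a sync call would sit in a qpid_messaging receive loop; the connection setup, address, and loop shape here are assumptions, not the actual Katello listener code (the proposed fix in comment 12 has the real patch):

require 'qpid_messaging'

# Sketch only: names, address, and loop shape are illustrative assumptions.
connection = Qpid::Messaging::Connection.new(:url => 'localhost:5672')
connection.open
@session = connection.create_session
receiver = @session.create_receiver('event')

loop do
  message = receiver.fetch    # block until the next Candlepin event arrives
  # ... hand the event off for processing ...
  @session.acknowledge        # acknowledge the fetched message
  @session.sync               # flush outstanding session commands so the client
                              # does not keep accumulating unsynced state
end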
Comment 12 Ivan Necas 2016-05-25 09:13:24 EDT
Proposed fix https://github.com/Katello/katello/pull/6065
Comment 13 Bryan Kearney 2016-05-25 10:10:14 EDT
Upstream bug assigned to inecas@redhat.com
Comment 14 Bryan Kearney 2016-05-25 10:10:18 EDT
Upstream bug assigned to inecas@redhat.com
Comment 15 Bryan Kearney 2016-05-27 10:11:42 EDT
Moving to POST since upstream bug http://projects.theforeman.org/issues/12650 has been closed
Comment 16 Chris Duryee 2016-07-07 11:03:47 EDT
tested with:

tfm-rubygem-katello-3.0.0.57-1.el7sat.noarch

how I tested:

I used the instructions in https://github.com/Katello/katello/pull/6065#issue-156747193

No memory increase was observed.

marking as VERIFIED.
Comment 17 Bryan Kearney 2016-07-27 07:04:22 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1501
