Bug 1283582 - dynflow_executor memory usage continues to grow, causing performance degradation
Summary: dynflow_executor memory usage continues to grow, causing performance degradation
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite 6
Classification: Red Hat
Component: Subscription Management
Version: 6.1.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: Unspecified
Assignee: Ivan Necas
QA Contact: Chris Duryee
URL: http://projects.theforeman.org/issues...
Whiteboard:
Keywords: Triaged
Depends On:
Blocks: 1317008
 
Reported: 2015-11-19 11:00 UTC by Stuart Auchterlonie
Modified: 2019-04-01 20:27 UTC
CC: 8 users

Clone Of:
Last Closed: 2016-07-27 11:04:22 UTC


Attachments:


External Trackers:
Foreman Issue Tracker 12650 - Last Updated: 2016-04-26 17:10 UTC
Red Hat Knowledge Base (Solution) 2133971 - Last Updated: 2016-01-21 12:09 UTC

Description Stuart Auchterlonie 2015-11-19 11:00:47 UTC
Description of problem:

The memory usage of dynflow_executor continues to grow over time.
As a consequence, operations like content view publishes take
longer and longer to complete.

Restarting the Satellite improves performance, which then
continues to degrade over time.


Version-Release number of selected component (if applicable):

6.1.3, 6.1.4

How reproducible:

100%

Steps to Reproduce:
1. Use Satellite

Actual results:

dynflow_executor memory usage continues to grow over time

Expected results:

Memory usage should reach a steady state, rising and
falling as work is done.

Additional info:

I used the following script to collect process memory stats
every 15 minutes:

#!/bin/bash

# Append the Vm* lines (VmPeak, VmSize, VmData, ...) from /proc/<pid>/status
# for the dynflow_executor process to a log file.
LOGFILE=/var/log/dynflow_executor-memory-usage.log
PID=$(pidof dynflow_executor)
grep '^Vm' "/proc/${PID}/status" >> "$LOGFILE"

Then I can review the VmPeak, VmData, etc. sizes over time.

The only time these figures go down is on a process restart.
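
For illustration, a minimal sketch of a cron entry that could run the script
above every 15 minutes; the path /usr/local/bin/dynflow_mem.sh is an assumed
name for the script, not taken from this report:

# /etc/cron.d/dynflow-memory (illustrative; the script path is assumed)
*/15 * * * * root /usr/local/bin/dynflow_mem.sh

The resulting log can then be summarized with
grep ^VmData /var/log/dynflow_executor-memory-usage.log | uniq
as done later in this report.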

Comment 2 Bryan Kearney 2015-12-01 18:22:04 UTC
Created redmine issue http://projects.theforeman.org/issues/12650 from this bug

Comment 3 Bryan Kearney 2015-12-01 19:01:12 UTC
Upstream bug component is Tasks Plugin

Comment 5 Bryan Kearney 2016-01-04 21:29:55 UTC
Upstream bug assigned to inecas@redhat.com

Comment 9 Stuart Auchterlonie 2016-05-05 10:42:28 UTC
Doesn't look like the memory leak has been plugged.
These results are from 6.2 Beta (public)

# grep ^VmData /var/log/dynflow_executor-memory-usage.log | uniq
VmData:	 2371532 kB
VmData:	 1082756 kB
VmData:	  901564 kB
VmData:	 1297548 kB
VmData:	 1494156 kB
VmData:	 1559692 kB
VmData:	 1826900 kB
VmData:	 1953612 kB

Only downward movement relates to restart(s) of satellite.

It's an improvement, as this system has been up for 2 weeks,
so the last restart of satellite was around that time.

Comment 10 Ivan Necas 2016-05-25 12:07:38 UTC
I've isolated the problem to the listening on Candlepin events.

There is an ::Actions::Candlepin::ListenOnCandlepinEvents action.

I've watched memory consumption while using different kinds of actions, and the only time the memory was rising was when I was running:

I=0; while subscription-manager register --username admin --password changeme --org 'Summit2016' --environment Library --force; do I=$((I+1)); echo ================== $I; done

Then I commented out the line ::Actions::Candlepin::ListenOnCandlepinEvents.ensure_running(world)
in /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.0.24/lib/katello/engine.rb and restarted foreman-tasks.

After that, the subscription-manager register calls were no longer causing the leak.

I suspect the qpid client library causes this; I've found https://issues.apache.org/jira/browse/QPID-5872, which seems relevant and unresolved.
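
For illustration, a minimal sketch that combines the registration loop above
with the memory sampling from the original report; the credentials, org, and
log file come from this bug, while sampling after every registration (rather
than every 15 minutes) is an assumption:

#!/bin/bash
# Repeatedly register the host while recording dynflow_executor's VmData,
# to make any growth visible per registration.
LOGFILE=/var/log/dynflow_executor-memory-usage.log
PID=$(pidof dynflow_executor)   # captured once; re-run if the executor restarts

I=0
while subscription-manager register --username admin --password changeme \
      --org 'Summit2016' --environment Library --force; do
    I=$((I+1))
    echo "================== $I"
    grep '^VmData' "/proc/${PID}/status" >> "$LOGFILE"
done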

Comment 11 Ivan Necas 2016-05-25 12:27:33 UTC
It seems this issue is relevant: https://issues.apache.org/jira/browse/QPID-3321. After adding `@session.sync`, the problem seems to have gone away.

Comment 12 Ivan Necas 2016-05-25 13:13:24 UTC
Proposed fix https://github.com/Katello/katello/pull/6065

Comment 13 Bryan Kearney 2016-05-25 14:10:14 UTC
Upstream bug assigned to inecas@redhat.com

Comment 14 Bryan Kearney 2016-05-25 14:10:18 UTC
Upstream bug assigned to inecas@redhat.com

Comment 15 Bryan Kearney 2016-05-27 14:11:42 UTC
Moving to POST since upstream bug http://projects.theforeman.org/issues/12650 has been closed

Comment 16 Chris Duryee 2016-07-07 15:03:47 UTC
tested with:

tfm-rubygem-katello-3.0.0.57-1.el7sat.noarch

how I tested:

I used the instructions in https://github.com/Katello/katello/pull/6065#issue-156747193

No memory increase was observed.

Marking as VERIFIED.

Comment 17 Bryan Kearney 2016-07-27 11:04:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1501

