Bug 1412308 - small memory leak in dynflow_executor triggered by virt-who run
Summary: small memory leak in dynflow_executor triggered by virt-who run
Keywords:
Status: CLOSED DUPLICATE of bug 1412307
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Tasks Plugin
Version: 6.2.6
Hardware: x86_64
OS: Linux
Severity: high
Priority: high
Target Milestone: Unspecified
Assignee: satellite6-bugs
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1353215
 
Reported: 2017-01-11 17:33 UTC by Pavel Moravec
Modified: 2020-08-13 08:48 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-15 18:33:45 UTC
Target Upstream Version:
Embargoed:



Description Pavel Moravec 2017-01-11 17:33:36 UTC
Description of problem:
With frequent virt-who reports (each of which triggers an Actions::Katello::Host::Hypervisors task), dynflow_executor memory consumption grows slowly over time. RSS is often flat for a while, but after roughly 100 such tasks it jumps by tens of MB to a new "stable level".


Version-Release number of selected component (if applicable):
Sat 6.2.6
tfm-rubygem-dynflow-0.8.13.3-2.el7sat.noarch
tfm-rubygem-hammer_cli_foreman_tasks-0.0.10.3-1.el7sat.noarch
rubygem-smart_proxy_dynflow-0.1.3-1.el7sat.noarch
tfm-rubygem-smart_proxy_dynflow_core-0.1.3-1.el7sat.noarch
tfm-rubygem-foreman-tasks-0.7.14.11-1.el7sat.noarch

How reproducible:
100%


Steps to Reproduce:
1. Have a RHEVM virt backend (any other backend should work as well) that virt-who reports to Satellite. To artificially trigger frequent reports, run:

while true; do date; touch /etc/virt-who.d/rhevm.conf; service virt-who restart; sleep 30; done

(the touch and restart ensure a new report is sent and a new foreman task is triggered on every iteration of that bash loop)

2. Monitor memory usage of dynflow_executor
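
A minimal monitoring loop along these lines (a sketch only; the exact command used is not recorded in this report) produces output like the one shown under Actual results -- RSS is the 6th column of ps aux, in kB:

while true; do
    date
    # the [d] trick keeps the grep process itself out of the output
    ps aux | grep '[d]ynflow_executor$'
    sleep 300
done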


Actual results:
RSS grows a bit, like:

Wed Jan 11 17:21:22 CET 2017
foreman   3478  0.6 11.2 3118784 1558720 ?     Sl   Jan10  12:01 dynflow_executor
Wed Jan 11 17:26:22 CET 2017
foreman   3478  0.6 11.2 3118784 1559248 ?     Sl   Jan10  12:02 dynflow_executor
Wed Jan 11 17:31:22 CET 2017
foreman   3478  0.6 11.2 3118784 1559272 ?     Sl   Jan10  12:04 dynflow_executor
Wed Jan 11 17:36:22 CET 2017
foreman   3478  0.6 11.2 3118784 1559276 ?     Sl   Jan10  12:06 dynflow_executor
Wed Jan 11 17:41:22 CET 2017
foreman   3478  0.6 11.2 3118784 1559176 ?     Sl   Jan10  12:08 dynflow_executor
Wed Jan 11 17:46:22 CET 2017
foreman   3478  0.6 11.2 3118784 1559964 ?     Sl   Jan10  12:10 dynflow_executor
Wed Jan 11 17:51:22 CET 2017
foreman   3478  0.6 11.2 3118784 1559972 ?     Sl   Jan10  12:12 dynflow_executor
Wed Jan 11 17:56:22 CET 2017
foreman   3478  0.6 11.2 3118784 1561028 ?     Sl   Jan10  12:14 dynflow_executor
Wed Jan 11 18:01:22 CET 2017
foreman   3478  0.6 11.2 3118784 1561288 ?     Sl   Jan10  12:16 dynflow_executor
Wed Jan 11 18:06:23 CET 2017
foreman   3478  0.6 11.2 3118784 1566036 ?     Sl   Jan10  12:18 dynflow_executor
Wed Jan 11 18:11:23 CET 2017
foreman   3478  0.6 11.3 3118784 1569564 ?     Sl   Jan10  12:20 dynflow_executor
Wed Jan 11 18:16:23 CET 2017
foreman   3478  0.6 11.3 3118784 1569564 ?     Sl   Jan10  12:23 dynflow_executor
Wed Jan 11 18:21:23 CET 2017
foreman   3478  0.6 11.3 3118784 1569828 ?     Sl   Jan10  12:25 dynflow_executor
Wed Jan 11 18:26:23 CET 2017
foreman   3478  0.6 11.3 3118784 1569856 ?     Sl   Jan10  12:27 dynflow_executor
Wed Jan 11 18:31:23 CET 2017
foreman   3478  0.6 11.3 3118784 1576452 ?     Sl   Jan10  12:29 dynflow_executor


Expected results:
no such memory growth


Additional info:

Comment 3 Ivan Necas 2017-06-13 20:12:44 UTC
Were the GC variables set when running the reproducer?

Could you employ https://bugzilla.redhat.com/show_bug.cgi?id=1412307#c33 to collect more data about garbage collection?

Can we rule out a connection to https://bugzilla.redhat.com/show_bug.cgi?id=1440235?

In general, this fits the standard memory-growth pattern of a long-running Ruby process, which can request more memory whenever it considers it necessary; because of how garbage collection works in Ruby, that does not necessarily indicate a memory leak.

A way to enforce an upper memory limit is being addressed in https://bugzilla.redhat.com/show_bug.cgi?id=1434069 and is planned for Sat 6.3.

Comment 4 Pavel Moravec 2017-06-15 18:33:45 UTC
I can't reproduce the leak after adding the GC tuning. Closing this as a duplicate of 1412307.
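
(For reference, the actual variables and values used are in bug 1412307 comment 33. As an illustration only, MRI Ruby 2.1+ reads GC tuning knobs from environment variables such as:

export RUBY_GC_HEAP_GROWTH_FACTOR=1.1        # grow the heap more conservatively
export RUBY_GC_MALLOC_LIMIT_MAX=16777216     # cap growth of the malloc GC trigger
export RUBY_GC_OLDMALLOC_LIMIT_MAX=16777216  # cap growth of the oldmalloc GC trigger

The values above are placeholders, not the ones applied here.)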

*** This bug has been marked as a duplicate of bug 1412307 ***

