Description of problem:

Having frequent virt-who reports (each of which triggers an Actions::Katello::Host::Hypervisors task), the memory consumption of dynflow_executor grows over time. It is often flat for a while, but after, say, 100 such tasks there is a jump of (tens of) MBs in memory utilization to a new "stable level" of RSS.

Version-Release number of selected component (if applicable):
Sat 6.2.6
tfm-rubygem-dynflow-0.8.13.3-2.el7sat.noarch
tfm-rubygem-hammer_cli_foreman_tasks-0.0.10.3-1.el7sat.noarch
rubygem-smart_proxy_dynflow-0.1.3-1.el7sat.noarch
tfm-rubygem-smart_proxy_dynflow_core-0.1.3-1.el7sat.noarch
tfm-rubygem-foreman-tasks-0.7.14.11-1.el7sat.noarch

How reproducible:
100%

Steps to Reproduce:
1. Have a RHEVM virt backend (any other backend should work as well) that virt-who reports to Satellite. To artificially trigger frequent reports, run:

   while true; do date; touch /etc/virt-who.d/rhevm.conf; service virt-who restart; sleep 30; done

   (the touch and restart ensure that every iteration of the loop sends a new report and triggers a new foreman task)
2. Monitor the memory usage of dynflow_executor.

Actual results:
RSS grows slowly, e.g.:

Wed Jan 11 17:21:22 CET 2017
foreman   3478  0.6 11.2 3118784 1558720 ?  Sl  Jan10  12:01 dynflow_executor
Wed Jan 11 17:26:22 CET 2017
foreman   3478  0.6 11.2 3118784 1559248 ?  Sl  Jan10  12:02 dynflow_executor
Wed Jan 11 17:31:22 CET 2017
foreman   3478  0.6 11.2 3118784 1559272 ?  Sl  Jan10  12:04 dynflow_executor
Wed Jan 11 17:36:22 CET 2017
foreman   3478  0.6 11.2 3118784 1559276 ?  Sl  Jan10  12:06 dynflow_executor
Wed Jan 11 17:41:22 CET 2017
foreman   3478  0.6 11.2 3118784 1559176 ?  Sl  Jan10  12:08 dynflow_executor
Wed Jan 11 17:46:22 CET 2017
foreman   3478  0.6 11.2 3118784 1559964 ?  Sl  Jan10  12:10 dynflow_executor
Wed Jan 11 17:51:22 CET 2017
foreman   3478  0.6 11.2 3118784 1559972 ?  Sl  Jan10  12:12 dynflow_executor
Wed Jan 11 17:56:22 CET 2017
foreman   3478  0.6 11.2 3118784 1561028 ?  Sl  Jan10  12:14 dynflow_executor
Wed Jan 11 18:01:22 CET 2017
foreman   3478  0.6 11.2 3118784 1561288 ?  Sl  Jan10  12:16 dynflow_executor
Wed Jan 11 18:06:23 CET 2017
foreman   3478  0.6 11.2 3118784 1566036 ?  Sl  Jan10  12:18 dynflow_executor
Wed Jan 11 18:11:23 CET 2017
foreman   3478  0.6 11.3 3118784 1569564 ?  Sl  Jan10  12:20 dynflow_executor
Wed Jan 11 18:16:23 CET 2017
foreman   3478  0.6 11.3 3118784 1569564 ?  Sl  Jan10  12:23 dynflow_executor
Wed Jan 11 18:21:23 CET 2017
foreman   3478  0.6 11.3 3118784 1569828 ?  Sl  Jan10  12:25 dynflow_executor
Wed Jan 11 18:26:23 CET 2017
foreman   3478  0.6 11.3 3118784 1569856 ?  Sl  Jan10  12:27 dynflow_executor
Wed Jan 11 18:31:23 CET 2017
foreman   3478  0.6 11.3 3118784 1576452 ?  Sl  Jan10  12:29 dynflow_executor

Expected results:
No such memory growth.

Additional info:
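The monitoring in step 2 can be scripted with a small sampler like the one below. This is a sketch; the use of `ps -o rss=` and the function name `sample_rss` are my own choices, not taken from the report.

```shell
# Sketch of an RSS sampler for step 2 of the reproducer.
# Prints a timestamp plus the resident set size (in KB) of a process;
# run it in a loop to track growth over time.
sample_rss() {
  date
  # ps -o rss= prints only the RSS column, with no header line
  ps -o rss= -p "$1"
}

# Example: sample this shell's own RSS once
sample_rss $$
```

On the reproducer host one would run something like `while true; do sample_rss "$(pgrep -f dynflow_executor)"; sleep 300; done`, matching the 5-minute spacing of the samples above.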
Were the GC variables set when running the reproducer? Could you employ https://bugzilla.redhat.com/show_bug.cgi?id=1412307#c33 to collect more data about garbage collection? Can we rule out a connection to https://bugzilla.redhat.com/show_bug.cgi?id=1440235 ?

In general, this fits the standard memory-growth pattern of a long-running Ruby process, which can request more memory whenever it considers it necessary; that does not necessarily indicate a memory leak (due to the way garbage collection works in Ruby). Enforcing a top limit on memory usage is being addressed in https://bugzilla.redhat.com/show_bug.cgi?id=1434069 and is planned for Sat 6.3.
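Regarding the first question, whether the GC variables were set can be checked directly against the running executor's environment. A minimal sketch (RUBY_GC_* is the standard prefix of MRI's GC tuning variables; the specific values recommended in bug 1412307 comment 33 are not reproduced here, and `check_gc_env` is a hypothetical helper name):

```shell
# Check whether any Ruby GC tuning variables are set in the environment
# of a running process (Linux-specific: reads /proc/PID/environ).
check_gc_env() {
  # environ is NUL-separated; print any RUBY_GC_* entries
  tr '\0' '\n' < "/proc/$1/environ" | grep '^RUBY_GC_' \
    || echo "no RUBY_GC_ variables set for PID $1"
}

# Example: inspect this shell's own environment
check_gc_env $$
```

On the Satellite host this would be run as `check_gc_env "$(pgrep -f dynflow_executor)"` to confirm the tuning actually reached the executor process.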
I can't reproduce the leak after adding the GC tuning. Closing as a duplicate of 1412307.

*** This bug has been marked as a duplicate of bug 1412307 ***