Description of problem:
With the memory recycler enabled, it happens more often that tasks get interrupted
during execution. For the sake of transparency of the recycling process, we should
try to handle this situation better, so that the user doesn't have to deal with
the error explicitly.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Set a memory limit in /etc/sysconfig/foreman-tasks (EXECUTOR_MEMORY_LIMIT=2gb; for easier reproduction, one can decrease
EXECUTOR_MEMORY_MONITOR_DELAY to trigger the restarting more often)
2. Restart foreman-tasks
3. Start using Satellite in a larger environment (continuous registration of hosts plus content view publishes, in combination with multiple capsules)
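For reference, the settings from step 1 live in /etc/sysconfig/foreman-tasks and look roughly like this (the values below are illustrative, not recommendations):

```shell
# /etc/sysconfig/foreman-tasks -- values are illustrative
# Restart the executor once its memory footprint exceeds this limit
EXECUTOR_MEMORY_LIMIT=2gb
# Delay related to the memory monitoring; per the reproduction steps above,
# lowering it makes the restarts (and thus this bug) easier to trigger
EXECUTOR_MEMORY_MONITOR_DELAY=600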
After some time, some tasks can end up in a paused/error state: `Abnormal termination (previous state: running)`
We should analyze these cases and find a way to resume such tasks before requiring
the user to interact with them manually.
We will try to find a more reliable reproducer as we develop the fix for this issue.
Created redmine issue http://projects.theforeman.org/issues/25528 from this bug
The original memory recycler was introduced in foreman-tasks-0.9.2, which first landed in Satellite 6.3. It was described in the tuning guide up to and including 6.6, but it was removed from the tuning guide in 6.7, so whether it was supported in 6.7 or even 6.8 is rather questionable. Now, with 6.8, the memory recycler is gone and therefore cannot cause tasks to get paused with abnormal termination errors.
If the workers grow too much and someone wants to reclaim the memory, the workers can be restarted relatively safely by hand with systemctl restart dynflow-sidekiq@$worker, where $worker is the worker instance id. The workers should be able to deal with being restarted this way without any impact on the jobs.
If we want to be super safe, it would be better to do
systemctl kill --signal TSTP dynflow-sidekiq@$worker
while ! systemctl status dynflow-sidekiq@$worker | grep -Po '\[0 of \d+ busy\]'; do sleep 5; done
systemctl restart dynflow-sidekiq@$worker
The first systemctl command tells the worker to stop accepting new jobs, the loop then waits until the worker is idle, and the last command restarts the service.
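The three commands above can be wrapped into a small helper script. This is a hedged sketch, not a shipped tool: the timeout (roughly five minutes), the `is_idle` and `drain_restart` names, and the `RUN_DRAIN` guard are all my own additions; the `[N of M busy]` status line is what the `grep` in the commands above matches.

```shell
#!/usr/bin/env bash
set -euo pipefail

# is_idle STATUS_TEXT -- succeeds if the Sidekiq status line reports zero busy threads,
# i.e. the text contains something like "[0 of 5 busy]"
is_idle() {
  grep -Pq '\[0 of \d+ busy\]' <<<"$1"
}

# drain_restart WORKER -- quiet the worker, wait until it is idle (up to ~5 minutes),
# then restart it
drain_restart() {
  local worker="$1"
  # TSTP tells Sidekiq to stop picking up new jobs ("quiet" mode)
  systemctl kill --signal TSTP "dynflow-sidekiq@${worker}"
  local i
  for i in $(seq 60); do
    if is_idle "$(systemctl status "dynflow-sidekiq@${worker}")"; then
      break
    fi
    sleep 5
  done
  systemctl restart "dynflow-sidekiq@${worker}"
}

# Guard: only touch systemd when explicitly asked to, e.g.
#   RUN_DRAIN=1 ./drain-restart.sh orchestrator
if [[ "${RUN_DRAIN:-}" == "1" ]]; then
  drain_restart "${1:?usage: RUN_DRAIN=1 $0 <worker-instance>}"
fi
```

Note that if the worker is stuck on a very long job, the loop gives up after the timeout and restarts anyway, falling back to the best-effort recovery described below.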
If the workers are killed hard (kill -9), they should be able to recover from that, but only on a best-effort basis. The harder you kill the workers, the harder the recovery becomes. It may just work, it may take time, or it may take time and then fail the job. Here be dragons.
I put together a writeup on how a memory recycler *could* be implemented using systemd from 6.8 onwards. Please read it carefully, as there are a few catches, mostly around how systemd kills services when the memory limit is reached and how the recovery is handled, but the tl;dr is:
- it can be done
- the services will get killed hard
- with dynflow <= 1.4.6, a patch needs to be applied, otherwise the recovery will not be successful
- even with the patch from the previous line, the recovery is best-effort only
- https://gist.github.com/adamruzicka/8abb3c65aa8ff1c84c1b81599a6d42b0
 - https://github.com/Dynflow/dynflow/pull/360
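For illustration, the systemd-based approach from the writeup boils down to a drop-in along these lines (hypothetical sketch; the file path and the MemoryMax value are illustrative, and the caveats above about hard kills apply):

```ini
# /etc/systemd/system/dynflow-sidekiq@.service.d/memory-limit.conf (hypothetical)
[Service]
# Kill the worker once it exceeds the cgroup memory limit. Note this is a
# hard kill (OOM kill), so recovery is best-effort only, as described above.
MemoryMax=2G
# Bring the worker back up after it has been killed
Restart=always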
@Ashish from your point of view, was this considered supported given it wasn't mentioned anywhere in the docs starting with 6.7?
Not sure what the right state for this should be; removing the triaged keyword to let the triage team decide what to do with this BZ.
Thanks for the confirmation. The memory recycler is gone; there will be a KCS article describing how to achieve similar behavior, even though it is not generally recommended. Closing this now. Please reopen if I missed something.