Since this issue was entered in Red Hat Bugzilla, the release flag has been
set to ? to ensure that it is properly evaluated for this release.
Please do not mix multiple bugs in one BZ; file a separate bug for the other issue.
I need more info regarding comment 4.
Please do not override the priority flag, which we use to order lists of bugs.
From the Katello upstream docs (http://www.katello.org/troubleshooting/index.html#dealing-with-paused-task):
Foreman tasks provides a locking mechanism to prevent concurrent operations on the same resource from colliding (such as synchronizing and deleting a repository at the same time).
When trying to run an operation on a resource that another task is already using, one can get the error "Required lock is already taken by other running tasks."
A locked resource is one where another task related to the same resource is in a running or paused state. This means the error is also triggered when there is a paused task with an unresolved failure (see dealing with paused tasks for more details).
In rare cases, it might be hard to get the task into the stopped state. It is possible to unlock the resource from the running/paused task: this switches the task into the stopped state, freeing the resource for other tasks. Caution: unlocking allows other tasks to run on potentially inconsistent data, which might lead to further errors. It is still possible to go to the Dynflow console and resume the task even after using the unlock feature. There are two unlock-related buttons: Unlock and Force Unlock. The only difference is that the latter is allowed even when the task is in the running state, and is therefore potentially even more dangerous than Unlock. See dealing with tasks running too long before attempting the Force Unlock option.
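The quoted behavior can be sketched with a minimal, self-contained example. This is an illustration of the locking semantics only; the names here (Task, LockTaken, unlock) are invented for the sketch and are not Foreman's or Dynflow's actual API.

```python
# Minimal sketch of per-resource task locking, loosely modeled on the
# behavior described above. Not Foreman's real implementation.

class LockTaken(Exception):
    """Raised when a resource is already locked by another task."""

ACTIVE = {"running", "paused"}   # paused tasks still hold their locks

class Task:
    _locks = {}                  # resource -> task currently holding its lock

    def __init__(self, action, resource):
        self.action = action
        self.resource = resource
        self.state = "planned"

    def start(self):
        holder = Task._locks.get(self.resource)
        if holder and holder.state in ACTIVE:
            raise LockTaken("Required lock is already taken by other running tasks")
        Task._locks[self.resource] = self
        self.state = "running"

    def finish(self):
        self.state = "stopped"
        Task._locks.pop(self.resource, None)

    def unlock(self, force=False):
        # "Unlock" only applies to a paused task; "Force Unlock" is
        # allowed even while running, hence potentially more dangerous.
        if self.state == "running" and not force:
            raise RuntimeError("task still running; use force=True")
        self.state = "stopped"
        Task._locks.pop(self.resource, None)

sync = Task("sync", "repo-1")
sync.start()
sync.state = "paused"            # the sync hit an error and paused

delete = Task("delete", "repo-1")
try:
    delete.start()               # collides with the paused sync's lock
except LockTaken as e:
    print(e)

sync.unlock()                    # frees the resource (data may be inconsistent)
delete.start()                   # now succeeds
```

Note how the paused task still holds its lock: that is why an unresolved failure blocks later operations until it is resolved or unlocked.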
Some relevant info at https://access.redhat.com/solutions/1284813
Please provide Foreman logs from when you execute Puppet (and when the node script runs); this will most likely show the actual error.