Bug 715226 - cancelling a job while it is being scheduled leaves an orphaned watchdog
Summary: cancelling a job while it is being scheduled leaves an orphaned watchdog
Keywords:
Status: CLOSED DUPLICATE of bug 807237
Alias: None
Product: Beaker
Classification: Retired
Component: inventory
Version: 0.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Dan Callaghan
QA Contact:
URL:
Whiteboard: Scheduler
Duplicates: 663698 711686 740397 746955 749743 921898
Depends On:
Blocks: 593663
 
Reported: 2011-06-22 09:46 UTC by Ales Zelinka
Modified: 2014-08-12 04:34 UTC
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-03-20 05:31:40 UTC
Embargoed:



Comment 1 Dan Callaghan 2011-06-22 23:58:15 UTC
The fact that this recipe has been deleted just makes it slightly more inconvenient to see what's going on; it's not the cause of the problem.

I think this is the same problem as bug 663698 and bug 645873, where a recipe is cancelled but the system and watchdog are not removed.

Looking in the database, we noticed that the recipe was cancelled, and had NULL start_time and finish_time, which means that it never reached the 'Running' state. However a system had been assigned and a watchdog created, which means it had been through the queued_recipes routine in beakerd. So the recipe was probably cancelled in the 'Scheduled' state, before the scheduled_recipes routine had a chance to actually start it.

What I still can't figure out is why the system was not released when the recipe was cancelled. That should be happening in RecipeSet._update_status.

One possible way to mitigate these problems would be to let the external watchdog clean them up eventually. We noticed that the watchdog row for this recipe had NULL kill_time, because the queued_recipes routine in beakerd does not set the kill_time when it initially creates the watchdog. Bill, is there any reason we can't set a kill_time of (say) 2 hours in the future when we first create the watchdog?
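
To make the idea concrete, here is a rough sketch of what I have in mind (the function and argument names are illustrative, not the actual beakerd/bkr.server.model code):

import datetime

# Illustrative sketch only: give a newly created watchdog a provisional
# kill_time so the external watchdog can eventually reap recipes that are
# cancelled (or otherwise stuck) before they ever reach Running.
PROVISIONAL_KILL_HOURS = 2

def create_watchdog(session, watchdog_cls, recipe, system):
    # watchdog_cls stands in for Beaker's Watchdog model; the real
    # constructor arguments may differ.
    watchdog = watchdog_cls(recipe=recipe, system=system)
    watchdog.kill_time = (datetime.datetime.utcnow() +
                          datetime.timedelta(hours=PROVISIONAL_KILL_HOURS))
    session.add(watchdog)
    return watchdog

The recipe's real kill_time would still be set as usual when it actually starts running; the provisional value just guarantees the watchdog row never sits around with NULL forever.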

Comment 2 Dan Callaghan 2011-06-23 05:28:40 UTC
I've returned the two systems in question.

The workaround is to use the bkr job-cancel command to cancel the already-cancelled recipe sets. Unfortunately, once the recipe has been deleted the only way to find its corresponding recipe set ID is to look in the beaker database.
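
For anyone needing to do the same thing, something along these lines will dig the recipe set ID out of the database (the connection URL and the recipe.recipe_set_id column name here are assumptions, so check them against your schema before relying on this):

import sqlalchemy

# Adjust the URL for your Beaker database; this one is just a placeholder.
engine = sqlalchemy.create_engine('mysql://beaker:beaker@localhost/beaker')

deleted_recipe_id = 12345  # the recipe you can no longer see in the web UI

with engine.connect() as conn:
    row = conn.execute(
        sqlalchemy.text('SELECT recipe_set_id FROM recipe WHERE id = :rid'),
        {'rid': deleted_recipe_id}).fetchone()
    if row is not None:
        # This is the ID to pass to the client, e.g. bkr job-cancel RS:<id>
        print('RS:%d' % row[0])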

Comment 3 Ales Zelinka 2011-06-24 14:55:02 UTC
Thanks for returning the machines for me. Would it make sense to scan the DB for other recipes whose watchdogs have a NULL kill_time, just in case other machines are currently blocked by this issue?

Comment 4 Dan Callaghan 2011-06-27 03:15:41 UTC
(In reply to comment #3)

Yes, we will keep an eye out for similarly "stuck" systems in the database, until we can fix this properly.

Comment 5 Dan Callaghan 2011-07-28 05:27:56 UTC
This SQL query will find watchdogs corresponding to completed recipes:

SELECT watchdog.*, recipe.status_id FROM watchdog INNER JOIN recipe ON watchdog.recipe_id = recipe.id WHERE kill_time IS NULL AND recipe.status_id >= 7;

In our Beaker instance that finds 13 "stuck" watchdogs, which I have just cleaned up. So this is definitely a persistent problem.

Comment 6 Dan Callaghan 2011-08-16 01:05:51 UTC
Could this be a race condition due to transaction isolation? In one transaction beakerd is scheduling the recipe and assigning it a system, while at the same time the user has cancelled the recipe, so another transaction calls recipe.release_system() -- but that transaction sees that no system has been assigned yet, so the release becomes a no-op. Then the transactions are serialised and we get a Cancelled recipe with a system and watchdog still assigned to it.

At the very least, we should probably lock the recipe row in release_system() before we check whether it has a system. Though this makes me wonder where else we might have similar race conditions -- should we actually be SELECTing recipes FOR UPDATE at the top of every loop in beakerd? And what about recipe sets and jobs?...
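
Roughly the kind of locking I mean, in SQLAlchemy terms (with_for_update() assumes SQLAlchemy 0.9 or later -- older versions spell it with_lockmode('update') -- and recipe_cls stands in for our Recipe model):

def release_system(session, recipe_cls, recipe_id):
    # SELECT ... FOR UPDATE on the recipe row, so a concurrent transaction
    # that is scheduling this recipe cannot assign it a system in between
    # our check and our update.
    recipe = (session.query(recipe_cls)
              .filter(recipe_cls.id == recipe_id)
              .with_for_update()
              .one())
    if recipe.system is not None:
        recipe.system.user = None  # hypothetical "return the system" step
        recipe.system = None

That only papers over this particular window, though; it doesn't answer the broader question about the beakerd loops.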

Comment 7 Matt Brodeur 2011-09-22 15:27:19 UTC
*** Bug 740397 has been marked as a duplicate of this bug. ***

Comment 8 Steven Lawrance 2011-09-23 01:48:14 UTC
I just triggered this on stage, cancelling a job via the web UI as beakerd was scheduling it.

2011-09-22 21:19:54,252 beakerd INFO recipe ID 7441 moved from Processed to Queued
2011-09-22 21:19:54,336 beakerd DEBUG Exiting processed_recipes routine
2011-09-22 21:20:00,634 beakerd DEBUG Entering queued_recipes routine
2011-09-22 21:20:00,704 beakerd DEBUG System : a is available for Recipe 7441
2011-09-22 21:20:05,513 beakerd DEBUG Created watchdog for recipe id: 7441 and system: a
2011-09-22 21:20:05,533 beakerd INFO recipe ID 7441 moved from Queued to Scheduled
2011-09-22 21:20:05,636 beakerd DEBUG Exiting queued_recipes routine
2011-09-22 21:20:05,672 beakerd DEBUG Entering scheduled_recipes routine
2011-09-22 21:20:05,672 beakerd INFO scheduled_recipes: RS:5303
2011-09-22 21:20:06,935 beakerd ERROR Failed to commit in scheduled_recipes
...
OperationalError: (OperationalError) (1213, 'Deadlock found when trying to get lock; try restarting transaction') u'UPDATE job SET status_id=%s WHERE job.id = %s' [6L, 3813L]

Cancelling the RS freed up the machine and got it moving again.

Comment 9 Bill Peck 2011-10-20 12:55:37 UTC
*** Bug 746955 has been marked as a duplicate of this bug. ***

Comment 10 Dan Callaghan 2011-10-31 22:15:20 UTC
*** Bug 749743 has been marked as a duplicate of this bug. ***

Comment 11 Dan Callaghan 2011-11-23 02:10:01 UTC
The reason we are prone to deadlocks here is that when we update the status of a job or any of its components, we are not always operating at the same level in the hierarchy. Cancelling happens at the job or recipe set level, and so the status change is made there first and then bubbles down, whereas beakerd operates (mostly) at the recipe level and so the change begins at the recipe and bubbles up and down.

A golden rule of writing concurrent code is to always acquire resources in the same order in each thread, to avoid deadlock. Unfortunately we are not doing that at present.

I think the cleanest solution is to store and update the status only at the recipe-task level, and make the status of everything else (recipe, recipe set, job) be computed from those. It remains to be seen if we can do that without compromising performance or efficiency.
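
To illustrate the shape of what I'm proposing (purely a sketch -- the status names and the min() rule below are simplifications, and the real rules for Cancelled/Aborted would need more thought):

# Sketch: parent status as a pure function of child statuses, so there is
# no stored parent column for two transactions to both UPDATE and deadlock on.
STATUS_ORDER = ['New', 'Processed', 'Queued', 'Scheduled', 'Waiting',
                'Running', 'Completed', 'Cancelled', 'Aborted']

def combined_status(statuses):
    # Simplified rule: the parent is only as far along as its
    # least-finished child (one Running task keeps the recipe Running).
    return min(statuses, key=STATUS_ORDER.index)

class RecipeSketch(object):
    def __init__(self, tasks):
        self.tasks = tasks

    @property
    def status(self):
        return combined_status(task.status for task in self.tasks)

Whether that can be made cheap enough (the status columns are queried constantly) is the part that remains to be seen.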

Comment 12 Dan Callaghan 2011-11-23 02:14:32 UTC
(In reply to comment #11)
> I think the cleanest solution is to store and update the status only at the
> recipe-task level, and make the status of everything else (recipe, recipe set,
> job) be computed from those. It remains to be seen if we can do that without
> compromising performance or efficiency.

This would also allow us to re-order our code such that operations on recipes are serialized properly, preventing the situation where beakerd assigns a system to a recipe at the same time as a user cancels it, which is what this bug is really about.

Comment 14 Dan Callaghan 2012-10-26 03:25:18 UTC
*** Bug 663698 has been marked as a duplicate of this bug. ***

Comment 15 Dan Callaghan 2012-10-26 03:32:20 UTC
*** Bug 711686 has been marked as a duplicate of this bug. ***

Comment 16 Dan Callaghan 2013-03-17 22:36:38 UTC
*** Bug 921898 has been marked as a duplicate of this bug. ***

Comment 17 Dan Callaghan 2013-03-20 05:31:40 UTC
I am going to dupe this onto bug 807237. It's the same underlying cause (a race condition in updating recipe status) even though the end result is different in this bug. I have a test case which reproduces this one, and the solution for bug 807237 fixes it too.

*** This bug has been marked as a duplicate of bug 807237 ***

