Bug 952587 - Scheduling deadlock for multihost tests with pending status updates
Summary: Scheduling deadlock for multihost tests with pending status updates
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Beaker
Classification: Community
Component: scheduler
Version: 0.12
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: 0.13.x
Assignee: Raymond Mancy
QA Contact: xjia
URL:
Whiteboard: Scheduler
Depends On: 889065
Blocks:
 
Reported: 2013-04-16 09:07 UTC by Nick Coghlan
Modified: 2018-02-06 00:41 UTC
CC List: 14 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 889065
Environment:
Last Closed: 2013-07-11 02:44:37 UTC



Description Nick Coghlan 2013-04-16 09:07:25 UTC
+++ This bug was initially created as a clone of Bug #889065 +++

The ABBA scheduling deadlock described in #889065 can still occur in some situations: a recipe with an assigned system may still deadlock with a recipe that arrives later but happens to be processed while the original recipe is in the middle of a status update.

--- Additional comment from Raymond Mancy on 2013-04-05 17:03:48 EST ---

I think this happened because of this patch http://gerrit.beaker-project.org/#/c/1855/.

My theory is that once the first system, systemB, is reserved by multi-host recipeset1, that recipeset is taken out of the scheduled_recipes_queue() loop as it is dirty, so multi-host recipeset2 can take systemA, as it does not have to contend with recipeset1 (and if it did, it would lose, since it has lower priority, which was the whole idea behind the original patch here).

As we can't be sure when the recipesets will be valid again for consideration in the scheduled_recipes_queue(), this problem could pop up randomly.
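
For illustration, here is a minimal, self-contained sketch of the ABBA ordering described above (the names are simplified stand-ins, not the actual scheduler code):

    # Simplified illustration of the ABBA ordering: recipeset1 reserves
    # systemB, then goes dirty and drops out of the scheduling pass, so
    # recipeset2 is free to reserve systemA.  Each recipe set now holds one
    # system the other needs, and neither multihost recipe set can start.

    systems = {'systemA': None, 'systemB': None}   # system -> current holder

    def try_reserve(recipeset, system):
        if systems[system] is None:
            systems[system] = recipeset
            return True
        return False

    # Pass 1: recipeset1 grabs systemB, then becomes dirty and is skipped
    # for the rest of this pass (and for an unknown number of later passes).
    try_reserve('recipeset1', 'systemB')

    # Pass 2: recipeset1 is still dirty, so recipeset2 does not have to
    # contend with it and takes systemA.
    try_reserve('recipeset2', 'systemA')

    # Deadlock: recipeset1 needs systemA (held by recipeset2) and
    # recipeset2 needs systemB (held by recipeset1).
    print(systems)   # {'systemA': 'recipeset2', 'systemB': 'recipeset1'}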

Still haven't given much thought as to a good way of fixing this.

--- Additional comment from Nick Coghlan on 2013-04-08 16:30:13 EST ---

And we can't block waiting for the recipeset state to get updated, or we get the abort race back again :P

I hate to say it, but I think your idea of scanning for deadlocked recipes is the least-bad option we have available at this point :(

Comment 1 Raymond Mancy 2013-06-13 06:04:05 UTC
Nick has offered some suggestions for dealing with the systems on the scheduler side, so we don't have to make expensive database calls. This may be a viable solution.

Comment 2 Raymond Mancy 2013-06-16 22:45:06 UTC
http://gerrit.beaker-project.org/2027

Comment 3 Nick Coghlan 2013-06-17 05:22:58 UTC
So, in reviewing the patch, we (mostly dcallagh) realised that the core problem here is the fact that "update_dirty_jobs()" runs in a different thread from the main scheduling loop. This means that after calling queue_processed_recipes() in the main loop, some of the queued recipes *may not* yet be marked as queued; they'll be marked as processed+dirty instead.
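
To make the race concrete, here is a minimal, self-contained sketch (stand-in classes and functions, not the real Beaker model or scheduler API) of the window in which a recipe is still processed+dirty when the scheduler looks for queued recipes:

    # The main loop flags the job dirty when it queues a processed recipe,
    # but the recipe only becomes Queued once a *separate* thread applies the
    # pending status update, so the next scheduling step can miss it.

    import threading

    class FakeRecipe:
        def __init__(self):
            self.status = 'Processed'
            self.dirty = False

    def queue_processed_recipes(recipe):
        # Main scheduling thread: records the transition by marking the recipe dirty.
        recipe.dirty = True

    def update_dirty_jobs(recipe):
        # Separate thread: applies the pending status update some time later.
        recipe.status = 'Queued'
        recipe.dirty = False

    def schedule_queued_recipes(recipe):
        # Only considers recipes that are already Queued and clean.
        return recipe.status == 'Queued' and not recipe.dirty

    recipe = FakeRecipe()
    queue_processed_recipes(recipe)
    # The recipe is logically ready, but still Processed+dirty, so it is skipped.
    print(schedule_queued_recipes(recipe))    # False

    updater = threading.Thread(target=update_dirty_jobs, args=(recipe,))
    updater.start()
    updater.join()
    print(schedule_queued_recipes(recipe))    # True, but only after the other thread ran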

We have a longer term plan to make this whole arrangement more sensible (see http://beaker-project.org/dev/proposals/event-driven-scheduler.html), but we want a simpler near term fix.

Dan's suggestion (and I now agree) is that we drop the separate "update_dirty_jobs_loop" and instead change main_recipes_loop to look something like this:

    @log_traceback(log)
    def main_recipes_loop(*args, **kwargs):
        while running:
            updated = update_dirty_jobs()
            abort_dead_recipes()
            updated = update_dirty_jobs() or updated
            if _virt_enabled():
                virt = provision_virt_recipes()
            else:
                virt = False
            updated = update_dirty_jobs() or updated
            queued = schedule_queued_recipes()
            updated = update_dirty_jobs() or updated
            scheduled = provision_scheduled_recipesets()
            updated = update_dirty_jobs() or updated
            if not virt and not queued and not scheduled and not updated:
                event.wait()
        log.debug("main recipes thread exiting")


It's a clumsy fix, but it will eliminate this issue and should also eliminate the database deadlock that can occur when both threads try to update the system_recipe_map table.

Comment 4 Raymond Mancy 2013-06-17 21:28:30 UTC
(In reply to Nick Coghlan from comment #3)
> So, in reviewing the patch, we (mostly dcallagh) realised that the core
> problem here is the fact that "update_dirty_jobs()" runs in a different
> thread from the main scheduling loop. This means that after calling
> queue_processed_recipes() in the main loop, some of the queued recipes *may
> not* yet be marked as queued; they'll be marked as processed+dirty instead.
> 

This has 'always' (well, since 0.12) been the case, and was by design. This BZ is needed exactly because of this behaviour.

> We have a longer term plan to make this whole arrangement more sensible (see
> http://beaker-project.org/dev/proposals/event-driven-scheduler.html), but we
> want a simpler near term fix.
> 
> Dan's suggestion (and I now agree) is that we drop the separate
> "update_dirty_jobs_loop" and instead change main_recipes_loop to look
> something like this:
> 
>     @log_traceback(log)
>     def main_recipes_loop(*args, **kwargs):
>         while running:
>             updated = update_dirty_jobs()
>             abort_dead_recipes()
>             updated = update_dirty_jobs() or updated
>             if _virt_enabled():
>                 virt = provision_virt_recipes()
>             else:
>                 virt = False
>             updated = update_dirty_jobs() or updated
>             queued = schedule_queued_recipes()
>             updated = update_dirty_jobs() or updated
>             scheduled = provision_scheduled_recipesets()
>             updated = update_dirty_jobs() or updated
>             if not virt and not queued and not scheduled and not updated:
>                 event.wait()
>         log.debug("main recipes thread exiting")
> 
> 
> It's a clumsy fix, but it will eliminate this issue and should also
> eliminate the database deadlock that can occur when both threads try to
> update the system_recipe_map table.

I think you meant to say "when both threads try to lock the same system record".

Did you mean to include process_new_recipes() in this loop? That will need to be in there to fix the deadlocking issue as well.

I guess the obvious problem with this is that serialising it means longer processing times. This is better than the deadlocks though.
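
For illustration, here is a self-contained toy version of the serialised loop with process_new_recipes() folded in as suggested above; every step function here is a stub, and the exact placement of process_new_recipes() is an assumption rather than the merged change:

    import threading

    running = True
    event = threading.Event()

    # Stubs standing in for the real scheduler steps; each returns whether it
    # found any work to do.  Here they all report "no work" so the loop goes idle.
    def process_new_recipes(): return False
    def update_dirty_jobs(): return False
    def abort_dead_recipes(): return False
    def _virt_enabled(): return False
    def provision_virt_recipes(): return False
    def schedule_queued_recipes(): return False
    def provision_scheduled_recipesets(): return False

    def main_recipes_loop():
        while running:
            # process_new_recipes() is added here as suggested above; its exact
            # position is an assumption.  It is interleaved with update_dirty_jobs()
            # like every other step, so pending status updates are applied in
            # this same thread before the next step runs.
            new = process_new_recipes()
            updated = update_dirty_jobs()
            abort_dead_recipes()
            updated = update_dirty_jobs() or updated
            if _virt_enabled():
                virt = provision_virt_recipes()
            else:
                virt = False
            updated = update_dirty_jobs() or updated
            queued = schedule_queued_recipes()
            updated = update_dirty_jobs() or updated
            scheduled = provision_scheduled_recipesets()
            updated = update_dirty_jobs() or updated
            if not new and not virt and not queued and not scheduled and not updated:
                event.wait()

    # Start the loop in its own scheduler thread, then stop it so the example terminates.
    worker = threading.Thread(target=main_recipes_loop)
    worker.start()
    running = False
    event.set()
    worker.join()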

Comment 5 Raymond Mancy 2013-06-20 07:43:21 UTC
http://gerrit.beaker-project.org/#/c/2045/

Comment 9 Raymond Mancy 2013-06-27 23:59:01 UTC
FWIW I don't think this is a proper verification. However, given the way the scheduler has been changed, the original conditions cannot be reproduced. So I think it's fine for this to stay 'VERIFIED'.

Comment 10 Amit Saha 2013-07-11 02:44:37 UTC
Beaker 0.13.2 has been released. (http://beaker-project.org/docs/whats-new/release-0.13.html#beaker-0-13-2).

