Related to bug 1293007, we are trying to recover from a job containing two recipes which accidentally reported a very large number of results in an infinite loop (31457 total results in each recipe, 63543 total result logs in each recipe). beaker-log-delete struggles to delete such a large job, because it has to load the entire job object graph into memory, including N db roundtrips for each result's logs. For example, right now it has spent 49 CPU minutes and consumed 1511MB virtual, 361MB resident trying to load the large job, and I'm not sure how much longer it will need... beaker-log-delete does allow applying a limit on the total number of jobs to delete in one run, but that doesn't help when a single job is very large. It needs to be smarter about loading the logs, ideally in batches, without requiring the entire job object graph to be fetched from the database. Bonus points if it can query all the logs in O(1) instead of O(N) db roundtrips.
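To illustrate the idea: instead of walking the job object graph and issuing one query per result, a single query over the log table can be streamed in fixed-size batches. This is a minimal sketch using sqlite3 and an invented `result_log(job_id, path)` table; it is not Beaker's actual schema or the patch's implementation, just the batching pattern being proposed.

```python
import sqlite3

def iter_log_paths(conn, job_id, batch_size=1000):
    """Yield log paths for a job in fixed-size batches.

    One query is issued up front (O(1) roundtrips to start the scan),
    and rows are pulled through the cursor in chunks of batch_size, so
    memory use stays bounded regardless of how many logs the job has.
    """
    cur = conn.execute(
        "SELECT path FROM result_log WHERE job_id = ?", (job_id,))
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        for (path,) in rows:
            yield path

# Demo against an in-memory database (illustrative schema only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE result_log (job_id INTEGER, path TEXT)")
conn.executemany("INSERT INTO result_log VALUES (1, ?)",
                 [("/logs/%d.log" % i,) for i in range(2500)])
paths = list(iter_log_paths(conn, 1, batch_size=1000))
print(len(paths))  # 2500
```

The same shape applies with an ORM by selecting only the log columns and streaming the result set, rather than hydrating every result object first.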
First patch is up: https://gerrit.beaker-project.org/#/c/4574/1
Dan fixed regressions found while working on this, and his patch on Gerrit has been merged: https://gerrit.beaker-project.org/#/c/4719/
Beaker 23.0 has been released.