I can reproduce this in production. The exception from server-debug.log is:

2012-06-26 01:07:24,134 cherrypy.msg INFO HTTP: Page handler: <bound method Tasks.do_search of <bkr.server.tasks.Tasks object at 0x7f42da62b710>>
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/CherryPy-2.3.0-py2.6.egg/cherrypy/_cphttptools.py", line 121, in _run
    self.main()
  File "/usr/lib/python2.6/site-packages/CherryPy-2.3.0-py2.6.egg/cherrypy/_cphttptools.py", line 264, in main
    body = page_handler(*virtual_path, **self.params)
  File "<string>", line 3, in do_search
  File "/usr/lib/python2.6/site-packages/turbogears/controllers.py", line 361, in expose
    *args, **kw)
  File "<generated code>", line 0, in run_with_transaction
  File "/usr/lib/python2.6/site-packages/peak/rules/core.py", line 153, in __call__
    return self.body(*args, **kw)
  File "/usr/lib/python2.6/site-packages/turbogears/database.py", line 458, in sa_rwt
    retval = func(*args, **kw)
  File "/usr/lib/python2.6/site-packages/turbogears/controllers.py", line 244, in _expose
    @abstract()
  File "<generated code>", line 0, in _expose
  File "/usr/lib/python2.6/site-packages/peak/rules/core.py", line 153, in __call__
    return self.body(*args, **kw)
  File "/usr/lib/python2.6/site-packages/turbogears/controllers.py", line 390, in <lambda>
    fragment, options, args, kw)))
  File "/usr/lib/python2.6/site-packages/turbogears/controllers.py", line 425, in _execute_func
    output = errorhandling.try_call(func, *args, **kw)
  File "/usr/lib/python2.6/site-packages/turbogears/errorhandling.py", line 89, in try_call
    restart_transaction(1)
  File "/usr/lib/python2.6/site-packages/turbogears/database.py", line 397, in restart_transaction
    @abstract()
  File "<generated code>", line 0, in restart_transaction
  File "/usr/lib/python2.6/site-packages/peak/rules/core.py", line 153, in __call__
    return self.body(*args, **kw)
  File "/usr/lib/python2.6/site-packages/turbogears/database.py", line 492, in sa_restart_transaction
    session.rollback()
  File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/scoping.py", line 139, in do
    return getattr(self.registry(), name)(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py", line 583, in rollback
    self.transaction.rollback()
  File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py", line 411, in rollback
    transaction._rollback_impl()
  File "/usr/lib64/python2.6/site-packages/sqlalchemy/orm/session.py", line 424, in _rollback_impl
    t[1].rollback()
  File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 1524, in rollback
    self._do_rollback()
  File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 1558, in _do_rollback
    self.connection._rollback_impl()
  File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 1088, in _rollback_impl
    self._handle_dbapi_exception(e, None, None, None, None)
  File "/usr/lib64/python2.6/site-packages/sqlalchemy/engine/base.py", line 1085, in _rollback_impl
    self.engine.dialect.do_rollback(self.connection)
  File "/usr/lib64/python2.6/site-packages/sqlalchemy/dialects/mysql/base.py", line 1657, in do_rollback
    connection.rollback()
OperationalError: (OperationalError) (2013, 'Lost connection to MySQL server during query') None None

which is a little worrying... We will have to see if there is something about the query/queries being issued that makes MySQL sad.
*** Bug 833292 has been marked as a duplicate of this bug. ***
Using Mike's original reproducer (Jan, I wasn't able to reproduce yours) I can easily get my local devbox to chew up 10G of RSS. The query returns over 3 million rows, and combined with the (over)eager loading that causes obvious problems. For the time being I'm suggesting we roll back the relevant change introduced by 751330. With that change reverted, the above query only chews up about 300MB. In the not too distant future we should probably look at this more closely and see if we can come up with a better solution.
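For illustration only, here is a minimal self-contained SQLAlchemy sketch (not Beaker's actual models; the class, column, and relationship names are assumptions) of why an eagerly loaded relationship amplifies memory use on a huge result set, and how a per-query lazyload() option avoids it:

from sqlalchemy import create_engine, Column, Integer, String, ForeignKey
from sqlalchemy.orm import relationship, sessionmaker, lazyload
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Recipe(Base):
    __tablename__ = 'recipe'
    id = Column(Integer, primary_key=True)

class RecipeTask(Base):
    __tablename__ = 'recipe_task'
    id = Column(Integer, primary_key=True)
    name = Column(String(255))
    recipe_id = Column(Integer, ForeignKey('recipe.id'))
    # lazy='joined' means every query for RecipeTask also joins and hydrates Recipe
    recipe = relationship(Recipe, lazy='joined')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

# Eager by default: each of the (potentially millions of) RecipeTask rows
# also builds a Recipe object in memory.
tasks = session.query(RecipeTask).all()

# Per-query override: skip the eager load when the related objects aren't needed.
tasks = session.query(RecipeTask).options(lazyload('recipe')).all()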
I just noticed that /tasks/executed is quite slow, especially with no filter criteria, because it suffers from the same problem as in bug 835912: MySQL picks a suboptimal query plan since we join from RecipeTask -> Recipe -> RecipeSet -> Job in order to filter out deleted jobs. I'm not sure whether this is contributing to the other problems seen here, though. We can probably speed it up by using the same trick again (converting the join into an EXISTS clause).
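A rough sketch of that conversion, assuming ORM mappings along the lines of the toy models above extended with RecipeSet and Job classes and many-to-one relationships named recipe, recipeset, and job (all assumptions, not the real Beaker schema):

# Join-based form: MySQL has to pick a join order across four tables, and in
# bug 835912 it chose a poor plan for the equivalent query.
query = session.query(RecipeTask) \
    .join(Recipe).join(RecipeSet).join(Job) \
    .filter(Job.deleted == None)

# EXISTS-based form: has() turns the filter into a correlated EXISTS subquery,
# which nudges MySQL toward driving the query from recipe_task instead.
query = session.query(RecipeTask).filter(
    RecipeTask.recipe.has(
        Recipe.recipeset.has(
            RecipeSet.job.has(Job.deleted == None))))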
Beaker 0.9.2 has been released.