This first came up with the v0002.py migration script. Two developers hit failures running the script, suspected to be because the collection had grown too large. The underlying issue is that a central coordinator collection grows without bound. If this is indeed what the developers saw with the migration script, it is really bad: development environments typically see far less uptime and usage than an actual server would.
Added timestamps to archived calls, and a reaper thread to periodically clean out the ones that have aged past a configurable lifetime.
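For anyone curious what "reaper thread" means in practice, here is a minimal sketch of that shape of fix. It assumes pymongo, a UTC 'timestamp' field on each archived call, and the default 'pulp_database' database name; everything beyond the archived_calls collection name and the hours-based lifetime mentioned below is illustrative, not Pulp's actual implementation.

    import threading
    import time
    from datetime import datetime, timedelta

    from pymongo import MongoClient

    def reap_archived_calls(collection, lifetime_hours):
        # Remove every archived call whose timestamp has aged past the lifetime.
        cutoff = datetime.utcnow() - timedelta(hours=lifetime_hours)
        collection.delete_many({'timestamp': {'$lt': cutoff}})

    def start_reaper(collection, lifetime_hours, interval_seconds=60):
        # Daemon thread so the reaper dies along with the server process.
        def _loop():
            while True:
                reap_archived_calls(collection, lifetime_hours)
                time.sleep(interval_seconds)
        thread = threading.Thread(target=_loop, name='archived-call-reaper')
        thread.daemon = True
        thread.start()
        return thread

    # Wiring it up inside the server process; 'pulp_database' is an assumption.
    collection = MongoClient()['pulp_database']['archived_calls']
    start_reaper(collection, lifetime_hours=1)

With this in place the collection can no longer grow without bound: anything older than the lifetime is deleted on the next reap pass.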
build: 2.0.4
Jason - Can you give Preethi an idea of how to verify this? We should be able to tell her to set the reaper to something like 30 minutes, do some stuff, and then leave the server alone for 30 minutes and make sure there's nothing left. Can you give her the info on that config option and the mongo query to check that the collection is empty?
there's a new config value under [tasks] called: archived_call_lifetime 1:09
it's the length of time, in hours, to keep archived calls 1:10
the archived call db collection is: archived_calls 1:10
you can watch that collection grow as you execute tasks through the REST API 1:10
for instance, create a repo, sync a repo, publish a repo should result in 3 archived calls
just set the config value to something nice and low, say 0 1:11
and watch them get cleaned up
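Concretely, the verification loop described above would look something like this. The [tasks] section, archived_call_lifetime value, and archived_calls collection come from the chat; the config file path and database name are what a stock Pulp 2 install uses, but may differ on your box.

    # in /etc/pulp/server.conf (path assumed; adjust for your install)
    [tasks]
    archived_call_lifetime: 0

    # then create, sync, and publish a repo via the REST API, and in the mongo shell:
    > use pulp_database          // database name assumed
    > db.archived_calls.find()   // should come back empty once the reaper has run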
verified

[root@preethi-el6-pulp ~]# rpm -q pulp-rpm-server
pulp-rpm-server-2.0.5-1.el6.noarch
[root@preethi-el6-pulp ~]#

[tasks]
concurrency_threshold: 9
dispatch_interval: 0.5
archived_call_lifetime: 1
consumer_content_weight: 0
create_weight: 0
publish_weight: 1
sync_weight: 2

After running repo create, sync, and publish, waited for over an hour and checked the db:

db.archived_calls.find()

saw no archived calls
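For anyone repeating this verification who would rather script it than eyeball the mongo shell, a small pymongo probe does the same check. A sketch, again assuming the default 'pulp_database' database name:

    from pymongo import MongoClient

    # Connect to the local mongod that backs the Pulp server.
    db = MongoClient()['pulp_database']  # database name is an assumption

    # After archived_call_lifetime has elapsed, the collection should be empty.
    remaining = db['archived_calls'].count_documents({})
    print('archived_calls remaining: %d' % remaining)
    assert remaining == 0, 'reaper did not clean up archived calls'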
Moving these up against the 2.0 Beta so we can delete the CR-2 version from Bugzilla.
Pulp 2.0 released.