Description of problem:
[root@preethi ~]# rpm -q pulp
pulp-0.0.190-1.fc14.noarch
[root@preethi ~]#

Not sure if this is a bug or if there is a way to do it that I don't know about. If a task is scheduled, there is no way to get rid of it unless you go and drop the snapshots collection from the database.

Here is my use case:
1. I scheduled a package install on a consumer group which did not have any consumers associated with it.

The only way I could schedule another package install on the group was after deleting the snapshot table:
1) /etc/init.d/httpd stop
2) mongo
3) use pulp_database;
4) db.task_snapshots.drop();
5) exit
6) /etc/init.d/httpd start
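For reference, a minimal sketch of the same workaround driven from Python with pymongo (the script name and connection details are illustrative assumptions, not part of Pulp; older pymongo releases expose Connection instead of MongoClient):

# drop_task_snapshots.py -- illustrative only; mirrors the manual mongo steps above.
# Assumes pymongo is installed and mongod is listening on localhost:27017.
from pymongo import MongoClient

def drop_task_snapshots():
    client = MongoClient("localhost", 27017)   # same server the mongo shell talks to
    db = client["pulp_database"]               # database used in the manual steps
    db.drop_collection("task_snapshots")       # equivalent of db.task_snapshots.drop()
    client.close()

if __name__ == "__main__":
    # Stop httpd before running this and start it again afterwards,
    # exactly as in the manual procedure.
    drop_task_snapshots()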
If you have the 'task' command enabled, you should be able to do this. First, you'll want to remove the snapshot and then remove the task. If the task is currently running, you may also need to cancel it.
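A rough sketch of that sequence scripted from Python, wrapping the same pulp-admin task commands shown later in this bug (the helper names and the lack of error handling are illustrative assumptions):

# cleanup_task.py -- illustrative wrapper around the pulp-admin task commands.
import subprocess

def pulp_admin(*args):
    # Run a pulp-admin subcommand and echo its output.
    out = subprocess.check_output(["pulp-admin"] + list(args))
    print(out)

def cleanup_task(task_id, running=False):
    pulp_admin("task", "delete_snapshot", "--id=%s" % task_id)  # remove the snapshot first
    pulp_admin("task", "remove", "--id=%s" % task_id)           # then set the task for removal
    if running:
        pulp_admin("task", "cancel", "--id=%s" % task_id)       # cancel it if it is still running

if __name__ == "__main__":
    cleanup_task("80558a63-b3aa-11e0-b191-002564a85a58", running=True)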
Fixed in 0.202
fails_qa
Same as in https://bugzilla.redhat.com/show_bug.cgi?id=715091

[root@preethi ~]# rpm -q pulp
pulp-0.0.212-1.fc14.noarch

Here is the use case I followed:
1. Ran CDS sync.
2. While the CDS sync was running, stopped goferd on the CDS.
3. Restarted pulp-cds.
4. Ran CDS sync again:
[root@preethi ~]# pulp-admin cds sync --hostname=pulp-cds.usersys.redhat.com
error: operation failed: Sync already in process for CDS [pulp-cds.usersys.redhat.com]
5. Ran task list:
[root@preethi ~]# pulp-admin task list
Task: 73d0dc59-b3aa-11e0-9f87-002564a85a58
Scheduler: interval
Call: cull_history
Arguments:
State: waiting
Start time: None
Finish time: None
Scheduled time: 2011-07-22T05:00:00Z
Result: None
Exception: None
Traceback: None

Task: 73d0c466-b3aa-11e0-9f86-002564a85a58
Scheduler: interval
Call: cull_audited_events
Arguments:
State: waiting
Start time: None
Finish time: None
Scheduled time: 2011-07-22T01:00:00Z
Result: None
Exception: None
Traceback: None

Task: 80558a63-b3aa-11e0-b191-002564a85a58
Scheduler: immediate
Call: CdsApi.cds_sync
Arguments: pulp-cds.usersys.redhat.com
State: running
Start time: 2011-07-21T11:02:50-04:00
Finish time: None
Scheduled time: 2011-07-21T15:02:50Z
Result: None
Exception: None
Traceback: None

6. Deleted the task snapshot:
[root@preethi ~]# pulp-admin task delete_snapshot --id=80558a63-b3aa-11e0-b191-002564a85a58
Snapshot for task [80558a63-b3aa-11e0-b191-002564a85a58] deleted
7. Removed the task:
[root@preethi ~]# pulp-admin task remove --id=80558a63-b3aa-11e0-b191-002564a85a58
Task [80558a63-b3aa-11e0-b191-002564a85a58] set for removal
8. Canceled the task:
[root@preethi ~]# pulp-admin task cancel --id=80558a63-b3aa-11e0-b191-002564a85a58
Task [80558a63-b3aa-11e0-b191-002564a85a58] canceled
9. Ran task list again and the task is still hanging there.

From pulp.log:

2011-07-21 11:04:55,033 28824:140175678088960: pulp.server.tasking.task:WARNING: task:404 Deprecated base class Task.cancel() called for [Task 80558a63-b3aa-11e0-b191-002564a85a58: CdsApi.cds_sync(pulp-cds.usersys.redhat.com, )]

2011-07-21 11:04:57,739 28824:140175583282944: gofer.messaging.consumer:ERROR: consumer:387 aa8b56b8-9c33-4dde-aa08-3fb997f6b3e6
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/gofer/messaging/consumer.py", line 382, in __fetch
    return self.__receiver.fetch(timeout=timeout)
  File "<string>", line 8, in fetch
  File "/usr/lib64/python2.7/threading.py", line 137, in release
    raise RuntimeError("cannot release un-acquired lock")
RuntimeError: cannot release un-acquired lock

2011-07-21 11:04:57,783 28824:140175583282944: pulp.server.api.cds:ERROR: cds:585 CDS threw an error during sync to CDS [pulp-cds.usersys.redhat.com]
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/pulp/server/api/cds.py", line 568, in cds_sync
    self.dispatcher.sync(cds, payload)
  File "/usr/lib/python2.7/site-packages/pulp/server/cds/dispatcher.py", line 138, in sync
    self._send(stub.sync, data)
  File "/usr/lib/python2.7/site-packages/pulp/server/cds/dispatcher.py", line 170, in _send
    result = func(*args)
  File "/usr/lib/python2.7/site-packages/gofer/messaging/stub.py", line 71, in __call__
    return self.stub._send(request, opts)
  File "/usr/lib/python2.7/site-packages/gofer/messaging/stub.py", line 142, in _send
    any=opts.any)
  File "/usr/lib/python2.7/site-packages/gofer/messaging/policy.py", line 123, in send
    reader.close()
  File "/usr/lib/python2.7/site-packages/gofer/messaging/consumer.py", line 316, in close
    self.__receiver.close()
  File "<string>", line 6, in close
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 1040, in close
    try:
CdsMethodException

2011-07-21 11:04:57,788 28824:140175583282944: pulp.server.tasking.task:ERROR: task:381 Task failed: Task 80558a63-b3aa-11e0-b191-002564a85a58: CdsApi.cds_sync(pulp-cds.usersys.redhat.com, )
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/pulp/server/tasking/task.py", line 330, in run
    result = self.callable(*self.args, **self.kwargs)
  File "/usr/lib/python2.7/site-packages/pulp/server/api/cds.py", line 568, in cds_sync
    self.dispatcher.sync(cds, payload)
  File "/usr/lib/python2.7/site-packages/pulp/server/cds/dispatcher.py", line 138, in sync
    self._send(stub.sync, data)
  File "/usr/lib/python2.7/site-packages/pulp/server/cds/dispatcher.py", line 170, in _send
    result = func(*args)
  File "/usr/lib/python2.7/site-packages/gofer/messaging/stub.py", line 71, in __call__
    return self.stub._send(request, opts)
  File "/usr/lib/python2.7/site-packages/gofer/messaging/stub.py", line 142, in _send
    any=opts.any)
  File "/usr/lib/python2.7/site-packages/gofer/messaging/policy.py", line 123, in send
    reader.close()
  File "/usr/lib/python2.7/site-packages/gofer/messaging/consumer.py", line 316, in close
    self.__receiver.close()
  File "<string>", line 6, in close
  File "/usr/lib/python2.7/site-packages/qpid/messaging/endpoints.py", line 1040, in close
    try:
PulpException: 'Error on the CDS during sync; check the server log for more information'
It appears the gofer errors in the log are a result of the exception injection used to kill the task. The side effects this has on locking are well known. Since this is a task issue and strongly related to https://bugzilla.redhat.com/show_bug.cgi?id=715091 (already assigned to jconnor), I'm reassigning to jconnor.
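For context, this kind of exception injection in CPython is typically done with PyThreadState_SetAsyncExc. Below is a generic illustration of the mechanism (not Pulp's actual code; the names are made up for the example). The hazard is that the injected exception can surface at any bytecode boundary, including inside acquire/release sequences deep in library code, which is how errors like "cannot release un-acquired lock" end up in the gofer consumer.

# async_exc_demo.py -- generic illustration of asynchronous exception injection.
import ctypes
import threading
import time

class TaskCancelled(Exception):
    pass

def inject_exception(thread, exc_type):
    # Ask the interpreter to raise exc_type in the target thread at its next
    # bytecode boundary. The canonical recipe also checks the return value and
    # reverts the call if more than one thread state was affected.
    ctypes.pythonapi.PyThreadState_SetAsyncExc(
        ctypes.c_long(thread.ident), ctypes.py_object(exc_type))

lock = threading.Lock()

def worker():
    try:
        while True:
            with lock:          # the injected exception may surface anywhere in here
                time.sleep(0.1)
    except TaskCancelled:
        print("worker cancelled")

t = threading.Thread(target=worker)
t.start()
time.sleep(0.5)
inject_exception(t, TaskCancelled)
t.join()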
This will not be fixed for v1.0. The fix is coming in the coordinator effort.