I've been seeing this problem for a while but am only now getting around to opening a ticket: VMs get stuck in bad states. For example, right now I have a guest in state "running" that has no host listed. If I click on the guest and then click power off, I get this in the taskomatic log:

INFO Tue May 12 19:36:50 +0000 2009 (2606) starting task_shutdown_or_destroy_vm
ERROR Tue May 12 19:36:50 +0000 2009 (2606) VM already shut down?
INFO Tue May 12 19:36:50 +0000 2009 (2606) done

Yet the VM stays in state "running".

Additionally, I have another guest in state "stopped" that has a host assigned to it. When I click start I get:

INFO Tue May 12 19:51:23 +0000 2009 (2606) starting task_start_vm
INFO Tue May 12 19:51:24 +0000 2009 (2606) VM will be started on node cnode3.fedoraproject.org

==> db-omatic.log <==
INFO Tue May 12 19:51:26 +0000 2009 (2328) New object type pool

==> taskomatic.log <==
ERROR Tue May 12 19:51:27 +0000 2009 (2606) Task action processing failed: RuntimeError: Unable to find volume fedora42 attached to pool guests01.
ERROR Tue May 12 19:51:27 +0000 2009 (2606) /usr/share/ovirt-server/task-omatic/taskomatic.rb:211:in `connect_storage_pools'
/usr/share/ovirt-server/task-omatic/taskomatic.rb:182:in `each'
/usr/share/ovirt-server/task-omatic/taskomatic.rb:182:in `connect_storage_pools'
/usr/share/ovirt-server/task-omatic/taskomatic.rb:345:in `task_start_vm'
/usr/share/ovirt-server/task-omatic/taskomatic.rb:862:in `mainloop'
/usr/share/ovirt-server/task-omatic/taskomatic.rb:848:in `each'
/usr/share/ovirt-server/task-omatic/taskomatic.rb:848:in `mainloop'
/usr/share/ovirt-server/task-omatic/taskomatic.rb:826:in `loop'
/usr/share/ovirt-server/task-omatic/taskomatic.rb:826:in `mainloop'
/usr/share/ovirt-server/task-omatic/taskomatic.rb:915
INFO Tue May 12 19:51:27 +0000 2009 (2606) done

This happens even though that volume does exist:

fedora42 guests01 -wi--- 1.00G
Sounds like the libvirt-leaks-guests bug in our ancient libvirt version. We need to move to the latest libvirt (which does not leak guests) and see whether this is still a problem.
There is also an issue where dbomatic leaves the host set when a node goes down. This shouldn't cause huge problems, but it is not correct behavior. I have also recently added a patch to dbomatic to use save! instead of save in its database handling, which will cause it to throw an exception on error; currently it just fails silently. It is possible there is a problem setting states in dbomatic and we're simply not hearing about it (I've certainly seen this when doing out-of-the-box things). We'll have to see whether the exceptions reveal the real cause of this bug.
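For context, the save/save! change follows the standard ActiveRecord convention: save returns false when persistence fails, so a caller that ignores the return value never notices, while save! raises and the failure lands in the logs. A minimal plain-Ruby sketch of that contract (the Record class and its validation rule here are hypothetical stand-ins, not dbomatic code):

```ruby
# Hypothetical stand-in illustrating ActiveRecord's save vs. save! contract.
class Record
  class RecordInvalid < StandardError; end

  def initialize(state)
    @state = state
  end

  # A toy validation: the record must have a non-empty state.
  def valid?
    !@state.nil? && !@state.empty?
  end

  # save: returns false on failure -- callers that ignore the
  # return value fail silently, which is what dbomatic was doing.
  def save
    valid?
  end

  # save!: raises on failure, so the error surfaces instead of vanishing.
  def save!
    raise RecordInvalid, "state must be set" unless valid?
    true
  end
end

good = Record.new("running")
bad  = Record.new(nil)

good.save  # => true
bad.save   # => false, easy to overlook
begin
  bad.save!
rescue Record::RecordInvalid => e
  puts "caught: #{e.message}"
end
```

With save!, a state update that used to be dropped on the floor now produces a backtrace we can actually chase.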
This bugzilla product/component combination is no longer used: oVirt bugs are tracked under the bugzilla product 'oVirt'. If this bug is still valid, please reopen it and set the correct product/component.