Bug 500468 - vms in a bad state
Product: Virtualization Tools
Classification: Community
Component: ovirt-server-suite
Hardware: All
OS: Linux
Priority: low
Severity: medium
Assigned To: Ian Main
Reported: 2009-05-12 15:53 EDT by Mike McGrath
Modified: 2014-07-06 15:31 EDT (History)
CC List: 4 users

Doc Type: Bug Fix
Last Closed: 2014-01-20 09:39:11 EST

Attachments: None
Description Mike McGrath 2009-05-12 15:53:08 EDT
I've been seeing this problem for a while but am only now getting around to opening a ticket: VMs are getting stuck in bad states. For example, right now I have a guest in state "running" that has no host listed. If I click on the guest and click Power Off, I get this in the taskomatic log:

INFO Tue May 12 19:36:50 +0000 2009 (2606) starting task_shutdown_or_destroy_vm
ERROR Tue May 12 19:36:50 +0000 2009 (2606) VM already shut down?
INFO Tue May 12 19:36:50 +0000 2009 (2606) done

Yet the VM stays in state "running".
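The log suggests taskomatic notices the domain is already gone ("VM already shut down?") but returns without ever reconciling the database record, which would explain the stale "running" state. A minimal sketch of the reconciliation a fix might need, using a hypothetical stand-in struct rather than the real ActiveRecord model and method names:

```ruby
# Hypothetical sketch, not the real task_shutdown_or_destroy_vm: when the
# hypervisor reports the domain no longer exists, update our own record
# instead of logging and bailing out with the stale state left behind.
Vm = Struct.new(:description, :state, :host, keyword_init: true)

def shutdown_or_destroy_vm(vm, domain_exists)
  unless domain_exists
    # Current behavior appears to be: log "VM already shut down?" and
    # return, leaving state untouched. Reconcile the record instead:
    vm.state = "stopped"
    vm.host = nil
    return :reconciled
  end
  # ... normal shutdown/destroy path would go here ...
  :shut_down
end
```

With this shape, a power-off on an already-dead guest would at least bring the database back in sync instead of leaving the guest stuck in "running".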

Additionally, I have another guest in state "stopped", but it has a host assigned to it. When I click Start I get:

INFO Tue May 12 19:51:23 +0000 2009 (2606) starting task_start_vm
INFO Tue May 12 19:51:24 +0000 2009 (2606) VM will be started on node cnode3.fedoraproject.org

==> db-omatic.log <==
INFO Tue May 12 19:51:26 +0000 2009 (2328) New object type pool

==> taskomatic.log <==
ERROR Tue May 12 19:51:27 +0000 2009 (2606) Task action processing failed: RuntimeError: Unable to find volume fedora42 attached to pool guests01.
ERROR Tue May 12 19:51:27 +0000 2009 (2606) /usr/share/ovirt-server/task-omatic/taskomatic.rb:211:in `connect_storage_pools'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:182:in `each'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:182:in `connect_storage_pools'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:345:in `task_start_vm'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:862:in `mainloop'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:848:in `each'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:848:in `mainloop'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:826:in `loop'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:826:in `mainloop'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:915
INFO Tue May 12 19:51:27 +0000 2009 (2606) done

Even though I do have the volume:

    fedora42   guests01   -wi---   1.00G
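Since the logical volume does exist on the node, one plausible explanation is that taskomatic's view of the storage pool is stale and a refresh would make the volume visible again. A hedged sketch of that retry-after-refresh idea, written against a generic pool object (responding to `#name`, `#refresh`, and `#volume_names`) rather than the actual ruby-libvirt API used in `connect_storage_pools`:

```ruby
# Hypothetical sketch: retry a volume lookup after refreshing the pool,
# since stale pool metadata can report a volume as missing even though
# the backing LVM volume is present. The pool interface here is a
# stand-in, not taskomatic's real storage code.
def find_volume(pool, name)
  return name if pool.volume_names.include?(name)
  pool.refresh                      # re-scan the backing storage
  return name if pool.volume_names.include?(name)
  raise "Unable to find volume #{name} attached to pool #{pool.name}"
end
```

If the real bug is a stale pool view, a refresh at this point would make `task_start_vm` succeed instead of raising the RuntimeError shown above.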
Comment 1 Hugh Brock 2009-05-22 11:19:53 EDT
Sounds like the libvirt-leaks-guests bug in our ancient libvirt version. We need to get on latest libvirt (which does not leak guests) and see if it's still a problem.
Comment 2 Ian Main 2009-05-27 19:00:49 EDT
There is also an issue with dbomatic leaving the host set when a node goes down. This shouldn't cause huge issues, but it is not correct behavior. I have also recently added a patch to dbomatic to use save! instead of save in its database handling, which will cause it to throw an exception on error; currently it just fails silently. It is possible there is a problem setting states in dbomatic and we're just not hearing about it (I've seen this when doing out-of-the-box things, for sure). We'll have to see whether the exceptions reveal the real cause of this bug.
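The save-vs-save! distinction Ian describes is worth spelling out: in ActiveRecord, `save` returns false on a validation failure (which a caller can silently ignore), while `save!` raises, so the failure shows up in the logs. A tiny illustration using a stand-in class rather than ActiveRecord itself:

```ruby
# Illustration only: a stand-in for an ActiveRecord model showing why
# switching dbomatic from save to save! surfaces silent failures.
class Record
  attr_accessor :state

  def valid?
    !state.nil?           # pretend "state" is a required field
  end

  def save
    valid?                # returns false on failure; easy to ignore
  end

  def save!
    raise "RecordInvalid: state is required" unless valid?
    true
  end
end
```

With `save`, a bad state update in dbomatic just evaluates to false and the stale row persists; with `save!`, the same failure raises and becomes visible, which is exactly the diagnostic Ian is after here.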
Comment 3 Cole Robinson 2014-01-20 09:39:11 EST
This bugzilla product/component combination is no longer used: ovirt bugs are tracked under the bugzilla product 'oVirt'. If this bug is still valid, please reopen and set the correct product/component.
