Bug 500468 - vms in a bad state
Summary: vms in a bad state
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Virtualization Tools
Classification: Community
Component: ovirt-server-suite
Version: unspecified
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Ian Main
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-05-12 19:53 UTC by Mike McGrath
Modified: 2014-07-06 19:31 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-01-20 14:39:11 UTC
Embargoed:


Attachments: none

Description Mike McGrath 2009-05-12 19:53:08 UTC
I've been seeing this problem for a while but am only just now getting around to opening a ticket about it: VMs are getting stuck in bad states.  For example, right now I've got a guest in state "running" that has no host listed.  If I click on the guest and then click power off, I get this in the taskomatic log:

INFO Tue May 12 19:36:50 +0000 2009 (2606) starting task_shutdown_or_destroy_vm
ERROR Tue May 12 19:36:50 +0000 2009 (2606) VM already shut down?
INFO Tue May 12 19:36:50 +0000 2009 (2606) done

Yet the VM stays in state "running".
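
A sketch of the kind of manual inspection/cleanup this seems to call for, from the Rails console (script/console in the ovirt-server app).  The model and column names below (Vm, state, host_id) are my guesses at the schema, not something I've verified:

  # Hypothetical Rails-console poke at the wedged guest; names are guesses.
  vm = Vm.find(:first, :conditions => ['description = ?', 'stuck-guest'])
  puts vm.state.inspect     # shows "running" even though nothing is running
  puts vm.host_id.inspect   # nil, despite the "running" state
  # resetting by hand as a workaround:
  vm.state = 'stopped'
  vm.host_id = nil
  vm.save!                  # save! so a validation failure raises instead of failing silently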

Additionally, I have another guest in state "stopped" that has a host assigned to it.  When I click start I get:


INFO Tue May 12 19:51:23 +0000 2009 (2606) starting task_start_vm
INFO Tue May 12 19:51:24 +0000 2009 (2606) VM will be started on node cnode3.fedoraproject.org

==> db-omatic.log <==
INFO Tue May 12 19:51:26 +0000 2009 (2328) New object type pool

==> taskomatic.log <==
ERROR Tue May 12 19:51:27 +0000 2009 (2606) Task action processing failed: RuntimeError: Unable to find volume fedora42 attached to pool guests01.
ERROR Tue May 12 19:51:27 +0000 2009 (2606) /usr/share/ovirt-server/task-omatic/taskomatic.rb:211:in `connect_storage_pools'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:182:in `each'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:182:in `connect_storage_pools'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:345:in `task_start_vm'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:862:in `mainloop'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:848:in `each'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:848:in `mainloop'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:826:in `loop'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:826:in `mainloop'
    /usr/share/ovirt-server/task-omatic/taskomatic.rb:915
INFO Tue May 12 19:51:27 +0000 2009 (2606) done



Even though lvs shows that the volume does exist:

  fedora42   guests01   -wi---   1.00G
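
In case it helps narrow things down, here's roughly how I'd expect to check what libvirt on the node itself sees, using the same ruby-libvirt bindings taskomatic uses.  The pool and volume names are the ones from the log above; the connection URI and the pool refresh are just my assumptions:

  require 'libvirt'

  # Connect to libvirtd on the node the VM was scheduled onto.
  conn = Libvirt::open('qemu+tcp://cnode3.fedoraproject.org/system')

  pool = conn.lookup_storage_pool_by_name('guests01')
  pool.refresh                      # re-scan the pool in case libvirt's view is stale
  puts pool.list_volumes.inspect    # 'fedora42' should show up here if libvirt can see the LV

  conn.close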

Comment 1 Hugh Brock 2009-05-22 15:19:53 UTC
Sounds like the libvirt-leaks-guests bug in our ancient libvirt version. We need to get on latest libvirt (which does not leak guests) and see if it's still a problem.

Comment 2 Ian Main 2009-05-27 23:00:49 UTC
There is also an issue with dbomatic leaving the host set when a node goes down.  This shouldn't cause huge issues, but it is not correct behavior.  I have also recently added a patch to dbomatic to use save! instead of save in its database handling, which will make it throw an exception on error; currently it just fails silently.  It is possible there is a problem setting states in dbomatic and we're just not hearing about it (I've certainly seen this when doing out-of-the-box things).  We'll have to see whether the exceptions show the real cause of this bug.
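
To make the save vs. save! difference concrete (this is just standard ActiveRecord behavior; Vm below stands in for whichever model dbomatic is updating):

  vm.state = 'stopped'

  vm.save    # returns false on a validation/DB error -- easy to ignore, so the failure is silent
  vm.save!   # raises (e.g. ActiveRecord::RecordInvalid) on error, so the failure
             # shows up in the db-omatic log instead of disappearing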

Comment 3 Cole Robinson 2014-01-20 14:39:11 UTC
This bugzilla product/component combination is no longer used: ovirt bugs are tracked under the bugzilla product 'oVirt'. If this bug is still valid, please reopen and set the correct product/component.

