Bug 496676 - Migration failing with connect failed
Status: CLOSED WONTFIX
Product: Virtualization Tools
Classification: Community
Component: ovirt-server-suite
Version: unspecified
Hardware: All
OS: Linux
Priority: low
Severity: medium
Assigned To: Ian Main
Reported: 2009-04-20 12:33 EDT by Mike McGrath
Modified: 2014-07-06 15:31 EDT
CC: 6 users

Doc Type: Bug Fix
Last Closed: 2014-01-20 09:39:06 EST

Attachments: None

Description Mike McGrath 2009-04-20 12:33:39 EDT
When I click on a guest, click Migrate, and select another node to move it to, I sometimes get:

INFO Mon Apr 20 16:27:08 +0000 2009 (22400) migrate_vm
INFO Mon Apr 20 16:27:08 +0000 2009 (22400) Migrating domain lookup complete, domain is com.redhat.libvirt:domain[0-1-1-37-3] 0-1-1-37-1
ERROR Mon Apr 20 16:27:09 +0000 2009 (22400) Error: Failed to open qemu+tcp://cnode3.fedoraproject.org/system
ERROR Mon Apr 20 16:27:09 +0000 2009 (22400) Task action processing failed: Libvirt::ConnectionError: Failed to open qemu+tcp://cnode3.fedoraproject.org/system
ERROR Mon Apr 20 16:27:09 +0000 2009 (22400) /usr/share/ovirt-server/task-omatic/taskomatic.rb:507:in `open'/usr/share/ovirt-server/task-omatic/taskomatic.rb:507:in `migrate'/usr/share/ovirt-server/task-omatic/taskomatic.rb:543:in `task_migrate_vm'/usr/share/ovirt-server/task-omatic/taskomatic.rb:849:in `mainloop'/usr/share/ovirt-server/task-omatic/taskomatic.rb:825:in `each'/usr/share/ovirt-server/task-omatic/taskomatic.rb:825:in `mainloop'/usr/share/ovirt-server/task-omatic/taskomatic.rb:803:in `loop'/usr/share/ovirt-server/task-omatic/taskomatic.rb:803:in `mainloop'/usr/share/ovirt-server/task-omatic/taskomatic.rb:887
INFO Mon Apr 20 16:27:09 +0000 2009 (22400) done


Other times I get:
INFO Mon Apr 20 16:31:52 +0000 2009 (22400) migrate_vm
INFO Mon Apr 20 16:31:52 +0000 2009 (22400) Migrating domain lookup complete, domain is com.redhat.libvirt:domain[0-1-1-40-11] 0-1-1-40-1
ERROR Mon Apr 20 16:32:53 +0000 2009 (22400) Error: Type Object has no attribute 'seq'
ERROR Mon Apr 20 16:32:53 +0000 2009 (22400) Task action processing failed: RuntimeError: Type Object has no attribute 'seq'
ERROR Mon Apr 20 16:32:53 +0000 2009 (22400) /usr/lib/ruby/site_ruby/1.8/qpid/qmf.rb:1052:in `method_missing'/usr/lib/ruby/site_ruby/1.8/qpid/qmf.rb:1094:in `invoke'/usr/lib/ruby/site_ruby/1.8/qpid/qmf.rb:1035:in `method_missing'/usr/share/ovirt-server/task-omatic/./task_storage.rb:154:in `connect'/usr/share/ovirt-server/task-omatic/taskomatic.rb:197:in `connect_storage_pools'/usr/share/ovirt-server/task-omatic/taskomatic.rb:182:in `each'/usr/share/ovirt-server/task-omatic/taskomatic.rb:182:in `connect_storage_pools'/usr/share/ovirt-server/task-omatic/taskomatic.rb:501:in `migrate'/usr/share/ovirt-server/task-omatic/taskomatic.rb:543:in `task_migrate_vm'/usr/share/ovirt-server/task-omatic/taskomatic.rb:849:in `mainloop'/usr/share/ovirt-server/task-omatic/taskomatic.rb:825:in `each'/usr/share/ovirt-server/task-omatic/taskomatic.rb:825:in `mainloop'/usr/share/ovirt-server/task-omatic/taskomatic.rb:803:in `loop'/usr/share/ovirt-server/task-omatic/taskomatic.rb:803:in `mainloop'/usr/share/ovirt-server/task-omatic/taskomatic.rb:887
Comment 1 Mike McGrath 2009-04-20 12:37:08 EDT
Disregard the "Other times I get" output; that was from a different issue.  Now I get "Failed to open qemu+tcp://cnode3.fedoraproject.org/system" every time.
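Since the failure happens right at connection open, a first diagnostic step is checking whether libvirtd's TCP listener on the target node is reachable at all. A minimal Ruby sketch of such a probe (the helper name is my own; libvirtd's default plain-TCP port is 16509):

```ruby
require 'socket'
require 'timeout'

# Hypothetical helper: returns true if a TCP connection to host:port
# succeeds within the timeout. A reachability check for libvirtd's
# TCP port before digging into libvirt or Kerberos configuration.
def port_open?(host, port, seconds = 3)
  Timeout.timeout(seconds) do
    TCPSocket.new(host, port).close
    true
  end
rescue Errno::ECONNREFUSED, Errno::EHOSTUNREACH, Timeout::Error, SocketError
  false
end
```

If `port_open?('cnode3.fedoraproject.org', 16509)` returns false, the problem is network or libvirtd configuration (e.g. `listen_tcp` not enabled) rather than the migration code itself.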
Comment 2 william 2009-04-20 15:59:45 EDT
I have the same problem!
If you need someone to test, please let me know.
Comment 3 william 2009-05-09 16:06:56 EDT
I can reproduce this with the latest development build (Saturday, 9 May, 10:00 AM).
This happens when I try to migrate a VM from node80 to node92.

The first message (ActiveRecord::RecordNotFound (Couldn't find Host with ID=70195823946900)) also appears in the web UI; when I then press Migrate, the error below occurs.

Any idea?

===================================================================

==> /var/log/ovirt-server/rails.log <==


Processing HostController#quick_summary (for 192.168.50.3 at 2009-05-09 21:59:30) [POST]
  Session ID: 35cc666e65be2bca694b1fc3665e7137
  Parameters: {"action"=>"quick_summary", "id"=>"1", "controller"=>"host"}


ActiveRecord::RecordNotFound (Couldn't find Host with ID=70195823946900):
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/base.rb:1383:in `find_one'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/base.rb:1366:in `find_from_ids'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/base.rb:541:in `find'
    /app/services/host_service.rb:116:in `lookup'
    /app/services/host_service.rb:64:in `svc_show'
    /app/controllers/host_controller.rb:51:in `quick_summary'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/base.rb:1166:in `send'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/base.rb:1166:in `perform_action_without_filters'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/filters.rb:579:in `call_filters'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/filters.rb:572:in `perform_action_without_benchmark'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue'
    /usr/lib/ruby/1.8/benchmark.rb:293:in `measure'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/benchmarking.rb:68:in `perform_action_without_rescue'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/rescue.rb:201:in `perform_action_without_caching'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/caching/sql_cache.rb:13:in `perform_action'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/connection_adapters/abstract/query_cache.rb:33:in `cache'
    /usr/lib/ruby/gems/1.8/gems/activerecord-2.1.1/lib/active_record/query_cache.rb:8:in `cache'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/caching/sql_cache.rb:12:in `perform_action'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/base.rb:529:in `send'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/base.rb:529:in `process_without_filters'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/filters.rb:568:in `process_without_session_management_support'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/session_management.rb:130:in `process'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/base.rb:389:in `process'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/dispatcher.rb:149:in `handle_request'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/dispatcher.rb:107:in `dispatch'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/dispatcher.rb:104:in `synchronize'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/dispatcher.rb:104:in `dispatch'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/dispatcher.rb:120:in `dispatch_cgi'
    /usr/lib/ruby/gems/1.8/gems/actionpack-2.1.1/lib/action_controller/dispatcher.rb:35:in `dispatch'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/rails.rb:78:in `process'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/rails.rb:76:in `synchronize'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/rails.rb:76:in `process'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel.rb:618:in `process_client'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel.rb:617:in `each'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel.rb:617:in `process_client'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel.rb:736:in `run'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel.rb:736:in `initialize'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel.rb:736:in `new'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel.rb:736:in `run'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel.rb:720:in `initialize'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel.rb:720:in `new'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel.rb:720:in `run'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:271:in `run'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:270:in `each'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/configurator.rb:270:in `run'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/bin/mongrel_rails:127:in `run'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/lib/mongrel/command.rb:211:in `run'
    /usr/lib/ruby/gems/1.8/gems/mongrel-1.0.1/bin/mongrel_rails:243
    /usr/bin/mongrel_rails:16:in `load'
    /usr/bin/mongrel_rails:16

Rendering template within layouts/popup
Rendering layouts/popup-error


Processing VmController#vm_action (for 192.168.50.3 at 2009-05-09 21:59:32) [POST]
  Session ID: 35cc666e65be2bca694b1fc3665e7137
  Parameters: {"vm_action_data"=>"1", "action"=>"vm_action", "id"=>"1", "controller"=>"vm", "vm_action"=>"migrate_vm"}
Completed in 0.05280 (18 reqs/sec) | Rendering: 0.00042 (0%) | LDAP: 0.00000 (0%) | DB: 0.03741 (70%) | 200 OK [http://manager.ric.nl/ovirt/vm/vm_action]


Processing ResourcesController#vms_json (for 192.168.50.3 at 2009-05-09 21:59:33) [POST]
  Session ID: 35cc666e65be2bca694b1fc3665e7137
  Parameters: {"qtype"=>"", "sortname"=>"description", "action"=>"vms_json", "sortorder"=>"asc", "id"=>"5", "controller"=>"resources", "query"=>"", "rp"=>"40", "page"=>"1"}
Completed in 0.02364 (42 reqs/sec) | Rendering: 0.00026 (1%) | LDAP: 0.00000 (0%) | DB: 0.01162 (49%) | 200 OK [http://manager.ric.nl/ovirt/resources/vms_json/5]

==> /var/log/ovirt-server/taskomatic.log <==
INFO Sat May 09 21:59:33 +0200 2009 (4976) starting task_migrate_vm
INFO Sat May 09 21:59:33 +0200 2009 (4976) Migrating domain lookup complete, domain is com.redhat.libvirt:domain[0-1-1-22-32] 0-1-1-22-1
ERROR Sat May 09 21:59:34 +0200 2009 (4976) Error: Call to function virDomainMigrate failed
ERROR Sat May 09 21:59:34 +0200 2009 (4976) Task action processing failed: Libvirt::Error: Call to function virDomainMigrate failed
ERROR Sat May 09 21:59:34 +0200 2009 (4976) /usr/share/ovirt-server/task-omatic/taskomatic.rb:517:in `migrate'/usr/share/ovirt-server/task-omatic/taskomatic.rb:517:in `migrate'/usr/share/ovirt-server/task-omatic/taskomatic.rb:551:in `task_migrate_vm'/usr/share/ovirt-server/task-omatic/taskomatic.rb:872:in `mainloop'/usr/share/ovirt-server/task-omatic/taskomatic.rb:848:in `each'/usr/share/ovirt-server/task-omatic/taskomatic.rb:848:in `mainloop'/usr/share/ovirt-server/task-omatic/taskomatic.rb:826:in `loop'/usr/share/ovirt-server/task-omatic/taskomatic.rb:826:in `mainloop'/usr/share/ovirt-server/task-omatic/taskomatic.rb:915
INFO Sat May 09 21:59:34 +0200 2009 (4976) done
Comment 4 Ian Main 2009-05-20 17:44:49 EDT
There are probably about three separate bugs here.

The first is a WUI issue.  I'm seeing that same error, although it doesn't seem to affect the actual migration call.

Second, there is a connection issue, which is likely a Kerberos problem.

Third, there is the virDomainMigrate call failing, which is a more serious problem with libvirt itself.  Apparently there are known issues with the current libvirt that are being addressed.

I have not been able to reproduce the connection-failed problem.  I'm going to leave everything running for a while and see whether the tickets are expiring or something similar.
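If tickets really are expiring, transient open failures should succeed on a retry once a fresh ticket is obtained. A generic retry-with-backoff sketch that could distinguish the two cases (this is my own illustration, not the actual taskomatic code; the libvirt open call would go in the block):

```ruby
# Hypothetical retry helper: re-runs a block that raises on transient
# failures (e.g. an expired Kerberos ticket racing a renewal), sleeping
# between attempts. Returns the block's value, or re-raises the last
# error once the attempt budget is exhausted.
def with_retries(attempts = 3, delay = 1.0)
  tries = 0
  begin
    yield
  rescue StandardError
    tries += 1
    raise if tries >= attempts
    sleep(delay)
    retry
  end
end
```

If a wrapped `Libvirt::open` succeeds on the second or third attempt, that points at expiring credentials rather than a persistent network or libvirtd misconfiguration.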
Comment 5 Hugh Brock 2009-05-22 11:15:37 EDT
We may need to pull in Chris' "tunneled migration" patch to get qmf migration to really work properly, although it is going to change before it goes upstream in libvirt (but at least we could test the concept).
Comment 6 Ian Main 2009-05-28 12:16:01 EDT
OK, I've just made a series of changes:

- Storage pool mounting was done with random strings; this is now changed to uniquely identify the mount, and the mount point will be the same across all nodes.

- I updated to the latest libvirtd (it's in the ovirt repo).

- I modified libvirt-qpid so that it can now do the migration from node to node!

So these changes should fix all the above issues.
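The deterministic mount-point scheme described in the first item could look something like this sketch (the function name and path layout are my own illustration, not the actual oVirt code):

```ruby
require 'digest'

# Hypothetical illustration: derive the mount point from the storage
# pool's source (host + export path) instead of a random string, so the
# same pool always mounts at the same path on every node -- a property
# migration needs, since source and destination must see identical paths.
def stable_mount_point(source_host, source_path)
  key = Digest::SHA1.hexdigest("#{source_host}:#{source_path}")[0, 12]
  "/mnt/#{key}"
end
```

Any pure function of the pool's identity works here; the point is that two nodes computing the mount location independently arrive at the same answer.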
Comment 7 Cole Robinson 2014-01-20 09:39:06 EST
This bugzilla product/component combination is no longer used: ovirt bugs are tracked under the bugzilla product 'oVirt'. If this bug is still valid, please reopen and set the correct product/component.
