Bug 660598 - [vdsm] failed migration leaves vm in pause although qemu is already dead
Summary: [vdsm] failed migration leaves vm in pause although qemu is already dead
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.1
Hardware: x86_64
OS: Linux
Target Milestone: rc
Assignee: Eduardo Warszawski
QA Contact: yeylon@redhat.com
Depends On:
Reported: 2010-12-07 09:29 UTC by Haim
Modified: 2016-04-18 06:35 UTC
CC List: 11 users

Fixed In Version: vdsm-4.9-51
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2011-12-06 07:03:23 UTC
Target Upstream Version:

Attachments
vdsm log on source server (1.53 MB, application/x-gzip)
2010-12-07 09:29 UTC, Haim

Links:
Red Hat Product Errata RHEA-2011:1782 (normal, SHIPPED_LIVE): new packages: vdsm. Last updated 2011-12-06 11:55:51 UTC.

Description Haim 2010-12-07 09:29:12 UTC
Created attachment 465205: vdsm log on source server

Description of problem:

A failed migration leaves the VM in the Paused state even though the qemu process died during the migration.

 [root@nott-vds4 ~]# vdsClient -s 0 list table
940341c9-7074-4f20-9de1-dd05e295d0f2  14237  fedora13-pool-05     Paused 

[root@nott-vds4 ~]# virsh list
 Id Name                 State
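
For reference, the mismatch between vdsm's view (Paused) and libvirt's (no domain at all) can be checked programmatically. A minimal sketch using the libvirt Python bindings, written as a standalone probe rather than vdsm code:

import libvirt

UUID = '940341c9-7074-4f20-9de1-dd05e295d0f2'

conn = libvirt.open('qemu:///system')
try:
    conn.lookupByUUIDString(UUID)   # raises if libvirt no longer tracks the domain
    print('libvirt still tracks the domain')
except libvirt.libvirtError as e:
    if e.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
        # same error as in the traceback below: the domain is gone,
        # so the Paused state reported by vdsm is stale
        print('no domain with matching uuid; vdsm state is stale')
    else:
        raise
finally:
    conn.close()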

Thread-40786::DEBUG::2010-12-07 10:31:39,110::clientIF::40::vds::(wrapper) []::call migrate with ({'src': 'nott-vds4.qa.lab.tlv.redhat.com', 'dst': '
nott-vds1.qa.lab.tlv.redhat.com:54321', 'vmId': '940341c9-7074-4f20-9de1-dd05e295d0f2', 'method': 'online'},) {}
Thread-40786::DEBUG::2010-12-07 10:31:39,111::clientIF::334::vds::(migrate) {'src': 'nott-vds4.qa.lab.tlv.redhat.com', 'dst': 'nott-vds1.qa.lab.tlv.redhat.com:
54321', 'vmId': '940341c9-7074-4f20-9de1-dd05e295d0f2', 'method': 'online'}
Thread-40787::DEBUG::2010-12-07 10:31:39,113::vm::261::vds.vmlog.940341c9-7074-4f20-9de1-dd05e295d0f2::(_setupVdsConnection) Destination server is: https://not
Thread-40786::DEBUG::2010-12-07 10:31:39,113::clientIF::45::vds::(wrapper) return migrate with {'status': {'message': 'Migration process starting', 'code': 0}}
Thread-40787::DEBUG::2010-12-07 10:31:39,113::vm::263::vds.vmlog.940341c9-7074-4f20-9de1-dd05e295d0f2::(_setupVdsConnection) Initiating connection with destina
Thread-40785::DEBUG::2010-12-07 10:31:39,142::vm::317::vds.vmlog.ea8d9158-5255-49bb-8c06-acdd2b5a3210::(_prepareGuest) migration Process begins

Thread-40787::ERROR::2010-12-07 10:31:39,421::vm::325::vds.vmlog.940341c9-7074-4f20-9de1-dd05e295d0f2::(_recover) Domain not found: no domain with matching uuid '940341c9-7074-4f20-9de1-dd05e295d0f2'
Thread-40787::ERROR::2010-12-07 10:31:39,512::vm::451::vds.vmlog.940341c9-7074-4f20-9de1-dd05e295d0f2::(run) Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 430, in run
  File "/usr/share/vdsm/libvirtvm.py", line 89, in _setupRemoteMachineParams
    self._machineParams['_srcDomXML'] = self._vm._dom.XMLDesc(0)
  File "/usr/share/vdsm/libvirtvm.py", line 146, in f
    raise e
libvirtError: Domain not found: no domain with matching uuid '940341c9-7074-4f20-9de1-dd05e295d0f2'
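
The _recover path logs the libvirtError but evidently leaves the VM in Paused. Below is a minimal sketch of the kind of handling that would avoid the stale state; the methods set_down and cont on the vm object are hypothetical names for illustration, and the actual change shipped in vdsm-4.9-51 may differ:

import libvirt

def recover_from_failed_migration(vm):
    # Probe whether libvirt still knows the domain before choosing
    # the fallback state (sketch).
    try:
        vm._dom.XMLDesc(0)
    except libvirt.libvirtError as e:
        if e.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
            # qemu died during the migration: report Down, not Paused
            vm.set_down(exit_message='qemu process died during migration')
            return
        raise
    # The domain is still alive on the source, so resuming it is valid.
    vm.cont()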

On the destination server:

Thread-47125::DEBUG::2010-12-07 10:28:40,979::clientIF::40::vds::(wrapper) []::call getVmStats with ('940341c9-7074-4f20-9de1-dd05e295d0f2',) {}
Thread-47125::DEBUG::2010-12-07 10:28:40,981::clientIF::45::vds::(wrapper) return getVmStats with {'status': {'message': 'Virtual machine does not exist', 'code': 1}}

Thread-47135::DEBUG::2010-12-07 10:28:41,119::clientIF::40::vds::(wrapper) []::call destroy with ('940341c9-7074-4f20-9de1-dd05e295d0f2',) {}
Thread-47135::DEBUG::2010-12-07 10:28:41,120::clientIF::45::vds::(wrapper) return destroy with {'status': {'message': 'Virtual machine does not exist', 'code': 1}}

The problem is that I can't tell why the migration failed on the destination server, as the libvirt logs are not sufficient.
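
(In case it helps future debugging: libvirtd's own logging can be turned up in /etc/libvirt/libvirtd.conf. These are standard libvirt options; the exact values below are just an example, and libvirtd must be restarted afterwards.)

# /etc/libvirt/libvirtd.conf
log_level = 1                                          # 1 = DEBUG
log_filters = "1:qemu 1:libvirt"
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"

# then:
# service libvirtd restart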

Reproduction steps (not conclusive):

1) Perform a migration.


Comment 3 David Naori 2011-02-28 16:54:13 UTC
Verified on vdsm-4.9-51.

Comment 4 errata-xmlrpc 2011-12-06 07:03:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

