Bug 798707

Summary: [ovirt] [vdsm] migration: VM status stuck on 'migrationDestination' for several minutes although the VM is running on libvirt
Product: [Retired] oVirt
Reporter: Haim <hateya>
Component: vdsm
Assignee: Dan Kenigsberg <danken>
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: unspecified    
Version: unspecified
CC: abaron, acathrow, bazulay, iheim, jkt, lpeer, mgoldboi, michal.skrivanek, yeylon
Target Milestone: ---   
Target Release: ---   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard: virt
Doc Type: Bug Fix
Last Closed: 2013-08-19 10:18:17 UTC
Attachments:
vdsm log

Description Haim 2012-02-29 16:03:08 UTC
Description of problem:

- Run a VM with an OS installed.
- Migrate the VM from host A to host B.
- The VM moves to state unknown ('?') for several minutes.
- The VM moves to state UP.

Notes:

When I query the logs, I see that vmGetStats is called for the VM, and vdsm keeps returning the following for 4-5 minutes:

Thread-3998::DEBUG::2012-02-29 17:46:19,596::BindingXMLRPC::854::vds::(wrapper) return vmGetStats with {'status': {'message': 'Done', 'code': 0}, 'statsList': [{'status': 'Migration Destination', 'hash': '0', 'acpiEnable': 'true', 'pid'
Thread-4002::DEBUG::2012-02-29 17:46:21,617::BindingXMLRPC::848::vds::(wrapper) client [10.35.97.30]::call vmGetStats with ('52cb68fd-00bb-4dbe-a17c-b63c372daf16',) {} flowID [93b8240]

Please note that during that time, the VM's status on libvirt is 'running'.
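
A minimal sketch of how the discrepancy can be confirmed on the destination host, assuming libvirt-python is installed there; the connection URI and polling interval below are illustrative assumptions, and the UUID is the one from the getVmStats call in the log above:

    # Sketch only: checks the libvirt side of the discrepancy. The vdsm side is
    # what getVmStats keeps returning in the log above ('Migration Destination').
    import time
    import libvirt

    VM_UUID = '52cb68fd-00bb-4dbe-a17c-b63c372daf16'  # UUID from the getVmStats call above

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString(VM_UUID)

    for _ in range(10):
        state, _reason = dom.state()
        # libvirt already reports VIR_DOMAIN_RUNNING for the domain while vdsm
        # still reports 'Migration Destination' for the same VM.
        print('libvirt state:', 'running' if state == libvirt.VIR_DOMAIN_RUNNING else state)
        time.sleep(30)

    conn.close()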

git hash: cc3662eb4c59a4c68577828beeebbcb5fbf93f96

Comment 1 Haim 2012-02-29 16:05:44 UTC
Created attachment 566579 [details]
vdsm log

Comment 2 Dan Kenigsberg 2012-04-18 20:45:15 UTC
Reproducibility? How often does this happen?

Comment 3 Michal Skrivanek 2013-08-19 10:18:17 UTC
This doesn't seem to be happening anymore. There were a lot of monitoring-related changes in the engine in the meantime.
Please reopen if still relevant and supply the engine log.