| Summary: | Failure to live migrate: domain not found | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Ohad Basan <obasan> |
| Component: | vdsm | Assignee: | Michal Skrivanek <michal.skrivanek> |
| Status: | CLOSED INSUFFICIENT_DATA | QA Contact: | meital avital <mavital> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 3.3.0 | CC: | amureini, bazulay, eedri, hateya, iheim, lpeer, lsvaty, obasan, rcyriac, sgotliv, yeylon |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | 3.3.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | virt | ||
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-09-13 08:53:42 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Sergey, aren't you handling a similar issue already?

(In reply to Allon Mureinik from comment #3)
> Sergey, aren't you handling a similar issue already?

No. The exception in teardownImage appears to be a logging issue only, since it is caught and handled. According to the bug description, the issue happened on 21/08 at 14:40, but the attached vdsm logs only contain information up to 21/08 14:32. Ohad, can you attach the rest of the log, please?

I tested the virsh connection from nari12 to nari13 yesterday and it seems to be working OK. Do you still see the migration issue?

I don't see it anymore... most likely an intermittent connectivity issue or firewall settings.
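The comment above notes that the teardownImage exception is caught and handled, so the traceback is log noise rather than a failure. A minimal sketch of that pattern (hypothetical helper and argument names, not the actual vdsm code):

```python
import logging

log = logging.getLogger("teardown-sketch")


def teardown_volume_path(irs, drive):
    """Tear down a drive's image; skip drives that carry no image keys."""
    try:
        # A drive dict without 'domainID' (e.g. {'path': ''}) raises
        # KeyError here, matching the traceback in the description.
        return irs.teardownImage(drive['domainID'],
                                 drive['poolID'],
                                 drive['imageID'])
    except KeyError as err:
        # Caught and handled: logged, never propagated to the caller.
        log.warning("teardown skipped, drive has no %s key", err)
        return None
```

Because the KeyError is swallowed after logging, a drive with no managed image (such as the `{'path': ''}` entry in the log) produces only the warning, which is why the commenter calls it a log-only issue.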
Description of problem:

I have an iSCSI DC with one active storage domain and two active hosts. When I try to live migrate the VM from one host to the other, I receive the following errors:

```
cParams:{'path': ''} truesize:0 type:disk volExtensionChunk:1024 watermarkLimit:536870912
Traceback (most recent call last):
  File "/usr/share/vdsm/clientIF.py", line 355, in teardownVolumePath
    res = self.irs.teardownImage(drive['domainID'],
  File "/usr/share/vdsm/vm.py", line 1343, in __getitem__
    raise KeyError(key)
KeyError: 'domainID'
Thread-2166::ERROR::2013-08-21 14:40:51,241::vm::2062::vm.Vm::(_startUnderlyingVm) vmId=`6b2a7374-3c3f-4c0e-8c06-ea583e303fda`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 2040, in _startUnderlyingVm
    self._waitForIncomingMigrationFinish()
  File "/usr/share/vdsm/vm.py", line 3364, in _waitForIncomingMigrationFinish
    self._connection.lookupByUUIDString(self.id),
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 76, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2838, in lookupByUUIDString
    if ret is None:raise libvirtError('virDomainLookupByUUIDString() failed', conn=self)
libvirtError: Domain not found: no domain with matching uuid '6b2a7374-3c3f-4c0e-8c06-ea583e303fda'
Thread-2166::DEBUG::2013-08-21 14:40:51,245::vm::2452::vm.Vm::(setDownStatus) vmId=`6b2a7374-3c3f-4c0e-8c06-ea583e303fda`::Changed state to Down: Domain not found: no domain with matching uuid '6b2a7374-3c3f-4c0e-8c06-ea583e303fda'
```

One of the hosts was previously connected to an NFS DC with an NFS storage domain, so perhaps something was not cleaned up properly.
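The second traceback above shows the destination host asking libvirt for the incoming domain by UUID and failing because the migration never created it; vdsm then sets the VM to Down with that reason. A rough stand-in for that flow, using stub objects rather than the real libvirt bindings (all class and function names here are hypothetical):

```python
class LibvirtError(Exception):
    """Stand-in for libvirt.libvirtError."""


class FakeConnection:
    """Stub for a libvirt connection holding domains keyed by UUID string."""

    def __init__(self, domains):
        self._domains = domains

    def lookupByUUIDString(self, uuid):
        # Real libvirt raises libvirtError "Domain not found: no domain
        # with matching uuid ..." in exactly this situation.
        if uuid not in self._domains:
            raise LibvirtError(
                "Domain not found: no domain with matching uuid '%s'" % uuid)
        return self._domains[uuid]


def wait_for_incoming_migration(conn, vm_uuid):
    """Return ('Up', domain) if the migrated-in domain exists,
    else ('Down', reason), mirroring the setDownStatus line in the log."""
    try:
        return ('Up', conn.lookupByUUIDString(vm_uuid))
    except LibvirtError as err:
        return ('Down', str(err))
```

In the log above the lookup fails, so the VM transitions straight to Down with the "no domain with matching uuid" reason, which is consistent with the later comment that the root cause was likely connectivity or firewall settings between the two hosts rather than a storage problem.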