Description of problem:
If I try to migrate a VM to another host in the cluster I get:

vdsm.log:Thread-4129::DEBUG::2014-06-10 13:26:32,419::migration::372::vm.Vm::(run) vmId=`5c0adc08-edad-4403-8ff9-adfd56c8a449`::migration downtime thread exiting
vdsm.log:Thread-4126::ERROR::2014-06-10 13:26:32,419::migration::160::vm.Vm::(_recover) vmId=`5c0adc08-edad-4403-8ff9-adfd56c8a449`::internal error Attempt to migrate guest to the same host localhost.localdomain
vdsm.log:Thread-4126::ERROR::2014-06-10 13:26:32,714::migration::259::vm.Vm::(run) vmId=`5c0adc08-edad-4403-8ff9-adfd56c8a449`::Failed to migrate
vdsm.log:  File "/usr/share/vdsm/virt/migration.py", line 245, in run
vdsm.log:    self._startUnderlyingMigration(time.time())
vdsm.log:  File "/usr/share/vdsm/virt/migration.py", line 324, in _startUnderlyingMigration

Version-Release number of selected component (if applicable):
libvirt-0.10.2-29.el6_5.8.x86_64
vdsm-4.15.0-92.gitd8f9cc9

How reproducible:
100%

Steps to Reproduce:
1. Set up a cluster with two hosts that have the same hostname (localhost.localdomain in my case)
2. Run a VM on one of them
3. Try to migrate the VM

Actual results:
Attempt to migrate guest to the same host localhost.localdomain

Expected results:
I would expect the VM to migrate despite the identical hostname, because it is not the same host.
libvirt and Vdsm do not support migration where the source host is also the destination host. That is an age-old limitation which I do not believe we should solve. However, I believe your issue is different: localhost.localdomain most commonly resolves to 127.0.0.1, which means that if you add a host with this name to your cluster, you will never be able to migrate to it. We should probably block adding hosts whose addresses resolve to localhost (though we can never be entirely sure).
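A minimal sketch of such a validation, for illustration only (the function name and the use of Python's standard resolver are my own; this is not Engine's or Vdsm's actual code):

```python
import ipaddress
import socket


def resolves_to_loopback(hostname):
    """Return True if any address the hostname resolves to is a
    loopback address (e.g. 127.0.0.1 or ::1)."""
    try:
        infos = socket.getaddrinfo(hostname, None)
    except socket.gaierror:
        # Unresolvable name is a separate validation concern.
        return False
    # Each getaddrinfo entry carries the address in sockaddr[0].
    return any(
        ipaddress.ip_address(info[4][0]).is_loopback
        for info in infos
    )


# On a standard /etc/hosts, "localhost" maps to a loopback address.
print(resolves_to_loopback("localhost"))
```

A check along these lines could run when a host is added, rejecting names that resolve to loopback before migration ever becomes an issue.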
The error message should be improved too, because as it is now it suggests that the duplicate hostname is the problem.
The reassignment was accidental.
(In reply to Jiri Moskovcak from comment #2)
> Even the error message should be improved, because as it this now it suggest
> that the duplicate hostname is the problem.

From Vdsm's perspective, it *is* the problem: on the source host, the name of the destination host resolves to the source host itself. This is not supported, and should be avoided at the Engine level. It should also be avoided in ovirt-hosted-engine: you should add the local host with an FQDN, not as "localhost.localnetwork"; otherwise, migration to that host will not work.
(In reply to Jiri Moskovcak from comment #0)
> Description of problem:
> If I try to migrate a VM to another host in cluster I get:
>
> vdsm.log:Thread-4129::DEBUG::2014-06-10
> 13:26:32,419::migration::372::vm.Vm::(run)
> vmId=`5c0adc08-edad-4403-8ff9-adfd56c8a449`::migration downtime thread
> exiting
> vdsm.log:Thread-4126::ERROR::2014-06-10
> 13:26:32,419::migration::160::vm.Vm::(_recover)
> vmId=`5c0adc08-edad-4403-8ff9-adfd56c8a449`::internal error Attempt to
> migrate guest to the same host localhost.localdomain

Just for the record, this error comes straight from libvirt.
engine patch in progress
engine patch un-drafted
It is very unlikely that in a production environment two different hosts will have the same hostname (I believe many other things would break in that scenario), so moving to low/low.
Moving pending bugs not fixed in 3.5.0 to 3.5.1.
*** Bug 1160703 has been marked as a duplicate of this bug. ***
no need for z-stream
I have set the hosts' hostname to local.localdoamin and ran migration. From the engine side I did not get any warning or description of the problem; I found the root cause only in vdsm.log. I also tested it with a hostname different from local.localdoamin, but identical on both hosts, and got the same results. Is this the expected behaviour? I don't see any change in the behaviour: we still fail to migrate, don't know why, and get no canDoAction message.
(In reply to Israel Pinto from comment #15)
> I have set the hosts hostname to: local.localdoamin and run migration.
> And from the engine side did not get any warning Or description of the
> problem,
> I find the root cause only in vdsm.log.
> I also test it with hostname which is different the local.localdoamin BUT
> it is the for both hosts and got the same results.
> Is this the behaviour expect? I don't see any change in the behaviour we
> still failed to migration and don't know why. No can't do action.

What you observed is the old buggy behaviour, which means the fix has not taken effect. The culprit could be that Engine uses its own host's DNS resolver to do this check. Please make sure that the host running Engine sees the same hostname for the two hosts; one way (there are many) is to change /etc/hosts *on the host running Engine*. If you changed the hostnames on the VDSM hosts but Engine's host still sees the two with different names, you can end up in the situation you reported.
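To illustrate what "Engine's resolver sees the same hostname" means in practice, here is a hypothetical check (not actual Engine code; the function name is my own) that asks this machine's resolver whether two host names map to disjoint address sets:

```python
import socket


def distinct_addresses(host_a, host_b):
    """From this machine's resolver (i.e. the Engine host's point of
    view), check whether two hostnames resolve to disjoint sets of
    addresses. If they overlap, migration between them would be
    rejected as a migration to the same host."""
    addrs_a = {info[4][0] for info in socket.getaddrinfo(host_a, None)}
    addrs_b = {info[4][0] for info in socket.getaddrinfo(host_b, None)}
    return addrs_a.isdisjoint(addrs_b)
```

Running this on the Engine host for the two cluster hosts would show whether the /etc/hosts entries there actually reproduce the overlapping-address scenario.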
Verified with:

Scenario:
Add 2 hosts with FQDNs. On the engine host, edit /etc/hosts to read:

the_real_ip_of_host_A hostA.redhat.com hostA
the_real_ip_of_host_A hostB.redhat.com hostB

Then migrate the VM.

Results:
Migration didn't start; error message (VM name rhel_guest_1):

rhel_guest_1: Cannot migrate VM. There is no host that satisfies current scheduling constraints. See below for details: The host host_1 did not satisfy internal filter Migration because it currently hosts the VM. The host host_2 did not satisfy internal filter Migration because it currently hosts the VM.

Pass
oVirt 3.6.0 has been released and the bz verified, moving to closed current release.