Description of problem:
I tried to change the priority of the HostedEngine VM in order to make migration of this VM faster.

Version-Release number of selected component (if applicable):
3.6.2

How reproducible:

Steps to Reproduce:
1. Edit the HostedEngine VM
2. On the High Availability tab -> Priority for Run/Migration queue
3. Select High
4. Click OK

Actual results:
Cannot migrate VM. There is no host that satisfies current scheduling constraints. See below for details:
The host hosted_engine_2 did not satisfy internal filter HA because it is not a Hosted Engine host.
The host hosted_engine_1 did not satisfy internal filter Migration because it currently hosts the VM.

Expected results:
The priority is accepted and the Run/Migration queue priority changes to High, so that migration would be faster.

Additional info:
The HA checkbox was not selected. This VM is managed by the HA agent, so it is a special type of VM; it would therefore be useful to disable the unused controls in the UI.
Look at the score.

[root@kvm1 ~]# hosted-engine --vm-status

--== Host 1 status ==--

Status up-to-date              : True
Hostname                       : kvm1.brightsquid.com
Host ID                        : 1
Engine status                  : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                          : 3400
stopped                        : False
Local maintenance              : False
crc32                          : 1d58ff2b
Host timestamp                 : 16211

--== Host 2 status ==--

Status up-to-date              : True
Hostname                       : kvm2.brightsquid.com
Host ID                        : 2
Engine status                  : {"health": "good", "vm": "up", "detail": "up"}
Score                          : 3400
stopped                        : False
Local maintenance              : False
crc32                          : 17c12b7c
Host timestamp                 : 44504

[root@kvm1 ~]#

[root@kvm2 ~]# hosted-engine --vm-status

--== Host 1 status ==--

Status up-to-date              : True
Hostname                       : kvm1.brightsquid.com
Host ID                        : 1
Engine status                  : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                          : 3400
stopped                        : False
Local maintenance              : False
crc32                          : c1dec158
Host timestamp                 : 16244

--== Host 2 status ==--

Status up-to-date              : True
Hostname                       : kvm2.brightsquid.com
Host ID                        : 2
Engine status                  : {"health": "good", "vm": "up", "detail": "up"}
Score                          : 3400
stopped                        : False
Local maintenance              : False
crc32                          : 21c1afac
Host timestamp                 : 44536
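For reference, the per-host HA scores in status output like the above can be pulled out with a small script. This is only a sketch for inspecting saved output, not part of the hosted-engine tooling itself; the sample file and field layout are assumptions based on the dump above.

```shell
# Sketch: extract "Hostname Score" pairs from a saved copy of
# `hosted-engine --vm-status` output (here written to a temp file
# for illustration; normally you would save the real command output).
cat > /tmp/vm-status.txt <<'EOF'
--== Host 1 status ==--
Hostname                       : kvm1.brightsquid.com
Score                          : 3400
--== Host 2 status ==--
Hostname                       : kvm2.brightsquid.com
Score                          : 3400
EOF

# Remember the last Hostname seen, then print it next to each Score.
awk -F' *: *' '/^Hostname/ {h=$2} /^Score/ {print h, $2}' /tmp/vm-status.txt
```

With both hosts at the same score (3400), neither side is penalized, which is why the refusal to migrate here comes from the engine's scheduling filter rather than from the HA agent's scoring.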
If I'm not wrong, before trying to change this queue priority the VM was migrating automatically when one hosted_engine host was down. Now, even with a VM configured as "HA", the VMs are not migrating automatically; however, I can migrate the VMs manually and that works fine.
The user sees Hosted Engine HA: Active (Score: 3400) for both hosts in the engine, but the engine refuses to migrate its VM with: 'The host hosted_engine_2 did not satisfy internal filter HA because it is not a Hosted Engine host.'
Created attachment 1121405 [details] engine logs
Agree with you that the HA controls should be hidden/disabled for the engine VM; opened Bug 1305330. The HA feature in the engine requires a running engine. The engine "HA" feature is different from the hosted engine "HA", because the latter is handled by the ovirt-ha-agent on the host. So, if you took down the host running the hosted engine (is this what you did?), then the engine isn't running at that moment. Migration of the engine VM is manual, i.e. you have to right-click and choose "Migrate to", because we don't want that VM to be picked up by our load balancer and start migrating around.
I don't see why this is a high-severity issue. We do need to hide this for the HE VM, though.
*** This bug has been marked as a duplicate of bug 1305330 ***