Bug 1305045 - HostedEngine Vm priority queue from LOW -> HIGH failed
Status: CLOSED DUPLICATE of bug 1305330
Product: ovirt-engine
Classification: oVirt
Component: BLL.HostedEngine
3.6.2.6
x86_64 Linux
unspecified Severity medium
Assigned To: Roy Golan
Ilanit Stein
sla
Depends On:
Blocks:
Reported: 2016-02-05 07:38 EST by Luiz Goncalves
Modified: 2017-05-11 05:29 EDT (History)
4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-02-17 06:45:25 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments
engine logs (6.20 MB, text/plain)
2016-02-05 10:06 EST, Luiz Goncalves

Description Luiz Goncalves 2016-02-05 07:38:02 EST
Description of problem:

I tried to change the priority of the HostedEngine VM in order to make migration of this VM faster.

Version-Release number of selected component (if applicable):

3.6.2

How reproducible:


Steps to Reproduce:
1. Edit the HostedEngine VM
2. On the High Availability tab, open Priority for Run/Migration queue
3. Select High
4. Click OK

Actual results:

Cannot migrate VM. There is no host that satisfies current scheduling constraints. See below for details:
The host hosted_engine_2 did not satisfy internal filter HA because it is not a Hosted Engine host..
The host hosted_engine_1 did not satisfy internal filter Migration because it currently hosts the VM..


Expected results:

The priority change to High is accepted, so migration of the VM would be scheduled sooner.

Additional info:

The HA checkbox was not selected. This VM is managed by the ha agent, so it is a special type of VM; it would therefore be useful to disable the unused controls in the UI.
Comment 1 Luiz Goncalves 2016-02-05 08:30:13 EST
Look at the scores:


[root@kvm1 ~]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : kvm1.brightsquid.com
Host ID                            : 1
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 1d58ff2b
Host timestamp                     : 16211


--== Host 2 status ==--

Status up-to-date                  : True
Hostname                           : kvm2.brightsquid.com
Host ID                            : 2
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 17c12b7c
Host timestamp                     : 44504
[root@kvm1 ~]#



[root@kvm2 ~]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : kvm1.brightsquid.com
Host ID                            : 1
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : c1dec158
Host timestamp                     : 16244


--== Host 2 status ==--

Status up-to-date                  : True
Hostname                           : kvm2.brightsquid.com
Host ID                            : 2
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 21c1afac
Host timestamp                     : 44536
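The status blocks above can be checked programmatically instead of by eye. The sketch below is a minimal, illustrative parser for the `hosted-engine --vm-status` text output: it pulls out the Hostname, Score, and Engine status fields as printed by the 3.6 tool (field names are assumed from the output pasted above; other versions may format differently).

```python
# Minimal sketch: parse Hostname, Score, and Engine status from the
# plain-text output of `hosted-engine --vm-status` (3.6-era format,
# as shown in this bug). Not an official API; the tool's text output
# is not guaranteed stable across versions.
import json


def parse_vm_status(text):
    """Return {hostname: {"score": int, "engine_status": dict}}."""
    hosts = {}
    current = {}
    for line in text.splitlines():
        if ":" not in line:
            continue  # skip separators like "--== Host 1 status ==--"
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "Hostname":
            current = {}
            hosts[value] = current
        elif key == "Score":
            current["score"] = int(value)
        elif key == "Engine status":
            # The value is a JSON object, e.g. {"health": "good", ...}
            current["engine_status"] = json.loads(value)
    return hosts
```

A monitoring script could use this to alert when a host's score drops below 3400 or its engine health is not "good".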
Comment 2 Luiz Goncalves 2016-02-05 08:33:25 EST
If I'm not wrong, before I tried to change this queue priority, the VM migrated automatically when one hosted-engine host went down. Now, even with a VM configured as "HA", the VMs no longer migrate automatically; however, I can migrate them manually and things work fine.
Comment 3 Simone Tiraboschi 2016-02-05 08:36:13 EST
The user sees Hosted Engine HA: Active (Score: 3400) for both hosts in the engine.
But the engine refuses to migrate its VM with: 'The host hosted_engine_2 did not satisfy internal filter HA because it is not a Hosted Engine host..'
Comment 4 Luiz Goncalves 2016-02-05 09:40:24 EST
If I'm not wrong, before I tried to change this queue priority, the VM migrated automatically when one hosted-engine host went down. Now, even with a VM configured as "HA", the VMs no longer migrate automatically; manual migration still works, though.
Comment 5 Luiz Goncalves 2016-02-05 10:06 EST
Created attachment 1121405 [details]
engine logs
Comment 6 Roy Golan 2016-02-07 03:49:50 EST
I agree that the HA controls should be hidden/disabled for the engine VM; I opened Bug 1305330 for that.

The engine's HA feature requires a running engine. The engine "HA" feature is different from hosted-engine "HA", because the latter is enforced by the ovirt-ha-agent on the host.

So, if you took down the host running the hosted engine (is this what you did?), then the engine was not running at that moment. Migration of the engine VM is manual only, i.e. you have to right-click and choose "Migrate to", because we don't want that VM to be picked up by our load balancer and start migrating around.
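The two filter names in the original error message can be modeled to show why no candidate host was left. The sketch below is purely illustrative (it is not ovirt-engine's actual scheduling code): the "HA" filter keeps only hosts registered as hosted-engine hosts, and the "Migration" filter drops the host the VM currently runs on.

```python
# Illustrative model of the two internal scheduling filters named in
# the error: "HA" (host must be a hosted-engine host) and "Migration"
# (host must not be the one currently running the VM). This is a
# simplified sketch, not the real ovirt-engine implementation.
class Host:
    def __init__(self, name, is_hosted_engine_host):
        self.name = name
        self.is_hosted_engine_host = is_hosted_engine_host


def ha_filter(hosts):
    return [h for h in hosts if h.is_hosted_engine_host]


def migration_filter(hosts, current_host_name):
    return [h for h in hosts if h.name != current_host_name]


def candidate_hosts(hosts, current_host_name):
    return migration_filter(ha_filter(hosts), current_host_name)
```

In the situation from comment 3, the engine did not consider hosted_engine_2 a hosted-engine host, so the HA filter removed it and the Migration filter removed hosted_engine_1, leaving no candidates and producing "There is no host that satisfies current scheduling constraints."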
Comment 7 Yaniv Kaul 2016-02-11 07:48:30 EST
I don't see why this is a high-severity issue. We do need to hide this option for the HE VM, though.
Comment 8 Roy Golan 2016-02-17 06:45:25 EST

*** This bug has been marked as a duplicate of bug 1305330 ***
