Bug 1305045 - HostedEngine Vm priority queue from LOW -> HIGH failed
Summary: HostedEngine Vm priority queue from LOW -> HIGH failed
Keywords:
Status: CLOSED DUPLICATE of bug 1305330
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.HostedEngine
Version: 3.6.2.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Roy Golan
QA Contact: Ilanit Stein
URL:
Whiteboard: sla
Depends On:
Blocks:
 
Reported: 2016-02-05 12:38 UTC by Luiz Goncalves
Modified: 2017-05-11 09:29 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-02-17 11:45:25 UTC
oVirt Team: SLA
Embargoed:
rule-engine: planning_ack?
rule-engine: devel_ack?
rule-engine: testing_ack?


Attachments
engine logs (6.20 MB, text/plain)
2016-02-05 15:06 UTC, Luiz Goncalves
no flags

Description Luiz Goncalves 2016-02-05 12:38:02 UTC
Description of problem:

I tried to change the Run/Migration queue priority of the HostedEngine VM in order to make migration of this VM faster.

Version-Release number of selected component (if applicable):

3.6.2

How reproducible:


Steps to Reproduce:
1. Edit the HostedEngine VM
2. On the High Availability tab -> Priority for Run/Migration queue
3. Select High
4. Click OK

Actual results:

Cannot migrate VM. There is no host that satisfies current scheduling constraints. See below for details:
The host hosted_engine_2 did not satisfy internal filter HA because it is not a Hosted Engine host..
The host hosted_engine_1 did not satisfy internal filter Migration because it currently hosts the VM..


Expected results:

The change is accepted and the Run/Migration queue priority is set to High, so the migration would be faster.

Additional info:

The HA checkbox was not selected. This VM is managed by the ha-agent, so it is a special type of VM; it would therefore be worthwhile to disable the unused controls in the UI.
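
For reference, the same priority change can presumably also be made outside the UI. A minimal sketch, assuming the oVirt v4 Python SDK (ovirtsdk4; the 3.6-era v3 SDK differs) and hypothetical connection details; the UI's Low/Medium/High presumably map to integer priorities such as 1/50/100:

    # Minimal sketch: set the HostedEngine VM's Run/Migration queue priority
    # via the oVirt Python SDK. Connection details below are hypothetical.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # hypothetical URL
        username='admin@internal',
        password='secret',                                  # hypothetical credentials
        ca_file='/etc/pki/ovirt-engine/ca.pem',
    )

    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=HostedEngine')[0]

    # Priority is an integer in the API; 100 is assumed here to correspond
    # to the UI's "High" (not verified against this engine version).
    vms_service.vm_service(vm.id).update(
        types.Vm(
            high_availability=types.HighAvailability(priority=100),
        ),
    )
    connection.close()

Whether the update is accepted, or rejected with the same scheduling error as in the UI, would show where the validation happens.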

Comment 1 Luiz Goncalves 2016-02-05 13:30:13 UTC
Look at the scores:


[root@kvm1 ~]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : kvm1.brightsquid.com
Host ID                            : 1
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 1d58ff2b
Host timestamp                     : 16211


--== Host 2 status ==--

Status up-to-date                  : True
Hostname                           : kvm2.brightsquid.com
Host ID                            : 2
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 17c12b7c
Host timestamp                     : 44504
[root@kvm1 ~]#



[root@kvm2 ~]# hosted-engine --vm-status


--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : kvm1.brightsquid.com
Host ID                            : 1
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : c1dec158
Host timestamp                     : 16244


--== Host 2 status ==--

Status up-to-date                  : True
Hostname                           : kvm2.brightsquid.com
Host ID                            : 2
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 21c1afac
Host timestamp                     : 44536

Comment 2 Luiz Goncalves 2016-02-05 13:33:25 UTC
If I'm not wrong, before I tried to change this queue priority the VM was migrating automatically when one hosted-engine host went down. Now, even with a VM configured as "HA", the VMs are not migrating automatically; however, I can migrate the VMs manually and things work fine.

Comment 3 Simone Tiraboschi 2016-02-05 13:36:13 UTC
The user sees Hosted Engine HA: Active (Score: 3400) for both hosts in the engine.
But the engine refuses to migrate its VM with: 'The host hosted_engine_2 did not satisfy internal filter HA because it is not a Hosted Engine host..'
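
For illustration, that internal HA filter behaves roughly like the sketch below; the names and fields are hypothetical, not the engine's actual Java code:

    def ha_filter(hosts, vm):
        # Hypothetical sketch of the engine's internal "HA" scheduling filter:
        # for a hosted-engine VM, only hosts running ovirt-ha-agent with a
        # positive score pass; any other host is rejected with the
        # "not a Hosted Engine host" message seen above.
        if not vm.is_hosted_engine:
            return hosts
        return [h for h in hosts if h.is_hosted_engine_host and h.ha_score > 0]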

Comment 4 Luiz Goncalves 2016-02-05 14:40:24 UTC
If I'm not wrong, before I tried to change this queue priority the VM was migrating automatically when one hosted-engine host went down. Now, even with a VM configured as "HA", the VMs are not migrating automatically; however, migrating the VMs manually works.

Comment 5 Luiz Goncalves 2016-02-05 15:06:31 UTC
Created attachment 1121405 [details]
engine logs

Comment 6 Roy Golan 2016-02-07 08:49:50 UTC
I agree with you that HA should be hidden/disabled for the engine VM; opened Bug 1305330.

The HA feature in the engine requires a running engine. The engine's "HA" feature is different from the hosted-engine "HA", because the latter is enforced by the ovirt-ha-agent on the host.

So, if you took down the host running the hosted engine (is this what you did?), then the engine isn't running at that moment. The migration option for the engine VM is manual, i.e. you have to right-click and choose "Migrate to", because we don't want that VM to be picked up by our load balancer and start migrating around.
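
Conceptually, the balancer skips manual-migration VMs; a hypothetical sketch, not the engine's actual code:

    def balanceable_vms(vms):
        # Hypothetical sketch: VMs flagged for manual-only migration (like the
        # engine VM) are never picked up by the automatic load balancer.
        return [vm for vm in vms if vm.migration_support != 'MANUAL']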

Comment 7 Yaniv Kaul 2016-02-11 12:48:30 UTC
I don't see why this is a high-severity issue. We do need to hide this for the HE VM, though.

Comment 8 Roy Golan 2016-02-17 11:45:25 UTC

*** This bug has been marked as a duplicate of bug 1305330 ***

