Bug 1421174 - Migration scheduler should work with per-VM cluster compatibility level
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: Backend.Core
Version: 4.1.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.1.1
Target Release: 4.1.1.2
Assignee: Arik
QA Contact: sefi litmanovich
URL:
Whiteboard:
Duplicates: 1421586
Depends On:
Blocks:
 
Reported: 2017-02-10 14:34 UTC by Jiri Belka
Modified: 2017-04-21 09:44 UTC
CC: 8 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-04-21 09:44:54 UTC
oVirt Team: Virt
Embargoed:
rule-engine: ovirt-4.1+
rule-engine: blocker+
ahadas: devel_ack+


Links
System ID | Branch | Status | Summary | Last Updated
oVirt gerrit 72110 | master | MERGED | core: scheduler should filter hosts based on their cluster level | 2017-02-12 13:16:35 UTC
oVirt gerrit 72179 | ovirt-engine-4.1 | MERGED | core: scheduler should filter hosts based on their cluster level | 2017-02-13 14:24:02 UTC

Description Jiri Belka 2017-02-10 14:34:19 UTC
Description of problem:

If one has 3.6 compat level VMs running in a 4.0 engine (e.g. the cluster used to be at 3.6 compat level) and the hosts are updated to 4.1, it won't be possible to migrate these 3.6 VMs from the 4.0 hosts to the 4.1 hosts.

Let's create an event/warning, whatever, to highlight this issue for the engine administrator. I.e. after a host which does not support placing VMs with an older compat level is added/updated in the engine, running VMs should be checked for their current compat level, and if it is lower than the compat level supported by the host, there should be an event/warning/whatever...

------------------->%--------------------
~~~
The host slot-1 did not satisfy internal filter
Compatibility-Version because it doesn't support compatibility version '3.6'
which is required by the VM. Host supported compatibility versions are: 4.0,4.1..
~~~

It is not a vdsm issue...

~~~
# rpm -q vdsm ; sed -n '/^version_info/,$p' /usr/lib/python2.7/site-packages/vdsm/dsaversion.py
vdsm-4.19.4-1.el7ev.x86_64
version_info = {
    'version_name': version_name,
    'software_version': software_version,
    'software_revision': software_revision,
    'supportedENGINEs': ['4.0', '4.1'],
    'clusterLevels': ['3.6', '4.0', '4.1'],
}
~~~
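
As a quick sanity check, the same data can be read through vdsm's own Python module on the host. A minimal sketch, assuming only that the vdsm package is importable (it is on any host with vdsm installed):

~~~
# Print the cluster levels this host advertises to the engine.
from vdsm import dsaversion

print(dsaversion.version_info['clusterLevels'])
# -> ['3.6', '4.0', '4.1'] on vdsm-4.19.4
~~~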

So, where's the problem?

~~~
engine=# select vm_name,creation_date,last_start_time,cluster_compatibility_version,custom_compatibility_version from vms where vm_name = 'ps-rh6';
-[ RECORD 1 ]-----------------+---------------------------
vm_name                       | ps-rh6
creation_date                 | 2016-04-13 07:43:05-04
last_start_time               | 2016-11-24 11:34:28.171-05
cluster_compatibility_version | 4.0
custom_compatibility_version  | 3.6
~~~

Ah, that's why the VM has the little '(^)' icon, i.e. to inform us we have to stop and start
the VM?
---------------------<%-------------------------
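
So the host does advertise 3.6; the problem is that the scheduler's Compatibility-Version filter did not take the VM's per-VM (custom) compatibility version into account properly. The actual fix lives in ovirt-engine's Java scheduling code (gerrit 72110/72179); the sketch below only models the intended filter behavior in Python, and every name in it is hypothetical:

~~~
# Hypothetical model of the Compatibility-Version scheduling filter.
def effective_compat_version(vm):
    # A VM pinned to an older (custom) level keeps it until the next
    # cold restart; otherwise it follows its cluster's level.
    return (vm.get('custom_compatibility_version')
            or vm['cluster_compatibility_version'])

def compatible_hosts(vm, hosts):
    required = effective_compat_version(vm)
    return [h for h in hosts if required in h['cluster_levels']]

vm = {'cluster_compatibility_version': '4.0',
      'custom_compatibility_version': '3.6'}
hosts = [{'name': 'slot-0', 'cluster_levels': ['3.5', '3.6', '4.0']},
         {'name': 'slot-1', 'cluster_levels': ['3.6', '4.0', '4.1']}]

# Both hosts advertise 3.6, so both should stay valid migration targets.
print([h['name'] for h in compatible_hosts(vm, hosts)])  # ['slot-0', 'slot-1']
~~~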


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 1 Michal Skrivanek 2017-02-10 14:54:57 UTC
it should not be a warning, it should be allowed :-)

Comment 2 Arik 2017-02-13 08:13:15 UTC
*** Bug 1421586 has been marked as a duplicate of this bug. ***

Comment 3 sefi litmanovich 2017-02-27 10:19:24 UTC
Verified on rhevm-4.1.1.2-0.1.el7.noarch.

1. Have one 4.0 host (vdsm-4.18.23-1.el7ev.x86_64) in a 3.6 cluster; the host has:
    'supportedENGINEs': ['3.6', '4.0'],
    'clusterLevels': ['3.5', '3.6', '4.0'],
2. Create a vm and start it.
3. Upgrade the cluster to 4.0 compatibility version; the vm is now set with:

 vm_name |       creation_date        |      last_start_time       | cluster_compatibility_version | custom_compatibility_version 
---------+----------------------------+----------------------------+-------------------------------+------------------------------
 test-vm | 2017-02-27 11:50:42.902+02 | 2017-02-27 12:07:02.822+02 | 4.0                           | 3.6

as in the description.
4. Add a 4.1 host (vdsm-4.19.6-1.el7ev.x86_64); the host has:

    'supportedENGINEs': ['4.0', '4.1'],
    'clusterLevels': ['3.6', '4.0', '4.1'],
5. Migrating the vm from the 4.0 host to the 4.1 host is successful.
6. Put the 4.1 host into maintenance; the vm migrates back to the 4.0 host.
7. Edit /usr/lib/python2.7/site-packages/vdsm/dsaversion.py and set:
'clusterLevels': ['4.0', '4.1']
8. Restart vdsm on the host.
9. Activate the host again.
10. Migrating the vm from the 4.0 host to the 4.1 host fails as expected, because the 4.1 host doesn't support cluster level 3.6.
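
The verification hinges on what the host advertises: the stock 4.1 vdsm still lists 3.6 in clusterLevels, so step 5 succeeds, while removing 3.6 in step 7 makes the filter reject the host in step 10. A minimal sketch of the decision being exercised (hypothetical helper, not engine code):

~~~
# Hypothetical helper mirroring the accept/reject decision under test.
def host_accepts(required_level, advertised_levels):
    return required_level in advertised_levels

assert host_accepts('3.6', ['3.6', '4.0', '4.1'])  # steps 4-5: migration allowed
assert not host_accepts('3.6', ['4.0', '4.1'])     # steps 8-10: host filtered out
~~~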

Comment 4 Jiri Belka 2017-02-27 10:47:55 UTC
(In reply to sefi litmanovich from comment #3)

> 7. Edit /usr/lib/python2.7/site-packages/vdsm/dsaversion.py and set:
> 'clusterLevels': ['4.0', '4.1']

^^ can you explain this?

> 8. Restart vdsm on the host.
> 9. Activate the host again.
> 10. Migrating the vm from the 4.0 host to the 4.1 host fails as expected,
> because the 4.1 host doesn't support cluster level 3.6.

^^ what's the purpose? We put back 3.6 engine support into 4.19 (4.1) vdsm, see https://bugzilla.redhat.com/show_bug.cgi?id=1403846

