Bug 1154631
Summary: | Inhibit migrations RHEL7.0 -> RHEL 6.5 (or equivalent: CentOS) | ||
---|---|---|---|
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Michal Skrivanek <michal.skrivanek> |
Component: | ovirt-engine | Assignee: | Tomas Jelinek <tjelinek> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | Ilanit Stein <istein> |
Severity: | high | Docs Contact: | |
Priority: | urgent | ||
Version: | 3.5.0 | CC: | bugs, ecohen, fromani, gklein, iheim, lpeer, lsurette, mavital, michal.skrivanek, nsednev, pdwyer, pstehlik, rbalakri, Rhev-m-bugs, sherold, s.kieske, tjelinek, yeylon |
Target Milestone: | --- | Keywords: | Triaged |
Target Release: | 3.5.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | virt | ||
Fixed In Version: | vt10 | Doc Type: | Known Issue |
Doc Text: |
Migrations from RHEL 7.x to RHEL 6.x are not supported. Therefore, hosts with mixed RHEL major versions are not allowed within the same cluster.
For manual migration from RHEL 6.x to 7.0 (which is supported), there is a new advanced option in the Migrate dialog that allows migrating to a different cluster within the same data center. However, the suitability of the destination cluster is left up to the user, so extra caution is required.
|
Story Points: | --- |
Clone Of: | 1150191 | Environment: | |
Last Closed: | 2015-02-17 08:29:04 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1150191 | ||
Bug Blocks: | 1142879 |
Description
Michal Skrivanek
2014-10-20 11:28:10 UTC
Will migration 7.0->6.6 be blocked as well? I see trouble with VMs being migrated 7.0->6.6 too.

Well, you will not be able to have 6.x and 7.x hosts in one cluster, so it will not be possible to migrate between them. You will be able to migrate between clusters, but only at your own risk, so you may face problems there.

upstream is in

On vt10.1 tested:

1. core: disable the mixing of RHEL6 and RHEL7 in one cluster.
   Tested by adding a rhel7 host to a cluster containing rhel6.6, and vice versa.
   Result: the host became non-operational, and there was an event for it. The reason for this host status, "Not possible to mix RHEL 6x and 7x hosts in one cluster.", appears under the General tab, at the bottom, where it is not very noticeable. This reason should be displayed in the event log as well. Tomas, can you please consider adding this event?
2. core: enable migration to a different cluster.
   Tested by migrating between 2 clusters, from rhel6.6 to rhel7 - successful.
3. webadmin: allow migration to a different cluster.
   Tested migration from rhel6.6 to rhel7 (different clusters), and from rhel6.6 to rhel6.6 (same cluster) - successful.
4. restapi: added support for optionally selecting the target cluster - not tested yet. Tomas: I guess it is just a matter of adding a cluster field to the migrate vm object?

Michal, is this cross-cluster migration option documented?

Hi Ilanit,
1: yes, I'll look at it today.
4: yes, it is only one optional parameter: <cluster id="6be85660-ef6c-42e0-a567-d6921bdb2a22"/>

In continuation of comment 4:

4. restapi: added support for optionally selecting the target cluster - added the cluster field to the migrate vm object; migration from a rhel6.6 to a rhel7 cluster was successful.

Please just add the "rhel 6x and 7x mix is not allowed" event.

2. Documentation: I think we should also explain that manual VM migration now has a new option, migration to another cluster, regardless of which OS is installed on the source/destination hosts.
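For reference, the optional cluster parameter mentioned above is passed in the body of the migrate action. A minimal sketch of the request, assuming the oVirt 3.5-era REST API conventions (the VM id in the URL and the cluster id are placeholders):

```xml
<!-- POST /api/vms/{vm:id}/migrate -->
<action>
    <!-- optional: target cluster; omit to migrate within the current cluster -->
    <cluster id="6be85660-ef6c-42e0-a567-d6921bdb2a22"/>
</action>
```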
@Ilanit: created a patch which adds the event log: http://gerrit.ovirt.org/#/c/35497

Opened a separate bug for the event added in comment 7: Bug #1167827

A test case is required only to the extent that the dialog works, and that when you have 2 identical clusters you can do the migration and it succeeds.

Verification addition to comment #4:

On vt13.1: have 2 clusters, one with rhel6.6, and a 2nd with rhel7; 1 VM running on rhel6.6. Trying to put the rhel6.6 host into maintenance resulted in the operation being cancelled: "Error while executing action: The following Hosts have running VMs and cannot be switched to maintenance mode: <hostname>. Please ensure that the following Clusters have at least one Host in UP state: <rhel6.6 clustername>."

Verified on vt13.5, with rhel6.6 and rhel 7.1 Beta (Maipo):

1. core: disable the mixing of RHEL6 and RHEL7 in one cluster.
   Tested by adding a rhel7.1 host to a cluster containing rhel6.6, and vice versa.
   Result: the host became non-operational, and there was an event for it. The reason for this host status, "Not possible to mix RHEL 6x and 7x hosts in one cluster.", appears under the General tab, at the bottom, where it is not very noticeable. The reason is displayed in the event log.
2. core: enable migration to a different cluster.
   Tested by migrating between 2 clusters, from rhel6.6 to rhel7.1 - successful.
3. webadmin: allow migration to a different cluster.
   Tested migration from rhel6.6 to rhel7.1 (different clusters), and from rhel7.1 to rhel6.6 (same cluster) - failed, as expected.
4. restapi: added support for optionally selecting the target cluster.
   Migration from a rhel6.6 to a rhel7.1 cluster was successful; the migrate action completed and the response was OK - as expected.
   Migration from a rhel7.1 to a rhel6.6 cluster: the migration itself was not successful, as expected, and there was an event for it. But the migrate action completed and the response was OK - which is NOT as expected. I'll file a separate bug on that.
5.
Have 2 clusters, one with rhel6.6, and a 2nd with rhel7.1; 1 VM running on rhel6.6. Trying to put the rhel6.6 host into maintenance resulted in the operation being cancelled: "Error while executing action: The following Hosts have running VMs and cannot be switched to maintenance mode: <rhel6.6 hostname>. Please ensure that the following Clusters have at least one Host in UP state: <rhel6.6 clustername>."

(In reply to Ilanit Stein from comment #11)
> migration from rhel7.1 to rhel6.6 cluster:
> The migration itself was not successful, as expected, and there was event on
> that. But the migrate action completed and response OK. - which is NOT as
> expected. I'll file a separate bug on that.

That's normal; initiating the action should succeed. Then you're supposed to monitor the migration progress, which should go on for some time and then fail, as in the UI.

RHEV-M 3.5.0 has been released
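The monitoring pattern described in the reply above (initiate the migration, then poll the VM status until it leaves the migrating state) can be sketched generically. This is an illustrative helper, not part of any oVirt SDK; `get_status` is assumed to be a caller-supplied function, e.g. a wrapper around GET /api/vms/{vm:id} that extracts the status string:

```python
import time

def wait_for_migration(get_status, poll_interval=1.0, timeout=300.0):
    """Poll get_status() until the VM leaves the migrating state.

    Returns the final status string, e.g. "up" if the migration
    succeeded, or some other status if it failed. Raises TimeoutError
    if the VM is still migrating when the timeout expires.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status != "migrating_from":
            # Migration finished (successfully or not); report the outcome.
            return status
        time.sleep(poll_interval)
    raise TimeoutError("migration did not finish within the timeout")
```

This matches the behaviour reported in comment #11: the migrate action itself returns OK immediately, and only the polled status reveals whether the migration ultimately succeeded or failed.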