Bug 1441632 - Hide "advanced" migration option a bit better
Status: CLOSED CURRENTRELEASE
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.1.3
Target Release: 4.1.3.2
Assigned To: jniederm
QA Contact: meital avital
Depends On:
Blocks:
Reported: 2017-04-12 07:26 EDT by Michal Skrivanek
Modified: 2018-09-11 15:51 EDT
CC: 12 users

See Also:
Fixed In Version:
Doc Type: Deprecated Functionality
Doc Text:
With this update, the migration of a virtual machine to a different cluster can no longer be invoked from the UI. Regular migration within a cluster remains unchanged.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-07-06 09:34:56 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
tjelinek: ovirt-4.1?
tjelinek: planning_ack?
rule-engine: devel_ack+
rule-engine: testing_ack+


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
oVirt gerrit 75667 master MERGED webadmin: Cluster selection removed from Migrate dialog 2017-05-24 10:38 EDT
oVirt gerrit 77258 master MERGED webadmin: MigrateModel.vm property removed 2017-05-24 12:40 EDT
oVirt gerrit 77356 ovirt-engine-4.1 MERGED webadmin: Cluster selection removed from Migrate dialog 2017-05-26 05:24 EDT
oVirt gerrit 77357 ovirt-engine-4.1 ABANDONED webadmin: MigrateModel.vm property removed 2017-05-25 12:57 EDT

Description Michal Skrivanek 2017-04-12 07:26:12 EDT
The advanced option in the Migrate dialog allows people to migrate across clusters. It assumes people know what they're doing and have checked all the cluster settings that need to match (e.g. networks); that's often not the case, so we need to hide it better so people do not use it accidentally.
Comment 1 Tomas Jelinek 2017-04-19 09:26:35 EDT
It would be best to delete this option from the UI and leave it only on the API
Comment 2 rhev-integ 2017-05-28 10:46:06 EDT
INFO: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Tag 'ovirt-engine-4.1.3' doesn't contain patch 'https://gerrit.ovirt.org/77356']
gitweb: https://gerrit.ovirt.org/gitweb?p=ovirt-engine.git;a=shortlog;h=refs/tags/ovirt-engine-4.1.3

For more info please contact: infra@ovirt.org
Comment 3 Martin Tessun 2017-05-29 08:32:58 EDT
(In reply to Tomas Jelinek from comment #1)
> It would be best to delete this option from the UI and leave it only on the
> API

I wouldn't remove it from Web-UI. It is completely fine having this option there. We just need to make sure that the user knows what he is doing and why he needs to take care.

So having the warning and a link to documentation on what needs to be considered in this case should be fine.

If we move it to API only, I would consider this as a regression.
Comment 4 Logan Kuhn 2017-06-01 09:05:09 EDT
(In reply to Martin Tessun from comment #3)
> (In reply to Tomas Jelinek from comment #1)
> > It would be best to delete this option from the UI and leave it only on the
> > API
> 
> I wouldn't remove it from Web-UI. It is completely fine having this option
> there. We just need to make sure that the user knows what he is doing and
> why he needs to take care.
> 
> So having the warning and some link to some documentation, what needs to be
> considered in this case should be fine.
> 
> If we move it to API only, I would consider this as a regression.


I would also view this as a regression.

Our use case is that we have 10 to 15 VMs that we use purely for stress testing a new VM host and normally they are just distributed throughout the cluster.  However, when we add a new host we migrate them to the new host which is in a cluster named Testing and beat the hell out of the new host for a few days to expose any oddities that may arise before we put production VMs on it.
Comment 5 Michal Skrivanek 2017-06-19 06:47:27 EDT
Cross-cluster migration was never intended as a general-purpose mechanism. There are no checks at all for the validity of the new cluster's configuration. It was there only for the el6 to el7 upgrade, which has long since been replaced by InClusterUpgrade.
Comment 8 Martin Tessun 2017-06-21 12:26:25 EDT
(In reply to Michal Skrivanek from comment #5)
> cross-cluster migration was never intended as a general purpose mechanism.
> There are no checks at all for validity of configuration of the new
> cluster. It was there only for el6 to el7 upgrade which is replaced by
> InClusterUpgrade for a long time.

I agree, but this has been there for as long as I can remember.
So indeed the checks are not done, and it is up to the customer/user to ensure that the requirements are met; still, I would not completely remove it (especially as there are quite a few good reasons for doing cross-cluster migrations).

Removing it from the UI completely would not be the right approach. Having some warnings there (e.g. ensure network/CPU/whatever compatibility) is the better approach from my pov.
Comment 9 meital avital 2017-06-25 04:24:51 EDT
Guys, bottom Line, what is the decision?
Comment 10 Martin Tessun 2017-06-26 04:25:08 EDT
Hi Meital,

(In reply to meital avital from comment #9)
> Guys, bottom Line, what is the decision?

I would not completely remove it. It is already hidden, so maybe just adding an additional warning that this is unsupported and is used at the customer's risk might be useful.

On the other hand, this feature already is "hidden" in the "Advanced" tab.

So my take on this:
1. Keep it as is
2. Have it documented that this is a risky feature due to possible Networking issues between clusters
3. Add a warning in case a Cross-Cluster Migration is done, stating that the admin needs to make sure that the prerequisites
   are met and that it might crash the VM if that isn't the case.
Comment 11 Tomas Jelinek 2017-06-26 07:33:04 EDT
(In reply to Martin Tessun from comment #10)
> Hi Meital,
> 
> (In reply to meital avital from comment #9)
> > Guys, bottom Line, what is the decision?
> 
> I would not completely remove it.

Well, the whole feature of cross-cluster migration was a workaround to migrate VMs from EL6 to EL7, which is not needed when upgrading from EL7 to a newer EL7.

Unfortunately, users misunderstood it as a generic upgrade path, which is not good because it is a very rocky feature.

BTW it is not completely removed, the REST API still supports it. I really don't see a reason to keep that around in the UI in 4.1.

> It is already hidden, so maybe just
> adding an additional warning that this is unsupported and is used at the
> customer's risk might be useful.
> 
> On the other side this feature already is "hidden" in the "Advanced TAB"
> 
> So my take on this:
> 1. Keep it as is
> 2. Have it documented that this is a risky feature due to possible
> Networking issues between clusters
> 3. Add a warning in case a Cross-Cluster Migration is done, stating that the
> admin needs to make sure that the prerequisites
>    are met and that it might crash the VM if that isn't the case.

...or end up in a state where the VM is actually running in one cluster but the engine thinks it is running in a different one, and you can fix this only by direct DB manipulation.
Comment 12 Michal Skrivanek 2017-06-28 09:44:30 EDT
It is hidden in the API. That's exactly the right place for such a thing.
From my perspective this bug is complete and does what the description says. This is also the current state.

If you disagree with how it is right now, feel free to open a bug requesting a change. But cross-cluster migration with today's constraints is not a meaningful feature to invest in.
As explained above, it's been a bug/workaround for a long time, and whoever is using it these days has to stop abusing the system.
Comment 13 Michal Skrivanek 2017-06-28 09:47:11 EDT
(In reply to Logan Kuhn from comment #4)

> Our use case is that we have 10 to 15 VMs that we use purely for stress
> testing a new VM host and normally they are just distributed throughout the
> cluster.  However, when we add a new host we migrate them to the new host
> which is in a cluster named Testing and beat the hell out of the new host
> for a few days to expose any oddities that may arise before we put
> production VMs on it.

Why would you need to change clusters using live migration? Can't you simply stop the VM, change its cluster to Testing, start it up, and run the workload?
Comment 14 Logan Kuhn 2017-06-28 09:52:08 EDT
(In reply to Michal Skrivanek from comment #13)
> (In reply to Logan Kuhn from comment #4)
> 
> > Our use case is that we have 10 to 15 VMs that we use purely for stress
> > testing a new VM host and normally they are just distributed throughout the
> > cluster.  However, when we add a new host we migrate them to the new host
> > which is in a cluster named Testing and beat the hell out of the new host
> > for a few days to expose any oddities that may arise before we put
> > production VMs on it.
> 
> why would you need to change the clusters using live migration? Can't you
> simply stop the VM, change its cluster to Testing, and start it up and run
> the workload?

When I first read through this bug and responded, I was under the impression that all cross-cluster functionality was being removed, including changing a VM's cluster at any point after it was created, as opposed to just live migration.

I still stand by my comment about it being convenient when we go to test it, but my original comment was from a mistaken perspective and I no longer have as strong an opinion against it.

Logan
Comment 15 Martin Tessun 2017-06-28 11:41:50 EDT
After discussing with SBR and checking all the details around issues with this, I think we can move this forward as requested.

Still, I would like to highlight that this is removed functionality which quite a few customers using it might expect to remain, so they need to be aware of the change.

@Meital: You may move this forward to VERIFIED.

Thanks!
Martin
Comment 16 Marina 2017-06-28 12:48:19 EDT
+1

I will add my opinion here as well: between-clusters migration was never a feature of RHEV, and it should have been blocked in the UI from the very beginning. Having it there was a bug.

I vote to remove it from the UI as well, especially now that there is no need for the el6 to el7 host upgrade, the only scenario in which this migration was allowed.

For the future, if a new scenario appears that requires between-clusters migration, we can add this functionality for that use case only.
Comment 17 meital avital 2017-06-29 05:27:33 EDT
Verified on version: 4.1.3.5-0.1.el7
Comment 18 Yamakasi 2017-06-30 10:07:33 EDT
I'm just reading about this, and it is a real pity that it's removed!

I mostly setup new clusters if needed and moved text machines to other clusters for production when they are ready.

This was the ideal feature, originally just because of EL6 -> EL7, but it can be used for so many other reasons! So please make it available, or let us enable it from the CLI like mac-spoof etc. as well.

If people don't know what they are doing with it I hope they don't maintain clusters as well.
Comment 19 Yamakasi 2017-06-30 10:11:40 EDT
Sorry for my typos. It's indeed what Martin Tessun says in #15: we need to be aware of these things, and at the least this is not nice.

My Typo:

I mostly setup new clusters if needed and move the machines to other clusters for production when they are ready.
Comment 20 Michal Skrivanek 2017-07-21 10:01:31 EDT
(In reply to Yamakasi from comment #18)
> So please make it available or let us be able to
> enable it from the CLI like mac-spoof, etc as well.

Hi, please note it is still possible over the REST API, and so the CLI works as well (since it's just a shell wrapper for the API).
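For illustration only, here is a minimal sketch of what such a REST call could look like, built with the Python standard library and never actually sent. The /vms/{id}/migrate action and its <cluster> element follow the oVirt 4.1 API as I understand it, and the engine URL and IDs are placeholders; verify everything against your engine's /api reference before use.

```python
# Sketch only: build (but do not send) the request for a cross-cluster live
# migration. Engine URL and IDs are placeholders; the /vms/{id}/migrate
# action and its <cluster> element are assumptions to check against the
# engine's own API documentation.
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/ovirt-engine/api"  # placeholder

def migrate_request(vm_id, target_cluster_id):
    """Return (url, xml_body) for a cross-cluster migrate action."""
    url = f"{ENGINE}/vms/{vm_id}/migrate"
    action = ET.Element("action")
    ET.SubElement(action, "cluster", id=target_cluster_id)  # target cluster
    return url, ET.tostring(action)

url, body = migrate_request("1111-vm", "2222-cluster")
print("POST", url)
print(body.decode())
```

Sending it would then just be an authenticated POST of that body with Content-Type: application/xml.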
Comment 21 Martin Tessun 2017-08-01 05:29:19 EDT
(In reply to Yamakasi from comment #18)
> I'm just reading about this and this a real pity that it's removed!
> 
> I mostly setup new clusters if needed and moved text machines to other
> clusters for production when they are ready.

Do you do a live migration to the new cluster once they are production ready?
If you don't need that, you can also shut the VM down and start it in the new cluster. This is still possible and is the recommended way of doing a "cluster change".
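As a sketch, that recommended "cold" flow reduces to three REST calls, built here with the Python standard library without being sent. The paths and payloads are my assumptions about the 4.1 API, and the IDs are placeholders; check them against your engine before relying on this.

```python
# Sketch only: the "cold" cluster change as three REST calls (stop, edit the
# VM's cluster, start). Nothing is sent; paths and payloads are assumptions
# to verify against the engine's API reference, and IDs are placeholders.
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/ovirt-engine/api"  # placeholder

def cold_cluster_change(vm_id, new_cluster_id):
    """Return (method, url, body) triples for stop -> edit cluster -> start."""
    vm_url = f"{ENGINE}/vms/{vm_id}"
    vm = ET.Element("vm")
    ET.SubElement(vm, "cluster", id=new_cluster_id)  # the new cluster
    return [
        ("POST", f"{vm_url}/stop", b"<action/>"),   # shut the VM down
        ("PUT", vm_url, ET.tostring(vm)),           # re-point it at the new cluster
        ("POST", f"{vm_url}/start", b"<action/>"),  # boot it in the new cluster
    ]

for method, url, body in cold_cluster_change("1111-vm", "2222-cluster"):
    print(method, url, body.decode())
```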

And as Michal said in comment #20 you can still use the REST API and the CLI for live migrating the VM between two clusters.

> 
> This was the most ideal feature just because of EL6 -> EL7 and can be used
> for so many other reasons! So please make it available or let us be able to
> enable it from the CLI like mac-spoof, etc as well.
> 
> If people don't know what they are doing with it I hope they don't maintain
> clusters as well.

I agree that it works perfectly well if you "know what you are doing" as well as how virtualization and networking work in detail. As such, it is still available in the REST API (and is not planned to be removed there). This step was mainly taken to avoid issues caused by people doing cross-cluster migrations without respecting the restrictions.

So my question is: is it sufficient to have the "cold" cluster migration and live migration between clusters via REST?

Thanks!
Martin
