Bug 1457250 - [RFE] Provide Live Migration for VMs based on "High Performance VM" Profile - manual migrations
Summary: [RFE] Provide Live Migration for VMs based on "High Performance VM" Profile - manual migrations
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: future
Hardware: All
OS: All
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.2.6
Target Release: ---
Assignee: Sharon Gratch
QA Contact: Polina
URL:
Whiteboard:
Depends On: 1457239
Blocks: 1571024 1619210
 
Reported: 2017-05-31 12:17 UTC by Martin Tessun
Modified: 2019-01-23 08:01 UTC
CC: 11 users

Fixed In Version: ovirt-engine-4.2.6.4
Doc Type: Enhancement
Doc Text:
Feature: Enable live migration for High Performance VMs (and, in general, for all VM types with pinning settings).

Reason: In oVirt 4.2 we added a new "High Performance" VM profile type. Its required configuration includes pinning the VM to a host based on that host's specific configuration. Because of those pinning settings, the migration option for this VM type was automatically forced to be disabled.

Result: In oVirt 4.2.x we provide the ability to manually migrate High Performance VMs. This is the first-phase solution described in the feature page; a fully automatic solution will follow in a later oVirt release. The 4.2.x solution includes:
1. Only manual migration can be done for High Performance VMs via the UI. In addition, the user has to choose the destination host to migrate to.
2. Manual migration is also supported for Server/Desktop VM types with a pinned configuration, but only via the REST API.
For more details on this first-phase/manual solution, please refer to the feature page: https://www.ovirt.org/develop/release-management/features/virt/high-performance-vm-migration/
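For illustration, this is roughly what a manual migration to an explicitly chosen host looks like through the REST API, sketched with the oVirt Python SDK (ovirtsdk4). The engine URL, credentials, and VM/host names below are placeholders, not values from this bug:

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Connect to the engine API (placeholder URL and credentials).
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

# Look up the pinned VM by name (placeholder name).
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=hp_vm')[0]

# Manual migration: the destination host must be chosen explicitly.
vms_service.vm_service(vm.id).migrate(host=types.Host(name='destination_host'))

connection.close()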
Clone Of:
Cloned To: 1571024 1619210
Environment:
Last Closed: 2018-09-03 15:09:02 UTC
oVirt Team: Virt
Embargoed:
rule-engine: ovirt-4.2+
mtessun: planning_ack+
michal.skrivanek: devel_ack+
rule-engine: testing_ack+




Links
System ID          Status  Summary                                               Last Updated
oVirt gerrit 92552 MERGED  backend: fix constraints for manual HP VM migration   2020-08-12 06:58:11 UTC
oVirt gerrit 92788 MERGED  webadmin: fix constraints for manual HP VM migration  2020-08-12 06:58:12 UTC
oVirt gerrit 93270 MERGED  backend: fix constraints for manual HP VM migration   2020-08-12 06:58:11 UTC
oVirt gerrit 93271 MERGED  webadmin: fix constraints for manual HP VM migration  2020-08-12 06:58:11 UTC

Description Martin Tessun 2017-05-31 12:17:42 UTC
As the requirements for the "High Performance" VM profile prevent such VMs from being live migrated, this RFE is about adding that ability back for this type of VM.

Needed for this:
- Orchestration for doing unpinning - migration - pinning (see the sketch at the end of this description)
- Some prechecks that the host CPUs are the same (as host-passthrough is used)

In case we do not have a host with the same NUMA topology, pinning might not be re-enabled until the VM is migrated back to its originating host (or to a host with the same topology).
This would of course have some performance impact, but might be acceptable.

Ideally, in these cases a warning should be displayed, so that the administrator is aware of the possible performance loss.
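A purely illustrative sketch of the unpin -> live-migrate -> re-pin orchestration requested above, using the libvirt Python bindings; the URIs, VM name, and pinning maps are placeholders, not the eventual engine implementation:

import libvirt

src = libvirt.open('qemu+ssh://source.example.com/system')
dst = libvirt.open('qemu+ssh://dest.example.com/system')
dom = src.lookupByName('hp_vm')

# Precheck sketch: with host-passthrough the destination CPU must match.
# A real check would extract the source host CPU XML from
# src.getCapabilities() and pass it to dst.compareCPU().

ncpus = dom.maxVcpus()
src_pcpus = src.getInfo()[2]  # getInfo()[2] == number of host pCPUs
dst_pcpus = dst.getInfo()[2]

# 1. Unpin: temporarily allow every vCPU on every pCPU.
for vcpu in range(ncpus):
    dom.pinVcpu(vcpu, tuple([True] * src_pcpus))

# 2. Live migration; returns the domain object on the destination.
new_dom = dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

# 3. Re-pin on the destination (vCPU n -> pCPU n, purely illustrative).
for vcpu in range(ncpus):
    new_dom.pinVcpu(vcpu, tuple(i == vcpu for i in range(dst_pcpus)))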

Comment 1 Martin Tessun 2018-01-29 12:13:45 UTC
Options:
- require same hardware
  ==> This would simplify the pinning
- require "compatible" hardware
  ==> repinning would need to follow some logic or be a manual step.
  Or just unpin, live migrate, and leave it unpinned.

From a usability point of view, we should require the same hardware or have "multihost pinning" in RHV for "compatible" hardware.

As it is also hard to identify which hosts the initial pinning is for, it may be that multi-host pinning is the best approach here.

With that approach you could use slightly different hardware, but the key settings (CPUs, NUMA zones) need to be the same.
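For illustration, a rough fingerprint comparison of those key settings between two hosts could look like this with the libvirt Python bindings (host URIs are placeholders; real scheduling logic would be more thorough):

import libvirt
import xml.etree.ElementTree as ET

def numa_layout(conn):
    # Parse the host capabilities XML into (cell count, cpus per cell).
    caps = ET.fromstring(conn.getCapabilities())
    cells = caps.findall('./host/topology/cells/cell')
    return len(cells), tuple(len(c.findall('./cpus/cpu')) for c in cells)

a = libvirt.open('qemu+ssh://host-a.example.com/system')
b = libvirt.open('qemu+ssh://host-b.example.com/system')

compatible = (a.getInfo()[2] == b.getInfo()[2]       # same pCPU count
              and numa_layout(a) == numa_layout(b))  # same NUMA layout
print('hosts compatible for pinned migration:', compatible)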

Comment 2 Martin Tessun 2018-01-29 12:17:41 UTC
Hi Jarda, 

some questions to this feature:

Is there a way of changing the pinning/vNUMA pinning during the live migration in libvirt?

How would you migrate a pinned VM between two hosts that are not 100% identical in libvirt?

Comment 3 Jaroslav Suchanek 2018-02-06 09:51:52 UTC
(In reply to Martin Tessun from comment #2)
> Hi Jarda, 
> 
> some questions to this feature:
> 
> Is there a way of changing the pinning/vNUMA pinning during the live
> migration in libvirt?
> 
> How would you migrate a pinned VM between two hosts that are not 100%
> identical in libvirt?

Yes, this is possible. There is a migration API which accepts a target domain XML that can differ from the source, or you can use post-migration hooks to modify the destination guest definition.
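A minimal sketch of the first mechanism (migration API with a target domain XML), using the libvirt Python bindings; the URIs and VM name are placeholders:

import libvirt

src = libvirt.open('qemu+ssh://source.example.com/system')
dst = libvirt.open('qemu+ssh://dest.example.com/system')
dom = src.lookupByName('hp_vm')

# Start from the current definition and adjust it for the destination
# (e.g. different <cputune> pinning); shown here unmodified.
dest_xml = dom.XMLDesc(0)

dom.migrate3(
    dst,
    {libvirt.VIR_MIGRATE_PARAM_DEST_XML: dest_xml},
    libvirt.VIR_MIGRATE_LIVE,
)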

More info can be provided by Jiri Denemark or virt-devel.

Btw. why are comments 1, 2, 3 private? ;)

Comment 4 Michal Skrivanek 2018-02-15 12:42:00 UTC
now they're not :)

Thanks, we would initially try not to change any pinning.

Regarding multi-host pinning, it's problematic from a UX point of view. We basically have a configuration screen for a single host and a mapping for a single host. It would be possible to filter unsuitable hosts in scheduling; that way you could see the list of compatible hosts (with a similar-enough layout) in the list of available hosts for migration.
If we keep the single-host pinning, the VM would only start on the original host once you shut it down (or you have to reconfigure it). But it simplifies the UX aspects a lot (basically no change is needed) and the backend changes are simple. Plus the scheduling side, which we need anyway to decide which hosts are "similar enough".

Comment 5 Jon Benedict 2018-02-21 20:07:40 UTC
I have 2 "what if" questions:
1 - what if this was part of the "resource reservation" feature? If the HP VM had resource reservation checked, it would look for the same NUMA availability in the cluster... this leads me to the next question...
2 - what if, from a UX standpoint (looking at Comment #4), the configuration screen included 2 hosts? One host would be the initial host the VM starts on; the 2nd (or additional) hosts would be hosts that have the NUMA capabilities already mapped out in preparation, even if it's just stored in a config file to be used at live migration/HA restart time. This would likely also involve affinity rules...

These are just thoughts...

Comment 6 Michal Skrivanek 2018-08-20 11:00:27 UTC
allow manual migration only in 4.2.z

Comment 7 Polina 2018-09-02 06:04:25 UTC
- tested on upstream (ovirt-release42-snapshot-4.2.6-0.3.rc3.20180826015005.git2aa33d5.el7.noarch) that a High Performance VM has two options - "manual migration only" (the default) and "do not allow migration".

- tested migration (only allowed to the specifically chosen host)

- tested that the feature works for a VM created from templateHP and from a VM Pool

- tested migration failure when an HP VM with NUMA pinning, running on a NUMA-supporting host, was migrated to a host that does not support NUMA.

