Bug 1571024 - [Downstream clone] [RFE] Provide Live Migration for VMs based on "High Performance VM" Profile
Summary: [Downstream clone] [RFE] Provide Live Migration for VMs based on "High Perfor...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.1.10
Hardware: All
OS: All
Priority: urgent
Severity: urgent
Target Milestone: ovirt-4.3.0
Assignee: Sharon Gratch
QA Contact: Polina
URL:
Whiteboard:
Depends On: 1457239 1457250 1619210
Blocks:
 
Reported: 2018-04-24 00:36 UTC by Germano Veit Michel
Modified: 2021-12-10 16:09 UTC (History)
22 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
This feature provides the ability to enable live migration for HP VMs (and, in general, for all VM types with pinning settings). Red Hat Virtualization 4.2 added a new High-Performance VM profile type, which required configuration settings that pin the VM to a host based on that host's specific configuration. Because of those pinning settings, migration was automatically disabled for the HP VM type. Red Hat Virtualization 4.3 now provides live migration for HP VMs (and for all other VMs with a pinned configuration, such as NUMA pinning, CPU pinning, and CPU pass-through enabled). For more details, see the feature page: https://ovirt.org/develop/release-management/features/virt/high-performance-vm-migration.html
Clone Of: 1457250
Environment:
Last Closed: 2019-05-08 12:37:35 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-




Links
System ID                                Last Updated
Red Hat Issue Tracker RHV-44209          2021-12-10 16:09:10 UTC
Red Hat Product Errata RHEA-2019:1085    2019-05-08 12:37:56 UTC

Description Germano Veit Michel 2018-04-24 00:36:28 UTC
+++ This bug was initially created as a clone of Bug #1457250 +++

Because the requirements of the "High Performance" VM profile prevent such VMs from being live migrated, this RFE is about adding that ability back for this type of VM.

Needed for this:
- Orchestration for doing unpin - migrate - re-pin
- Prechecks that the host CPUs are the same, since host-passthrough is used (see the sketch below)
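
A minimal sketch of the CPU precheck idea, assuming libvirt-python and placeholder host URIs (an illustration only, not the RHV implementation):

import libvirt
import xml.etree.ElementTree as ET

# With host-passthrough, the destination host should expose the same CPU model
# and feature set as the source host before a live migration is attempted.
def host_cpu_signature(uri):
    conn = libvirt.open(uri)                      # e.g. "qemu+ssh://host/system"
    caps = ET.fromstring(conn.getCapabilities())  # host capabilities XML
    cpu = caps.find("./host/cpu")
    model = cpu.findtext("model")
    features = sorted(f.get("name") for f in cpu.findall("feature"))
    conn.close()
    return model, tuple(features)

if host_cpu_signature("qemu+ssh://src-host/system") == host_cpu_signature("qemu+ssh://dst-host/system"):
    print("Host CPUs match; passthrough live migration looks feasible")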

If there is no host with the same NUMA topology, pinning might not be re-enabled until the VM is migrated back to its originating host (or to a host with the same topology).
This would of course have some performance impact, but might be acceptable.

Ideally, a warning should be displayed in these cases so that the administrator is aware of the possible performance loss.

--- Additional comment from Martin Tessun on 2018-01-29 07:13:45 EST ---

Options:
- require same hardware
  ==> This would simplify the pinning
- requires "comptible" hardware
  ==> repinning would need to follow a logic or be a manual step.
  Just unpin, live migrate, and leave it unpinned.

From a usability point of view, we should require the same hardware or have "multihost pinning" in RHV for "compatible" hardware.

As it is also hard to identify which hosts the initial pinning is for, it may be that multi-host pinning is the best approach here.

With that approach you could use slightly different hardware, but the key settings (CPUs, NUMA zones) need to be the same.

--- Additional comment from Martin Tessun on 2018-01-29 07:17:41 EST ---

Hi Jarda, 

some questions to this feature:

Is there a way of changing the pinning/vNUMA pinning during the live migration in libvirt?

How would you migrate a pinned VM between two hosts that are not 100% identical in libvirt?

--- Additional comment from Jaroslav Suchanek on 2018-02-06 04:51:52 EST ---

(In reply to Martin Tessun from comment #2)
> Hi Jarda, 
> 
> some questions to this feature:
> 
> Is there a way of changing the pinning/vNUMA pinning during the live
> migration in libvirt?
> 
> How would you migrate a pinned VM between two hosts that are not 100%
> identical in libvirt?

Yes, this is possible. Either use the migration API, which accepts a target domain XML that can differ from the source, or use post-migration hooks, where you can modify the destination guest definition.

Jiri Denemark or virt-devel can provide more info.
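
As a minimal illustration of the first option, assuming libvirt-python with placeholder host URIs and VM name (not the ovirt-engine/VDSM code path):

import libvirt

# Live-migrate a pinned VM while supplying a destination XML that differs from
# the source definition; producing a re-pinned XML for the target host's
# topology is the real work and is only hinted at here.
src = libvirt.open("qemu+ssh://source-host/system")
dst = libvirt.open("qemu+ssh://destination-host/system")
dom = src.lookupByName("hp-vm")

dest_xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_MIGRATABLE)  # adjust CPU/NUMA pinning here

dom.migrate3(dst,
             params={libvirt.VIR_MIGRATE_PARAM_DEST_XML: dest_xml},
             flags=libvirt.VIR_MIGRATE_LIVE)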

Btw. why are comments 1, 2, 3 private? ;)

--- Additional comment from Michal Skrivanek on 2018-02-15 07:42:00 EST ---

now they're not:)

Thanks, we would initially try not to change any pinning

Regarding multi-host pinning, it's problematic from a UX point of view. We basically have a configuration screen for a single host and a mapping for a single host. It would be possible to filter out unsuitable hosts in scheduling; that way you could see the list of compatible hosts (with a similar-enough layout) among the hosts available for migration.
If we keep the single-host pinning, the VM would only start on the original host after you shut it down (or you have to reconfigure it). But that simplifies the UX aspects a lot (basically no change is needed) and the backend changes are simple. Plus the scheduling side, which we need anyway to decide which hosts are "similar enough".
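
To make the filtering idea concrete, a minimal sketch with hypothetical data structures (not the actual RHV scheduler code):

def numa_compatible(vm_numa_pinning, host_numa_nodes):
    # vm_numa_pinning: {vm_numa_node: host_numa_node}, as configured for the source host
    # host_numa_nodes: {host_numa_node: {"cpus": set_of_pcpus, "memory_mb": int}}
    return all(node in host_numa_nodes for node in vm_numa_pinning.values())

def similar_enough_hosts(vm, candidate_hosts):
    # Keep only hosts whose topology covers every host NUMA node the VM is pinned to;
    # a real filter would also compare per-node CPU counts and memory.
    return [h for h in candidate_hosts
            if numa_compatible(vm["numa_pinning"], h["numa_nodes"])]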

--- Additional comment from Jon Benedict on 2018-02-21 15:07:40 EST ---

I have 2 "what if" questions:
1 - what if this was part of the "resource reservation" feature? If the HPVM had the resource reservation checked, then it would look for the same NUMA availability in the cluster... this leads me to the next question..
2 - what if from a UX standpoint (looking at Comment #4) the configuration screen included 2 hosts? the one host would be the initial host that the VM starts on, the 2nd (or additional hosts) would be hosts that have the NUMA capabilities already mapped out in preparation, even if it's just stored in a config file to be used at live migration/HA restart time.. This would likely also involve affinity rules...

these are just thoughts..

Comment 2 Yaniv Kaul 2018-04-24 06:21:27 UTC
Duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1457250 ?

Comment 3 Germano Veit Michel 2018-04-24 06:26:15 UTC
(In reply to Yaniv Kaul from comment #2)
> Duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1457250 ?

Downstream clone.

Comment 4 RHV bug bot 2018-12-10 15:12:55 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{'rhevm-4.3-ga': '?'}', ]

For more info please contact: rhv-devops

Comment 5 RHV bug bot 2019-01-15 23:35:24 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{'rhevm-4.3-ga': '?'}', ]

For more info please contact: rhv-devops

Comment 7 Nils Koenig 2019-03-06 17:33:53 UTC
Please note that this is still an issue and should be re-opened.
Tested with RHV 4.2: migration of CPU- and NUMA-pinned VMs is not possible.
Furthermore, CPU pinning should be exposed in a way that allows pinning to, e.g., a socket or half a socket,
in an easy and intuitive way for the user, rather than having to provide the pinning string yourself (as of today).
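
For context, the "pinning string" mentioned above is the CPU pinning topology syntax entered when editing the VM: vCPU#pCPU-set pairs joined by "_", where a pCPU set may contain commas, ranges, and "^" exclusions. A tiny illustrative parser (not part of RHV) makes the format concrete:

def parse_cpu_pinning(pinning):
    # e.g. "0#0_1#1-4,^2" -> {0: "0", 1: "1-4,^2"}
    mapping = {}
    for pair in pinning.split("_"):
        vcpu, pcpu_set = pair.split("#")
        mapping[int(vcpu)] = pcpu_set
    return mapping

print(parse_cpu_pinning("0#0_1#1-4,^2"))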

Comment 8 Sharon Gratch 2019-03-10 17:29:17 UTC
(In reply to Nils Koenig from comment #7)
> Please note that this is still an issue and should be re-opend.
> Tested with RHV 4.2 migration of CPU and NUMA Pinned VMs is not possible.

This is also supported in 4.2.6, but for manual migration only.

> Furthermore, the CPU pinning should be elaborated in a way which allows
> pinning to e.g. socket/ half a socket,... 
> in an easy and intuitive way to the user, rather then providing the pinning
> string yourself (as of today).

This is a separate feature request, not directly related to enabling migration of HP VMs. If you think it's important, can you please open a separate RFE for it?

Comment 9 Polina 2019-03-12 13:54:12 UTC
Verified on ovirt-engine-4.3.2-0.1.el7.noarch & vdsm-4.30.10-1.el7ev.x86_64

TestRun link https://polarion.engineering.redhat.com/polarion/#/project/RHEVM3/testrun?id=05_03_19&tab=records

Comment 11 errata-xmlrpc 2019-05-08 12:37:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1085

