+++ This bug was initially created as a clone of Bug #1457250 +++

As the requirements for the "High Performance" VM profile prevent such a VM from being live migrated, this RFE is for adding this ability back to this type of VM.

Needed for this:
- Orchestration for doing unpinning - migration - pinning
- Some prechecks that the host CPUs are the same (as host-passthrough is used)

In case we do not have a host with the same NUMA topology, pinning may not be re-enabled until the VM is migrated back to its originating host (or a host with the same topology). This would of course have some performance impact, but might be acceptable. Ideally a warning should be displayed in these cases, so that the administrator is aware of the possible performance loss.

--- Additional comment from Martin Tessun on 2018-01-29 07:13:45 EST ---

Options:
- Require the same hardware ==> this would simplify the pinning.
- Require "compatible" hardware ==> repinning would need to follow some logic or be a manual step. Just unpin, live migrate, and leave it unpinned.

From a usability point of view, we should require the same hardware or have "multi-host pinning" in RHV for "compatible" hardware. As it is also hard to identify which hosts the initial pinning is for, multi-host pinning may be the best approach here. With that approach you could use slightly different hardware, but the key settings (CPUs, NUMA zones) need to be the same.

--- Additional comment from Martin Tessun on 2018-01-29 07:17:41 EST ---

Hi Jarda,

some questions on this feature:

Is there a way of changing the pinning/vNUMA pinning during a live migration in libvirt?

How would you migrate a pinned VM between two hosts that are not 100% identical in libvirt?

--- Additional comment from Jaroslav Suchanek on 2018-02-06 04:51:52 EST ---

(In reply to Martin Tessun from comment #2)
> Hi Jarda,
>
> some questions to this feature:
>
> Is there a way of changing the pinning/vNUMA pinning during the live
> migration in libvirt?
> How would you migrate a pinned VM between two hosts that are not 100%
> identical in libvirt?

Yes, this is possible. Either use the migration API, which accepts a target domain XML that can differ from the source, or use post-migration hooks where you can modify the destination guest definition. More info can be provided by Jiri Denemark, or virt-devel.

Btw. why are comments 1, 2, 3 private? ;)

--- Additional comment from Michal Skrivanek on 2018-02-15 07:42:00 EST ---

now they're not :)

Thanks, we would initially try not to change any pinning.

Regarding multi-host pinning, it's problematic from a UX point of view. We basically have a configuration screen for a single host and a mapping for a single host. It would be possible to filter unsuitable hosts in scheduling; that way you could see the list of compatible hosts (with a similar-enough layout) in the list of available hosts for migration. If we keep the single-host pinning, the VM would only start on the original host when you shut it down (or you have to reconfigure it). But it simplifies the UX aspects a lot (basically no change is needed) and the backend changes are simple. Plus the scheduling side, which we need anyway to decide which hosts are "similar enough".

--- Additional comment from Jon Benedict on 2018-02-21 15:07:40 EST ---

I have 2 "what if" questions:

1 - What if this was part of the "resource reservation" feature? If the HP VM had resource reservation checked, then it would look for the same NUMA availability in the cluster... this leads me to the next question..

2 - What if, from a UX standpoint (looking at comment #4), the configuration screen included 2 hosts? One host would be the initial host that the VM starts on; the 2nd (or additional) hosts would be hosts that have the NUMA capabilities already mapped out in preparation, even if it's just stored in a config file to be used at live migration/HA restart time. This would likely also involve affinity rules... these are just thoughts..
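The "unpin, live migrate, and leave it unpinned" option discussed above, combined with libvirt's ability to accept a different destination domain XML (e.g. via VIR_MIGRATE_PARAM_DEST_XML), could be sketched roughly as follows. This is a minimal illustration using only the Python standard library, not RHV/VDSM code; `strip_pinning` is a hypothetical helper, and only the element names come from the libvirt domain XML schema.

```python
# Sketch (assumption, not RHV code): strip CPU/NUMA pinning from a libvirt
# domain XML so the result can be passed as the destination XML of a live
# migration, leaving the guest unpinned on the target host.
import xml.etree.ElementTree as ET

def strip_pinning(domain_xml: str) -> str:
    root = ET.fromstring(domain_xml)
    # <cputune> holds the vcpupin entries, <numatune> the NUMA memory binding.
    for tag in ("cputune", "numatune"):
        elem = root.find(tag)
        if elem is not None:
            root.remove(elem)
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    xml_in = (
        "<domain type='kvm'><name>hpvm</name>"
        "<cputune><vcpupin vcpu='0' cpuset='2'/></cputune>"
        "<numatune><memory mode='strict' nodeset='0'/></numatune>"
        "</domain>"
    )
    print(strip_pinning(xml_in))
```

Repinning on the destination would then be a separate step (or skipped entirely, as suggested above, until the VM returns to a host with a matching topology).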
Duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1457250 ?
(In reply to Yaniv Kaul from comment #2) > Duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1457250 ? Downstream clone.
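The "prechecks that the host CPUs are the same" mentioned in the original RFE could look roughly like the sketch below. The host dictionaries and field names are assumptions for illustration only, not the actual VDSM/engine data model; the idea is just that pinning is reapplied unchanged only when the CPU model (needed for host-passthrough) and per-node NUMA layout match.

```python
# Hypothetical "same topology" precheck (assumed data shapes, not RHV code).

def numa_layout(host):
    """Normalize a host's NUMA nodes into a comparable tuple of
    (cpu count, memory) per node, ordered by node id."""
    return tuple(
        (node["cpus"], node["memory_mb"])
        for node in sorted(host["numa_nodes"], key=lambda n: n["id"])
    )

def pinning_compatible(src, dst):
    """True if dst offers the same CPU model and NUMA layout as src,
    so the existing pinning map can be reapplied unchanged."""
    return (src["cpu_model"] == dst["cpu_model"]
            and numa_layout(src) == numa_layout(dst))
```

A scheduler filter built on such a check would give exactly the "list of compatible hosts" for migration that Michal describes above.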
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [Found non-acked flags: '{'rhevm-4.3-ga': '?'}', ] For more info please contact: rhv-devops
Please note that this is still an issue and should be re-opened. Tested with RHV 4.2: migration of CPU- and NUMA-pinned VMs is not possible. Furthermore, the CPU pinning should be elaborated in a way which allows pinning to e.g. a socket / half a socket, ... in an easy and intuitive way for the user, rather than providing the pinning string yourself (as of today).
(In reply to Nils Koenig from comment #7)
> Please note that this is still an issue and should be re-opened.
> Tested with RHV 4.2: migration of CPU- and NUMA-pinned VMs is not possible.

This is also supported in 4.2.6, for manual migration only.

> Furthermore, the CPU pinning should be elaborated in a way which allows
> pinning to e.g. a socket / half a socket, ...
> in an easy and intuitive way for the user, rather than providing the
> pinning string yourself (as of today).

This is a separate feature request, not directly related to enabling migration of HP VMs. If you think it's important, can you please open a separate RFE for that?
Verified on ovirt-engine-4.3.2-0.1.el7.noarch & vdsm-4.30.10-1.el7ev.x86_64 TestRun link https://polarion.engineering.redhat.com/polarion/#/project/RHEVM3/testrun?id=05_03_19&tab=records
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:1085