+++ This bug was initially created as a clone of Bug #1348907 +++

--- Additional comment from Michal Skrivanek on 2016-07-13 11:46:45 CEST ---

We can take advantage of the VM custom compatibility override introduced in 4.0 and temporarily change the VM's compatibility level to the old cluster's. We can use the next_run configuration to revert back to the default (no override, inheriting the cluster's level) on VM shutdown.
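For illustration, a minimal sketch of that per-VM override through the oVirt Python SDK (ovirtsdk4); the engine URL, credentials, and VM name are placeholders, not taken from this bug, and the automatic next-run revert on shutdown is the engine flow described above, not something this sketch performs itself.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details for a 4.0 engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,  # fine for a lab; use ca_file in production
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]  # 'myvm' is hypothetical
vm_service = vms_service.vm_service(vm.id)

# Pin the VM to the old 3.6 compatibility level so it keeps 3.6 semantics
# even after its cluster is raised to 4.0.
vm_service.update(
    types.Vm(
        custom_compatibility_version=types.Version(major=3, minor=6),
    ),
)

# The engine flow described above records a next-run configuration that
# drops the override, so the VM inherits the cluster level again after
# its next shutdown; here we only check whether such a config is pending.
vm = vm_service.get()
print('Pending next-run configuration:', vm.next_run_configuration_exists)

connection.close()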
Not to be confused with the original bug:
<mskrivanek> mku: that's another improvement of the process, but that is not backportable to 3.6 as it depends on a different 4.0 feature
The way warnings during the cluster level upgrade should work is:
- during CL update - warn on suspended VMs, VMs with snapshots with RAM, running VMs, paused VMs
- after CL upgrade - show the reconfig icon for suspended VMs, running VMs, paused VMs
- on snapshot preview - block restoring with RAM
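To make the intended policy unambiguous, here is the same set of rules restated as a small Python sketch; this is not the engine's actual (Java) code, and the type and field names are made up for illustration.

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class VmStatus(Enum):
    RUNNING = 'running'
    PAUSED = 'paused'
    SUSPENDED = 'suspended'
    DOWN = 'down'

@dataclass
class Snapshot:
    has_memory: bool  # snapshot was taken with RAM

@dataclass
class Vm:
    status: VmStatus
    snapshots: List[Snapshot] = field(default_factory=list)

def warn_on_cluster_level_update(vm: Vm) -> bool:
    """During CL update: warn on suspended, running and paused VMs,
    and on VMs that have snapshots with RAM."""
    return (
        vm.status in (VmStatus.SUSPENDED, VmStatus.RUNNING, VmStatus.PAUSED)
        or any(snap.has_memory for snap in vm.snapshots)
    )

def needs_reconfig_icon(vm: Vm) -> bool:
    """After CL upgrade: show the reconfig (pending restart) icon for
    suspended, running and paused VMs."""
    return vm.status in (VmStatus.SUSPENDED, VmStatus.RUNNING, VmStatus.PAUSED)

def block_snapshot_preview(snapshot: Snapshot, restore_memory: bool) -> bool:
    """On snapshot preview: block restoring with RAM."""
    return restore_memory and snapshot.has_memory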
Pending testing with hosted engine (HE); this may still cause issues with many running VMs due to bug 1366786, which is targeted for 4.0.4.
Tested upgrade from cluster 3.6 (2 hosts with vdsm-4.17.33-1) to cluster 4.0 (upgraded the hosts to vdsm-4.18.11-1). On 3.6, created and ran VMs with various configurations and kept the VMs running during/after the upgrade.

Tested flows/configurations:
1. Snapshot creation before upgrade (with and without memory) and restoring after upgrade:
   without memory - the VM snapshot is restored and the VM starts with a 4.0 XML.
   with memory - we get the expected warning message; after confirming, the VM is restored and starts with a 3.6 XML.
   Both are expected behaviours.
2. Snapshot create-preview-undo-clone-remove after upgrade - passed.
3. Migration after upgrade - passed.
5. Run Once + cloud-init - tested Run Once before the upgrade and used cloud-init to set hostname/username/password, set a custom CPU type, and changed the console from SPICE to VNC - all configurations work as expected. After the upgrade, powered off the VM - the VM's configuration reverted back and the VM starts with a 4.0 XML. - passed.
6. Consoles - both VNC and SPICE consoles had no regression after the upgrade. On SPICE, checked USB support, file transfer, and copy/paste support. - passed.
7. Memory hotplug after upgrade - passed.
8. CPU hotplug after upgrade - passed. Tested twice: once with 'HotPlugCpuSupported' set to 'false' for 3.6 with arch x86_64 - couldn't hot plug and got the expected error message; once with it set to 'true' - hot plug succeeded. (See the SDK sketch at the end of this comment.)
9. NIC hotplug - passed.
10. Disk hotplug - passed.
11. HA VM - after the upgrade, killed the VM's process and saw that it starts right away - passed.
12. Hyper-V enlightenments for Windows VMs - passed: checked at the configuration level only - VMs created as Windows VMs had all the Hyper-V flags enabled in the XML, whereas Linux VMs did not.
13. I/O threads - set a VM with 4 I/O threads - the configuration wasn't changed after the upgrade. - passed.

Problems:
Both problems found in the pre-integration build persist:
1. An HA VM isn't updated correctly when its process is killed at the QEMU level and the engine invokes a restart - https://bugzilla.redhat.com/show_bug.cgi?id=1369521
2. When upgrading a cluster with the HE VM, the HE VM's XML doesn't change and no restart marker appears - https://bugzilla.redhat.com/show_bug.cgi?id=1370120

Verifying, as there are open bugs for these issues and in most flows the upgrade works as expected.
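For reference, a minimal sketch of the CPU hotplug step exercised in flow 8, using the oVirt Python SDK (ovirtsdk4); the engine URL, credentials, and VM name are placeholders. Whether the plug succeeds depends on the 'HotPlugCpuSupported' setting mentioned above, which can be read on the engine host with engine-config -g HotPlugCpuSupported.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details; 'myvm' is a hypothetical running VM.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)

# Raising the socket count of a running VM requests a CPU hotplug. With
# HotPlugCpuSupported=false for the VM's compatibility level, the engine
# is expected to reject this with an error, as observed in flow 8.
vm_service.update(
    types.Vm(
        cpu=types.Cpu(
            topology=types.CpuTopology(sockets=2, cores=1, threads=1),
        ),
    ),
)

connection.close()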