During a cluster level (CL) change the VMs' "hardware" changes, and so does the system's behavior towards those VMs. Therefore we now require VMs to be down during the CL change, so that they start up with new parameters matching the new CL (unless they use the per-VM CL override introduced in 4.0). This is a pretty strict and constraining requirement, since it requires all of the VMs to be down at the same time. We can make it a bit less harsh by issuing a warning on the matter and flagging all running VMs with a configuration-change icon, which corresponds to the inherent config change that will happen on the next VM start. Until the VMs are restarted their behavior is not going to be correct, but highlighting it in the UI as a "pending change" should be intuitive enough to make clear that a restart is needed.
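For reference, raising the cluster compatibility version can look roughly like this through the Python SDK (ovirtsdk4). This is only a sketch, not the supported procedure: the engine URL, credentials and the cluster name 'Default' are placeholders, and running VMs would still need a restart to pick up the new level.

```python
# Sketch: bump a cluster's compatibility version via ovirtsdk4.
# URL, credentials and the cluster name are placeholders (assumptions).
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

clusters_service = connection.system_service().clusters_service()
cluster = clusters_service.list(search='name=Default')[0]

# Update only the compatibility version of the cluster.
clusters_service.cluster_service(cluster.id).update(
    types.Cluster(version=types.Version(major=4, minor=0)),
)

connection.close()
```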
Can we get this into 4.0.1 instead of 4.0.2?
As soon as it's ready; the actual TR will be adjusted then. It looks fine so far, but it needs really thorough testing.
Am I correct that in the meantime there is no way to upgrade the cluster level when using hosted-engine, because I can't set all hosts to maintenance and still access the engine?
(In reply to Ralf Schenk from comment #3)
> Am I correct that in the meantime there is no way to upgrade the cluster
> level when using hosted-engine, because I can't set all hosts to maintenance
> and still access the engine?

Yes. I had to shut down all the VMs and the engine and manually edit the database to change the cluster level. See bug 1341023.
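For completeness, below is a minimal sketch of that kind of manual edit. It is not a supported procedure, and it assumes the engine service is stopped, a DB backup has been taken, the database is named 'engine', and the 4.0 schema stores the value in cluster.compatibility_version; the credentials and cluster name are placeholders.

```python
# Unsupported sketch of the manual DB edit described above.
# Assumptions: engine stopped, DB 'engine', value kept in
# cluster.compatibility_version, cluster named 'Default'.
import psycopg2

conn = psycopg2.connect(dbname='engine', user='engine',
                        host='localhost', password='...')
with conn:
    with conn.cursor() as cur:
        cur.execute(
            "UPDATE cluster SET compatibility_version = %s WHERE name = %s",
            ('4.0', 'Default'),
        )
conn.close()
```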
(In reply to Ralf Schenk from comment #3)
> Am I correct that in the meantime there is no way to upgrade the cluster
> level when using hosted-engine, because I can't set all hosts to maintenance
> and still access the engine?

Oh, but I would not upgrade yet if self-hosted engine HA is important to you; it's broken in 4.0. See bug 1343005.
Eyal, can you please take a look? I couldn't clone this bug to downstream. Error in Jenkins:

Bug 1348907 fails criteria:
- Flag ovirt-3.6.z[?+] not found
- Flag ovirt-4.0.0+ not found
- Flag blocker is missing (+) value
- Flag exception is missing (+) value
I've sent a patch to fix the job to work with 4.0.z instead of 4.0.0. I see the bug was already cloned or the flags were removed, so I can't check it. If you have another bug you can try it; the fix is cherry-picked in the job.
We can take advantage of the VM custom compatibility override introduced in 4.0 and temporarily change the VM's compat level to the old cluster level. We can use the next_run config to revert back to the default (no override, inheriting the cluster's level) on VM shutdown.
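As an illustration of the same override through the public API (not the engine-internal implementation), something like the following should pin a single VM to the old level via the Python SDK; custom_compatibility_version is the per-VM override mentioned above, while the connection details and VM name are placeholders.

```python
# Sketch: pin one VM to the old compatibility level using the per-VM
# override from 4.0, so it can keep running while the cluster is raised.
# Connection details and the VM name 'myvm' are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]

# Override the VM's compatibility version to the old cluster level (3.6).
vms_service.vm_service(vm.id).update(
    types.Vm(custom_compatibility_version=types.Version(major=3, minor=6)),
)
# The proposed fix then relies on the next_run configuration to drop the
# override again (back to inheriting the cluster level) once the VM is
# shut down and started anew.

connection.close()
```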
Let's move the customer's discussion to the d/s clone: https://bugzilla.redhat.com/show_bug.cgi?id=1356194 And to this documentation bug: https://bugzilla.redhat.com/show_bug.cgi?id=1356198
*** Bug 1356198 has been marked as a duplicate of this bug. ***
Note: the warning messages during CL upgrade for various VM states are not optimal; this will be handled in bug 1356027.
Verified with rhevm-4.0.2.7-0.1.el7ev.noarch
Hi, reading this issue I'm not sure whether this should work when a VM reboots itself, i.e. when I run systemctl reboot within the VM. This doesn't work for me -- is it supposed to work like this, or do I have to initiate a restart through the manager? Greetings, Klaas
Hi Klaas, right, a restart via the engine is expected. Or shut down the VM and start it once again.
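For anyone scripting it, a restart through the engine could look roughly like this with the Python SDK; the connection details and VM name are placeholders, and the wait loop is a simplistic sketch.

```python
# Sketch: restart a VM through the engine (stop, wait for Down, start)
# so the pending configuration change is applied. Placeholders throughout.
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)

vm_service.stop()
while vm_service.get().status != types.VmStatus.DOWN:
    time.sleep(5)
vm_service.start()

connection.close()
```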