Bug 1348907 - During cluster level upgrade - warn and mark VMs as pending a configuration change when they are running
Status: CLOSED CURRENTRELEASE
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 4.0.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high (1 vote)
Target Milestone: ovirt-4.0.2
Target Release: 4.0.2
Assigned To: Marek Libra
QA Contact: sefi litmanovich
Docs Contact:
Duplicates: 1356198
Depends On:
Blocks: 1356027 1356194 1357513
 
Reported: 2016-06-22 05:44 EDT by Michal Skrivanek
Modified: 2017-11-06 04:07 EST
CC List: 20 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Previously, cluster compatibility version upgrades were blocked if there was a running virtual machine in the cluster. Now, the user is informed about running/suspended virtual machines in a cluster when changing the cluster version. All such virtual machines are marked with a Next Run Configuration symbol to denote the requirement for rebooting them as soon as possible after the cluster version upgrade.
Story Points: ---
Clone Of:
Clones: 1356027 1356194
Environment:
Last Closed: 2016-08-22 08:31:31 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
rule-engine: ovirt-4.0.z+
rule-engine: planning_ack+
michal.skrivanek: devel_ack+
rule-engine: testing_ack+


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2442801 None None None 2016-07-19 13:43 EDT
oVirt gerrit 59607 master MERGED webadmin: Warn for running VMs when cluster level change 2016-07-03 10:17 EDT
oVirt gerrit 59630 ovirt-engine-4.0 MERGED webadmin: Warn for running VMs when cluster level change 2016-07-04 07:48 EDT

Description Michal Skrivanek 2016-06-22 05:44:31 EDT
During a cluster level (CL) change, the VMs' "hardware" changes and the system's behavior towards those VMs changes. Therefore we currently require VMs to be Down during the CL change so that they start up with the new parameters matching the new CL (unless they use the per-VM CL override introduced in 4.0). This is a pretty strict and constraining requirement, since it requires all of the VMs to be down at the same time.
We can make it a bit less harsh by issuing a warning on that matter and flagging all the running VMs with a configuration change icon, which corresponds to the inherent config change that will happen on the next VM start.

Until the VMs are restarted their behavior is not going to be correct, but highlighting them in the UI as "pending change" should be intuitive enough to convey that a restart is needed.
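
A minimal, self-contained Java sketch of the warn-and-flag flow described above; all class, field, and method names are illustrative placeholders, not the actual ovirt-engine code:

    import java.util.ArrayList;
    import java.util.List;

    public class ClusterLevelChangeSketch {

        // Illustrative stand-in for a VM record; a null override means "inherit from cluster".
        static class Vm {
            final String name;
            final boolean runningOrSuspended;
            final String customCompatibilityVersion; // per-VM CL override introduced in 4.0
            boolean nextRunConfigurationExists;

            Vm(String name, boolean runningOrSuspended, String customCompatibilityVersion) {
                this.name = name;
                this.runningOrSuspended = runningOrSuspended;
                this.customCompatibilityVersion = customCompatibilityVersion;
            }
        }

        // Warn about running/suspended VMs and mark them as having a pending
        // ("next run") configuration change when the cluster level is raised.
        static List<String> warnAndFlag(List<Vm> clusterVms, String newClusterLevel) {
            List<String> warnings = new ArrayList<>();
            for (Vm vm : clusterVms) {
                // VMs with their own compatibility override are unaffected by the cluster change.
                if (vm.runningOrSuspended && vm.customCompatibilityVersion == null) {
                    vm.nextRunConfigurationExists = true; // drives the "pending change" icon in the UI
                    warnings.add("VM " + vm.name + " should be restarted as soon as possible"
                            + " to apply cluster level " + newClusterLevel);
                }
            }
            return warnings;
        }

        public static void main(String[] args) {
            List<Vm> vms = List.of(
                    new Vm("web01", true, null),       // running, inherits cluster level -> flagged
                    new Vm("db01", false, null),       // down -> not flagged
                    new Vm("legacy01", true, "3.6"));  // per-VM override -> not flagged
            warnAndFlag(vms, "4.0").forEach(System.out::println);
        }
    }
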
Comment 1 Sandro Bonazzola 2016-06-22 07:54:07 EDT
Can we get this into 4.0.1 instead of 4.0.2?
Comment 2 Michal Skrivanek 2016-06-22 08:48:06 EDT
As soon as it's ready; the actual TR is going to be adjusted then. It looks fine so far, but it needs really thorough testing.
Comment 3 Ralf Schenk 2016-06-29 09:59:09 EDT
Am I correct that in the meantime there is no way to upgrade the cluster level when using hosted-engine, because I can't set all hosts to maintenance and still access the engine?
Comment 4 Carl Thompson 2016-06-29 11:16:42 EDT
(In reply to Ralf Schenk from comment #3)
> Am I correct that in the meantime there is no way to upgrade the cluster
> level when using hosted-engine, because I can't set all hosts to maintenance
> and still access the engine?

Yes. I had to shut all the VMs and the engine down and edit the database by hand to change the cluster level. See bug 1341023.
Comment 5 Carl Thompson 2016-06-29 11:19:15 EDT
(In reply to Ralf Schenk from comment #3)
> Am I correct that in the meantime there is no way to upgrade the cluster
> level when using hosted-engine, because I can't set all hosts to maintenance
> and still access the engine?

Oh, but I would not upgrade yet if self-hosted engine HA is important to you; it's broken in 4.0. See bug 1343005.
Comment 7 Marina 2016-07-12 14:03:57 EDT
Eyal, can you please take a look?

I couldn't clone this bug to downstream.
Error in Jenkins:

Bug 1348907 fails criteria:
    - Flag ovirt-3.6.z[?+] not found
    - Flag ovirt-4.0.0+ not found
    - Flag blocker is missing (+) value
    - Flag exception is missing (+) value
Comment 8 Eyal Edri 2016-07-12 14:48:39 EDT
I've sent a patch to fix the job to work with 4.0.z instead of 4.0.0.
I see the bug was already cloned or the flags were removed, so I can't check it;
if you have another bug you can try it, the fix is cherry-picked in the job.
Comment 11 Michal Skrivanek 2016-07-13 05:46:45 EDT
We can take advantage of the per-VM custom compatibility override introduced in 4.0 and temporarily change the VM's compatibility level to the old cluster level. We can then use the next_run configuration to revert back to the default (no override, inheriting the cluster's level) on VM shutdown.
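
A minimal, self-contained Java sketch of this approach (illustrative names only, not the real ovirt-engine classes): pin a running VM to the old cluster level via the per-VM override, and queue a next_run configuration that clears the override so the VM inherits the new cluster level after its next restart:

    public class TemporaryCompatOverrideSketch {

        // Illustrative configuration holder; null means "inherit from the cluster".
        static class VmConfig {
            String customCompatibilityVersion;
        }

        static class Vm {
            boolean runningOrSuspended;
            VmConfig current = new VmConfig(); // configuration the VM is running with now
            VmConfig nextRun;                  // configuration applied on the next cold start
        }

        static void pinToOldLevelUntilRestart(Vm vm, String oldClusterLevel) {
            if (vm.runningOrSuspended && vm.current.customCompatibilityVersion == null) {
                // Live config: keep behaving as the old cluster level while the VM stays up.
                vm.current.customCompatibilityVersion = oldClusterLevel;
                // Next-run config: no override, i.e. inherit the cluster's new level.
                vm.nextRun = new VmConfig();
            }
        }

        public static void main(String[] args) {
            Vm vm = new Vm();
            vm.runningOrSuspended = true;
            pinToOldLevelUntilRestart(vm, "3.6");
            System.out.println("live override:     " + vm.current.customCompatibilityVersion);
            System.out.println("next-run override: " + vm.nextRun.customCompatibilityVersion);
        }
    }
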
Comment 14 Marina 2016-07-13 12:20:11 EDT
Let's move the customer's discussion to the downstream clone:
https://bugzilla.redhat.com/show_bug.cgi?id=1356194
And this documentation bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1356198
Comment 15 Yaniv Lavi 2016-07-19 07:28:12 EDT
*** Bug 1356198 has been marked as a duplicate of this bug. ***
Comment 20 Michal Skrivanek 2016-08-04 05:58:36 EDT
Note: warning messages during CL upgrade for various VM states are not optimal; this will be handled in bug 1356027.
Comment 21 sefi litmanovich 2016-08-21 11:45:33 EDT
Verified with rhevm-4.0.2.7-0.1.el7ev.noarch
Comment 22 Klaas Demter 2017-11-06 03:16:43 EST
Hi,
Reading this issue, I'm not sure whether this should work when a VM reboots itself, i.e. when I run systemctl reboot within the VM. This doesn't work for me -- is it supposed to work like this, or do I have to initiate a restart through the manager?

Greetings
Klaas
Comment 23 Marek Libra 2017-11-06 04:07:46 EST
Hi Klaas, right - a restart via the engine is expected, or shut down the VM and start it again.
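
For reference, a restart initiated through the engine can also be scripted. This is only a sketch using the oVirt Java SDK; the URL, credentials, and VM name are placeholders, and the exact SDK calls should be checked against the SDK version in use:

    import org.ovirt.engine.sdk4.Connection;
    import org.ovirt.engine.sdk4.ConnectionBuilder;
    import org.ovirt.engine.sdk4.services.VmsService;
    import org.ovirt.engine.sdk4.types.Vm;

    public class RebootViaEngine {
        public static void main(String[] args) throws Exception {
            // Placeholder engine URL and credentials.
            Connection connection = ConnectionBuilder.connection()
                    .url("https://engine.example.com/ovirt-engine/api")
                    .user("admin@internal")
                    .password("password")
                    .insecure(true) // demo only; verify the engine CA in real use
                    .build();
            try {
                VmsService vmsService = connection.systemService().vmsService();
                // Look up the VM by name ("myvm" is a placeholder).
                Vm vm = vmsService.list().search("name=myvm").send().vms().get(0);
                // Rebooting via the engine lets it apply the pending next-run configuration;
                // a guest-initiated "systemctl reboot" does not.
                vmsService.vmService(vm.id()).reboot().send();
            } finally {
                connection.close();
            }
        }
    }
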
