Bug 1348907 - During cluster level upgrade - warn and mark VMs as pending a configuration change when they are running
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 4.0.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high (1 vote)
Target Milestone: ovirt-4.0.2
Target Release: 4.0.2
Assignee: Marek Libra
QA Contact: sefi litmanovich
URL:
Whiteboard:
Duplicates: 1356198
Depends On:
Blocks: 1356027 1356194 1357513
 
Reported: 2016-06-22 09:44 UTC by Michal Skrivanek
Modified: 2017-11-06 09:07 UTC (History)
CC: 20 users

Fixed In Version:
Clone Of:
Clones: 1356027 1356194
Environment:
Last Closed: 2016-08-22 12:31:31 UTC
oVirt Team: Virt
Embargoed:
rule-engine: ovirt-4.0.z+
rule-engine: planning_ack+
michal.skrivanek: devel_ack+
rule-engine: testing_ack+




Links:
- Red Hat Knowledge Base (Solution) 2442801, last updated 2016-07-19 17:43:05 UTC
- oVirt gerrit 59607 (master, MERGED): webadmin: Warn for running VMs when cluster level change, last updated 2016-07-03 14:17:28 UTC
- oVirt gerrit 59630 (ovirt-engine-4.0, MERGED): webadmin: Warn for running VMs when cluster level change, last updated 2016-07-04 11:48:51 UTC

Description Michal Skrivanek 2016-06-22 09:44:31 UTC
During a cluster level (CL) change the VMs' virtual "hardware" changes, and so does the system's behavior towards those VMs. Therefore we currently require VMs to be Down during the CL change so that they start up with new parameters matching the new CL (unless they use the per-VM CL override introduced in 4.0). This is a strict and constraining requirement, since it forces all of the VMs to be down at the same time.
We can make it less harsh by issuing a warning on the matter and flagging all running VMs with the configuration-change icon; this corresponds to the inherent config change that will happen on the next VM start.

Until the VMs are restarted their behavior is not going to be correct, but highlighting them in the UI as "pending change" should make it intuitive enough that a restart is needed.
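
To make the intent concrete, here is a minimal sketch of the proposed check; the class, record and method names are illustrative only, not the actual engine code. On a CL change, every running VM without a per-VM override gets the warning and the pending-change flag.

import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch only; all names here are hypothetical.
class ClusterUpgradeCheck {

    enum VmStatus { UP, DOWN }

    // customCompatibilityVersion is null when the VM inherits the cluster level
    record Vm(String name, VmStatus status, String customCompatibilityVersion) {}

    // VMs to flag as "pending a configuration change" on a CL change:
    // every running VM that does not carry a per-VM compatibility override.
    static List<Vm> vmsPendingConfigChange(List<Vm> clusterVms) {
        return clusterVms.stream()
                .filter(vm -> vm.status() == VmStatus.UP)
                .filter(vm -> vm.customCompatibilityVersion() == null)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Vm> vms = List.of(
                new Vm("web-1", VmStatus.UP, null),    // flagged, restart needed
                new Vm("db-1", VmStatus.DOWN, null),   // already down, starts with new CL
                new Vm("legacy", VmStatus.UP, "3.6")); // per-VM override, unaffected

        vmsPendingConfigChange(vms).forEach(vm ->
                System.out.println("Warning: " + vm.name()
                        + " must be restarted to apply the new cluster level"));
    }
}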

Comment 1 Sandro Bonazzola 2016-06-22 11:54:07 UTC
Can we get this into 4.0.1 instead of 4.0.2?

Comment 2 Michal Skrivanek 2016-06-22 12:48:06 UTC
As soon as it's ready; the actual TR will be adjusted then. It looks fine so far, but it needs really thorough testing.

Comment 3 Ralf Schenk 2016-06-29 13:59:09 UTC
Am I correct that in the meantime there is no way to upgrade the cluster level when using hosted-engine, because I can't set all hosts to maintenance and still access the engine?

Comment 4 Carl Thompson 2016-06-29 15:16:42 UTC
(In reply to Ralf Schenk from comment #3)
> Am I correct that in the meantime there is no way to upgrade the cluster
> level when using hosted-engine, because I can't set all hosts to
> maintenance and still access the engine?

Yes. I had to shut down all the VMs and the engine and manually edit the database to change the cluster level. See bug 1341023.
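
For anyone in the same spot, the edit looked roughly like the JDBC sketch below. The table and column names (vds_groups, compatibility_version) are assumptions based on the 4.0-era schema, so verify them against your own database and take a backup first.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Rough sketch of the manual workaround; the engine must be down while you do this.
// Table/column names (vds_groups, compatibility_version) are assumptions from
// the 4.0-era schema; verify them and back up the database before editing.
class ManualClusterLevelEdit {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/engine", "engine", args[0]);
             PreparedStatement stmt = db.prepareStatement(
                     "UPDATE vds_groups SET compatibility_version = ? WHERE name = ?")) {
            stmt.setString(1, "4.0");      // new cluster level
            stmt.setString(2, "Default");  // cluster name
            System.out.println("Rows updated: " + stmt.executeUpdate());
        }
    }
}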

Comment 5 Carl Thompson 2016-06-29 15:19:15 UTC
(In reply to Ralf Schenk from comment #3)
> Am I correct that in the meantime there is no way to upgrade the cluster
> level when using hosted-engine, because I can't set all hosts to
> maintenance and still access the engine?

Oh, but I would not upgrade yet if self-hosted engine HA is important to you; it's broken in 4.0. See bug 1343005.

Comment 7 Marina Kalinin 2016-07-12 18:03:57 UTC
Eyal, can you please take a look?

I couldn't clone this bug to downstream.
Error in Jenkins:

Bug 1348907 fails criteria:
    - Flag ovirt-3.6.z[?+] not found
    - Flag ovirt-4.0.0+ not found
    - Flag blocker is missing (+) value
    - Flag exception is missing (+) value

Comment 8 Eyal Edri 2016-07-12 18:48:39 UTC
I've sent a patch to fix the job to work with 4.0.z instead of 4.0.0.
I see the bug was already cloned or its flags were removed, so I can't check it;
if you have another bug you can try it there, as the fix is cherry-picked into the job.

Comment 11 Michal Skrivanek 2016-07-13 09:46:45 UTC
We can take advantage of the per-VM custom compatibility override introduced in 4.0 and temporarily change the VM's compatibility level to the old cluster level, then use the next_run configuration to revert to the default (no override, inheriting the cluster's level) on VM shutdown.
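
A minimal sketch of the resulting resolution logic, with illustrative names rather than the engine's actual API:

// Illustrative only; the real engine computes this internally.
class CompatibilityVersionSketch {

    // Per-VM override wins; otherwise the VM inherits the cluster's level.
    static String effectiveLevel(String vmCustomLevel, String clusterLevel) {
        return vmCustomLevel != null ? vmCustomLevel : clusterLevel;
    }

    public static void main(String[] args) {
        // Cluster upgraded 3.6 -> 4.0; a running VM is temporarily pinned to 3.6.
        System.out.println(effectiveLevel("3.6", "4.0")); // 3.6 until shutdown
        // The next_run config clears the override on shutdown, so afterwards:
        System.out.println(effectiveLevel(null, "4.0"));  // 4.0 on next start
    }
}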

Comment 14 Marina Kalinin 2016-07-13 16:20:11 UTC
Let's move the customer discussion to the d/s clone:
https://bugzilla.redhat.com/show_bug.cgi?id=1356194
And this documentation bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1356198

Comment 15 Yaniv Lavi 2016-07-19 11:28:12 UTC
*** Bug 1356198 has been marked as a duplicate of this bug. ***

Comment 20 Michal Skrivanek 2016-08-04 09:58:36 UTC
Note: the warning messages shown during CL upgrade for various VM states are not optimal; this will be handled in bug 1356027.

Comment 21 sefi litmanovich 2016-08-21 15:45:33 UTC
Verified with rhevm-4.0.2.7-0.1.el7ev.noarch

Comment 22 Klaas Demter 2017-11-06 08:16:43 UTC
Hi,
Reading this issue, I'm not sure whether this should work when a VM reboots itself, i.e. when I run systemctl reboot within the VM. This doesn't work for me. Is it supposed to work like this, or do I have to initiate the restart through the manager?

Greetings
Klaas

Comment 23 Marek Libra 2017-11-06 09:07:46 UTC
Hi Klaas, right, a restart via the engine is expected. Alternatively, shut the VM down and start it again.
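
For completeness, here is a sketch of such a restart through the engine using the oVirt Java SDK v4. The host, credentials and VM name are placeholders, and status polling and error handling are omitted.

import org.ovirt.engine.sdk4.Connection;
import org.ovirt.engine.sdk4.ConnectionBuilder;
import org.ovirt.engine.sdk4.services.VmService;
import org.ovirt.engine.sdk4.services.VmsService;
import org.ovirt.engine.sdk4.types.Vm;

class RestartViaEngine {
    public static void main(String[] args) throws Exception {
        Connection conn = ConnectionBuilder.connection()
                .url("https://engine.example.com/ovirt-engine/api") // placeholder
                .user("admin@internal")
                .password(args[0])
                .insecure(true) // lab use only; configure a CA file instead
                .build();
        try {
            VmsService vmsService = conn.systemService().vmsService();
            Vm vm = vmsService.list().search("name=myvm").send().vms().get(0);
            VmService vmService = vmsService.vmService(vm.id());

            // A reboot from inside the guest keeps the old configuration; a
            // stop/start through the engine applies the pending (next_run) one.
            vmService.shutdown().send();
            // ...wait for the VM to report Down before starting it again...
            vmService.start().send();
        } finally {
            conn.close();
        }
    }
}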

