Bug 1348907
| Field | Value |
|---|---|
| Summary | During cluster level upgrade - warn and mark VMs as pending a configuration change when they are running |
| Product | [oVirt] ovirt-engine |
| Reporter | Michal Skrivanek <michal.skrivanek> |
| Component | BLL.Virt |
| Assignee | Marek Libra <mlibra> |
| Status | CLOSED CURRENTRELEASE |
| QA Contact | sefi litmanovich <slitmano> |
| Severity | high |
| Docs Contact | |
| Priority | high |
| Version | 4.0.0 |
| CC | aperotti, baptiste.agasse, bugs, c.handel, dmoessne, eedri, jentrena, jiri.slezka, klaas, miac.romanov, michal.skrivanek, mkalinin, mlibra, mtessun, redhat, rs, sbonazzo, sites-redhat, trichard, ylavi |
| Target Milestone | ovirt-4.0.2 |
| Flags | rule-engine: ovirt-4.0.z+, rule-engine: planning_ack+, michal.skrivanek: devel_ack+, rule-engine: testing_ack+ |
| Target Release | 4.0.2 |
| Hardware | Unspecified |
| OS | Unspecified |
| Whiteboard | |
| Fixed In Version | |
| Doc Type | Enhancement |
| Doc Text | Previously, cluster compatibility version upgrades were blocked if there was a running virtual machine in the cluster. Now, the user is informed about running or suspended virtual machines in a cluster when changing the cluster version. All such virtual machines are marked with a Next Run Configuration symbol to denote that they need to be rebooted as soon as possible after the cluster version upgrade. |
| Story Points | --- |
| Clone Of | |
| Clones | 1356027, 1356194 (view as bug list) |
| Environment | |
| Last Closed | 2016-08-22 12:31:31 UTC |
| Type | Bug |
| Regression | --- |
| Mount Type | --- |
| Documentation | --- |
| CRM | |
| Verified Versions | |
| Category | --- |
| oVirt Team | Virt |
| RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- |
| Target Upstream Version | |
| Embargoed | |
| Bug Depends On | |
| Bug Blocks | 1356027, 1356194, 1357513 |
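The Doc Text above mentions that affected virtual machines are flagged with a Next Run Configuration symbol after the cluster version change. As an illustration only, not part of this bug's changes, the following sketch uses the oVirt Python SDK (ovirt-engine-sdk4) to list such VMs in a cluster; the engine URL, credentials, and cluster name "Default" are placeholders, and the `next_run_configuration_exists` attribute is assumed from the v4 API.

```python
# Illustrative sketch: list VMs of a cluster that are still pending a
# configuration change after a cluster compatibility version upgrade.
# Assumes ovirt-engine-sdk4; URL, credentials and cluster name are placeholders.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()

# 'next_run_configuration_exists' is assumed to be the REST/SDK counterpart of
# the "Next Run Configuration" symbol shown in the Administration Portal.
for vm in vms_service.list(search='cluster=Default'):
    if vm.next_run_configuration_exists:
        print('%s is pending a configuration change (reboot required)' % vm.name)

connection.close()
```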
Description: Michal Skrivanek, 2016-06-22 09:44:31 UTC
Can we get this into 4.0.1 instead of 4.0.2?

As soon as it's ready; the actual TR is going to be adjusted then. It looks fine so far, but it needs really thorough testing.

Am I correct that in the meantime there is no way to upgrade the cluster level when using hosted-engine, because I can't set all hosts to maintenance and still access the engine?

(In reply to Ralf Schenk from comment #3)
> Am I correct that in the meantime there is no way to upgrade the cluster level
> when using hosted-engine, because I can't set all hosts to maintenance and
> still access the engine?

Yes. I had to shut down all the VMs and the engine and manually edit the database to change the cluster level. See bug 1341023.

(In reply to Ralf Schenk from comment #3)
> Am I correct that in the meantime there is no way to upgrade the cluster level
> when using hosted-engine, because I can't set all hosts to maintenance and
> still access the engine?

Oh, but I would not upgrade yet if self-hosted engine HA is important to you; it is broken in 4.0. See bug 1343005.

Eyal, can you please take a look? I couldn't clone this bug to downstream. Error in Jenkins:

Bug 1348907 fails criteria:
- Flag ovirt-3.6.z[?+] not found
- Flag ovirt-4.0.0+ not found
- Flag blocker is missing (+) value
- Flag exception is missing (+) value

I've sent a patch to fix the job to work with 4.0.z instead of 4.0.0. I see the bug was already cloned or the flags were removed, so I can't check it; if you have another bug you can try it, the fix is cherry-picked in the job.

We can take advantage of the VM custom compatibility override introduced in 4.0 and temporarily change the VM's compatibility level to the old cluster level. We can use the next_run configuration to revert back to the default (no override, inheriting the cluster's level) on VM shutdown. A sketch of how this looks from the SDK follows at the end of this thread.

Let's move the customer's discussion to the downstream clone: https://bugzilla.redhat.com/show_bug.cgi?id=1356194
And this documentation bug: https://bugzilla.redhat.com/show_bug.cgi?id=1356198

*** Bug 1356198 has been marked as a duplicate of this bug. ***

Note: the warning messages shown during cluster level upgrade for various VM states are not optimal; they will be handled in bug 1356027.

Verified with rhevm-4.0.2.7-0.1.el7ev.noarch.

Hi, reading this issue I'm not sure whether this should work when a VM reboots itself, i.e. when I run systemctl reboot within the VM. This doesn't work for me -- is it supposed to work like this, or do I have to initiate a restart through the manager? Greetings, Klaas

Hi Klaas, right - a restart via the engine is expected. Or shut the VM down and start it once again.
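The two technical points in the thread, the temporary per-VM compatibility override reverted through the next_run configuration and the need to restart through the engine rather than from inside the guest, can be illustrated with the oVirt Python SDK (ovirt-engine-sdk4). This is a sketch only, not code from this bug's patches: the engine URL, credentials, and the VM name "myvm" are placeholders, and the `custom_compatibility_version` and `next_run_configuration_exists` attributes are assumed from the v4 API.

```python
# Illustrative sketch: apply a pending next-run configuration by restarting the
# VM through the engine (an in-guest reboot does not apply it, per the thread).
# Assumes ovirt-engine-sdk4; URL, credentials and VM name are placeholders.
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='secret',
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)

# While the VM keeps running on the old level, the temporary override discussed
# above should be visible as its custom compatibility version (None once reverted).
print(vm.name, vm.custom_compatibility_version)

# Restart through the engine so the next-run configuration (including the
# reverted compatibility level) is picked up on the next start.
if vm.next_run_configuration_exists and vm.status == types.VmStatus.UP:
    vm_service.shutdown()
    while vm_service.get().status != types.VmStatus.DOWN:
        time.sleep(5)
    vm_service.start()

connection.close()
```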