| Summary: | Warn user when VMs with memory snapshots would end up in cluster with newer compatibility version |
|---|---|
| Product: | [oVirt] ovirt-engine |
| Component: | BLL.Virt |
| Status: | CLOSED CURRENTRELEASE |
| Severity: | high |
| Priority: | high |
| Version: | 3.6.0.3 |
| Target Milestone: | ovirt-3.6.3 |
| Target Release: | 3.6.3.1 |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Whiteboard: | |
| Fixed In Version: | |
| Reporter: | Israel Pinto <ipinto> |
| Assignee: | Marek Libra <mlibra> |
| QA Contact: | Israel Pinto <ipinto> |
| Docs Contact: | |
| CC: | bugs, gklein, ipinto, mavital, mgoldboi, michal.skrivanek, sbonazzo, tjelinek, ylavi |
| Flags: | rule-engine: ovirt-3.6.z+, rule-engine: blocker+, mgoldboi: planning_ack+, michal.skrivanek: devel_ack+, mavital: testing_ack+ |
| Doc Type: | Bug Fix |
| Story Points: | --- |
| Clone Of: | |
| Environment: | |
| Last Closed: | 2016-03-11 07:23:07 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| CRM: | |
| Verified Versions: | |
| Category: | --- |
| oVirt Team: | Virt |
| RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- |
| Target Upstream Version: | |
| Bug Depends On: | 1292398 |
| Bug Blocks: | 1285700 |
| Attachments: | |

Doc Text:

> Cause: The user suspends a VM (a snapshot with memory is created) and tries to resume it on a different cluster level.
> Consequence: Unpredictable issues during VM runtime might occur.
> Fix: Suspended VMs should not be resumed on a different cluster level.
> Result: The user gets a warning when changing:
> - the cluster level (Edit Cluster dialog) when there is at least one suspended VM, or
> - the VM's cluster to one with a different compatibility level (Edit VM dialog), or
> - the VM's custom compatibility version (Edit VM dialog)
>
> when the VM is suspended.
Created attachment 1114734 [details]
engine_log_2

Israel, not sure but I think we'll need VDSM logs as well - please attach them.

Created attachment 1114816 [details]
vdsm_log

(In reply to Israel Pinto from comment #3)
> Created attachment 1114816 [details]
> vdsm_log

Thanks - much more helpful (though I could use a third of it, no need for such a big log). If libvirt is in debug mode, then libvirt logs would be great too.

Created attachment 1114820 [details]
vm_qemu_log

(In reply to Yaniv Kaul from comment #4)
> Thanks - much more helpful (though I could use a third of it, no need for such
> a big log). If libvirt is in debug mode, then libvirt logs would be great too.

It's not in debug mode; hope it can help too :)

This is not planned to be supported. Suspended 3.5 VMs can't be resumed in 3.6 compatibility level. That's what the "compatibility level" means. Suspended VMs cannot be "upgraded" without losing their state, hence a forced Power Off is the only option.

Changing the scope of the bug to:
- warn on cluster change
- block resuming the VM when the compatibility level differs (careful - master supports a custom one)

Actually, just the warning. The blocking is tracked under bug 1292398.

Changing title, since the actual blocking of the resume is tracked in https://bugzilla.redhat.com/show_bug.cgi?id=1292398

Verify:
engine 3.5: 3.5.7-0.1.el6ev
engine 3.6: 3.6.3.2-0.1.el6

Scenario:
1. Create a VM and start it on Host_1
2. Suspend the VM
3. Upgrade the engine to 3.6
4. Update the VM's cluster to 3.6

Actual results as expected - a new window opens with the content:
"Title: Operation canceled
Error while executing action: Cannot update a VM in this status. Try stopping the VM first."

PASS
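The behavior verified above amounts to a simple guard on the update action. A minimal Python sketch of that check - purely illustrative (the function name, status values, and return shape are assumptions, not oVirt's actual Java implementation), reusing the error text from the verification scenario:

```python
def can_update_cluster_level(vm_status, current_level, new_level):
    """Hypothetical guard: refuse a compatibility-level change for a suspended VM.

    A suspended VM has a memory snapshot taken at the old cluster level,
    so waking it at a different level is unsafe.
    """
    if vm_status == "suspended" and new_level != current_level:
        return (False, "Cannot update a VM in this status. "
                       "Try stopping the VM first.")
    return (True, None)
```

With this guard, moving a suspended VM from level "3.5" to "3.6" is rejected with the message above, while a stopped or running VM (or a no-op level change) passes.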
Created attachment 1114733 [details]
engine_log_1

Description of problem:
After upgrading the engine from 3.5.7 to 3.6.2, a suspended VM failed to resume; it went down and came up again.

Version-Release number of selected component (if applicable):
engine 3.5: 3.5.7-0.1.el6ev
engine 3.6: 3.6.2-0.1.el6

Setup:
DC 3.5 with cluster 3.5, 2 hosts on 7.1 in the cluster.

Steps to Reproduce:
1. Create a VM and start it on Host_1
2. Suspend the VM
3. Upgrade Host_2 and Host_1 to 7.2
4. Upgrade the cluster level to 3.6
5. Start the VM

Actual results:
The VM goes down, then restarts.

Expected results:
The VM should run and not go down.

Additional info:
I see the error:
VM vm_71-2 is down with error. Exit message: Wake up from hibernation failed: internal error: cannot parse json {"return": , "id": "libvirt-99"}: parse error: unallowed token at this point in JSON text {"return": , "id": "libvirt-99"} (right here) ------^
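The reply quoted in the exit message is genuinely invalid JSON - the value after `"return":` is missing - so any strict parser rejects it, which is why libvirt's wake-up path fails. A quick illustration in Python (this only demonstrates the malformed payload; it is not the libvirt code path):

```python
import json

# The reply quoted in the exit message - note the missing value after "return":
payload = '{"return": , "id": "libvirt-99"}'

try:
    json.loads(payload)
    parse_failed = False
except ValueError:  # json.JSONDecodeError is a ValueError subclass
    parse_failed = True

# parse_failed is True: the payload cannot be parsed as JSON
```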