Bug 1266973
| Field | Value |
|---|---|
| Summary | Cannot start or revert VM with failed stateless snapshot |
| Product | [oVirt] ovirt-engine |
| Component | General |
| Status | CLOSED WORKSFORME |
| Severity | medium |
| Priority | high |
| Version | 3.5.1.1 |
| Target Milestone | ovirt-4.0.0-beta |
| Reporter | Brian Sipos <BSipos> |
| Assignee | Arik <ahadas> |
| CC | amureini, BSipos, bugs, mgoldboi, tjelinek |
| Flags | tjelinek: ovirt-4.0.0?; mgoldboi: testing_plan_complete?; rule-engine: planning_ack?; rule-engine: devel_ack?; rule-engine: testing_ack? |
| Hardware | x86_64 |
| OS | Linux |
| Doc Type | Bug Fix |
| Type | Bug |
| oVirt Team | Virt |
| Last Closed | 2016-05-23 09:34:45 UTC |
Description (Brian Sipos, 2015-09-28 17:24:13 UTC):

This may be the same issue as bug 1072375, but I cannot tell for sure. I am very much interested in at least a workaround for the non-startable VM I currently have, though.

Comment:

Yes, please provide all the logs.

Comment (Brian Sipos):

Created attachment 1106050 [details]: a segment of engine.log during the failed restoration.

Comment:

Moving from 4.0 alpha to 4.0 beta, since 4.0 alpha has already been released and the bug is not ON_QA.

Comment (Arik):

I followed the reproduction steps and didn't manage to reproduce it. Generally speaking, I don't think this case justifies a dedicated mechanism to remove the stateless snapshot; the system should remove it automatically whenever a stateless VM goes down, or when a VM powers up with a stateless snapshot that we didn't manage to remove before.

In this particular case, the stateless snapshot removal fails because we reach an inconsistent state in the database: on the one hand, we have a device for the disk, and on the other hand, the disk itself doesn't exist. I wonder how that could happen; I don't see a way to get to this state. Could it be that the disk was manually removed from the database in order to recover from the problem with the storage domain (what was the problem)?

In any case, without further information or logs that could explain how the disk was removed, we cannot make any further progress. So I'm closing this, as it couldn't be reproduced with the reported reproduction steps. We obviously reached a state we shouldn't get to, so if there is additional information about it, feel free to reopen.
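The inconsistency described above (a VM device row that references a disk which no longer exists) can be checked for directly with a join query against the engine database. The sketch below models the situation in SQLite with a deliberately simplified schema; the table and column names (`vm_device`, `base_disks`, `device_id`, `disk_id`) are assumptions loosely modeled on the ovirt-engine schema, not the exact production layout, and a real check would run against the engine's PostgreSQL database instead.

```python
# Sketch: detect "orphaned" disk devices, i.e. vm_device rows of type 'disk'
# whose device_id has no matching disk row. Schema is a simplified assumption
# for illustration, not the actual ovirt-engine schema.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE vm_device (device_id TEXT, vm_id TEXT, type TEXT);
CREATE TABLE base_disks (disk_id TEXT);

-- A healthy pairing: the device and its disk are both present.
INSERT INTO vm_device VALUES ('disk-a', 'vm-1', 'disk');
INSERT INTO base_disks VALUES ('disk-a');

-- The inconsistent case from this bug: a device row whose disk is gone.
INSERT INTO vm_device VALUES ('disk-b', 'vm-1', 'disk');
""")

orphans = cur.execute("""
SELECT d.vm_id, d.device_id
FROM vm_device AS d
LEFT JOIN base_disks AS b ON b.disk_id = d.device_id
WHERE d.type = 'disk' AND b.disk_id IS NULL
""").fetchall()

for vm_id, device_id in orphans:
    print(f"VM {vm_id}: device {device_id} references a missing disk")
```

A query of this shape (a LEFT JOIN filtered on a NULL match) is a generic way to surface exactly the state Arik describes, regardless of how the disk row came to be deleted.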