Bug 1024811

Summary: [engine] Failure during live snapshot leaves vm configured to use new volume on next start
Product: Red Hat Enterprise Virtualization Manager
Reporter: Gadi Ickowicz <gickowic>
Component: ovirt-engine
Assignee: Liron Aravot <laravot>
Status: CLOSED DUPLICATE
QA Contact: Aharon Canan <acanan>
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.3.0
CC: acathrow, amureini, fsimonce, iheim, laravot, lpeer, michal.skrivanek, nlevinki, Rhev-m-bugs, shyu, yeylon
Target Milestone: ---
Keywords: Triaged
Target Release: 3.3.0
Flags: amureini: needinfo-
Hardware: Unspecified
OS: Unspecified
Whiteboard: storage
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-11 21:46:42 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: engine and vdsm logs (flags: none)

Description Gadi Ickowicz 2013-10-30 13:23:34 UTC
Created attachment 817457 [details]
engine and vdsm logs

Description of problem:
If a live snapshot fails while attempting to configure the VM to use the new volume (after the volume itself was created successfully), the VM remains configured to use the new volume the next time it is started.
Also, the following message is displayed in the engine log:
2013-10-30 14:41:20,364 WARN  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (pool-4-thread-50) Wasnt able to live snapshot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed (Failed with error SNAPSHOT_FAILED and code 48). VM will still be configured to the new created snapshot

Version-Release number of selected component (if applicable):
rhevm-3.3.0-0.28.beta1.el6ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a live snapshot of a VM and have it fail after volume creation (e.g. once the volume is created, block the connection from the host running the VM to the storage)


Actual results:
VM is configured to run with the new volume on next start

Expected results:
If the VM could not be switched to the new volume after it was created, the snapshot process should be considered failed and the snapshot deleted on the next VM start. The VM configuration should keep pointing at the old volume.
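The expected rollback behaviour can be sketched as follows. This is a hypothetical, simplified model, not the actual ovirt-engine code: `live_snapshot`, `vm_config`, and the callback names are all illustrative.

```python
# Hypothetical sketch of the expected flow: if switching the running VM to
# the new volume fails, the engine-side config must keep the old volume.
# None of these names come from the real ovirt-engine codebase.

class SnapshotError(Exception):
    """Stands in for the VDSErrorException seen in the engine log."""

def live_snapshot(vm_config, create_volume, switch_vm):
    """Create a new volume, then switch the running VM to write to it.

    If the switch fails (the case reported in this bug), the VM config
    is left pointing at the old volume instead of the new one.
    """
    old_volume = vm_config["active_volume"]
    new_volume = create_volume(old_volume)     # succeeds in this scenario
    try:
        switch_vm(new_volume)                  # the step that fails here
        vm_config["active_volume"] = new_volume
    except SnapshotError:
        # Roll back: never leave the config referencing a volume the
        # running VM was not actually switched to.
        vm_config["active_volume"] = old_volume
    return vm_config["active_volume"]
```

The buggy behaviour described above corresponds to setting `active_volume` to the new volume before the switch is confirmed, with no rollback on failure.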

Additional info:

Comment 1 Allon Mureinik 2013-11-07 13:13:32 UTC
Liron, is this related to the recent changes you've been doing around that area?

Comment 2 Liron Aravot 2013-11-07 14:06:44 UTC
Allon, nope.
Currently, on a failure in the live snapshot verb, the only handling we have is a message to the user saying that the new volumes were created and that the VM will start writing to them on its next restart.

Of course this is not optimal, but handling this failure in a "smarter" way (e.g. inspecting the error returned by the live snapshot execution and acting accordingly) requires a few changes in the engine. Since the scenario of hitting this failure is also very rare, IMO that's not 3.3 material.
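The "smarter" handling mentioned here (dispatch on the returned error instead of always warning) could look roughly like this. This is purely an illustrative sketch: the function, the error table, and the callbacks are invented for this example; only error code 48 (SNAPSHOT_FAILED) comes from this bug's log.

```python
# Hypothetical sketch: branch on the VDSM error code from the live snapshot
# verb instead of unconditionally warning the user. Names are illustrative.

# Errors where we know the VM never switched volumes, so reverting the
# engine-side config is safe. 48/SNAPSHOT_FAILED is the code in this bug.
RECOVERABLE_ERRORS = {48: "SNAPSHOT_FAILED"}

def handle_snapshot_failure(error_code, revert_config, warn_user):
    """Choose a recovery action based on the reported error code."""
    if error_code in RECOVERABLE_ERRORS:
        # Safe to revert: on next start the VM keeps using the old volume.
        revert_config()
        return "reverted"
    # Unknown state: fall back to today's behaviour and warn the user.
    warn_user()
    return "warned"
```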

Regardless, Fede is working on a patch to provide that "smarter" handling there -
http://gerrit.ovirt.org/#/c/20281/

IMO the severity can be reduced and it can be postponed.

Comment 3 Federico Simoncelli 2013-11-11 21:46:42 UTC

*** This bug has been marked as a duplicate of bug 1018867 ***