Bug 1024811 - [engine] Failure during live snapshot leaves vm configured to use new volume on next start
Status: CLOSED DUPLICATE of bug 1018867
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.3.0
Assigned To: Liron Aravot
QA Contact: Aharon Canan
Whiteboard: storage
Keywords: Triaged
Depends On:
Blocks:
 
Reported: 2013-10-30 09:23 EDT by Gadi Ickowicz
Modified: 2016-02-10 11:52 EST
11 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-11 16:46:42 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
amureini: needinfo-


Attachments
engine and vdsm logs (12.65 MB, application/x-bzip)
2013-10-30 09:23 EDT, Gadi Ickowicz

Description Gadi Ickowicz 2013-10-30 09:23:34 EDT
Created attachment 817457
engine and vdsm logs

Description of problem:
If a live snapshot fails while configuring the VM to use the new volume (after the volume was created successfully), the VM is still configured to use the new volume the next time it is started.
Also, the following message is displayed in the engine log:
2013-10-30 14:41:20,364 WARN  [org.ovirt.engine.core.bll.CreateAllSnapshotsFromVmCommand] (pool-4-thread-50) Wasnt able to live snapshot due to error: VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to SnapshotVDS, error = Snapshot failed (Failed with error SNAPSHOT_FAILED and code 48). VM will still be configured to the new created snapshot

Version-Release number of selected component (if applicable):
rhevm-3.3.0-0.28.beta1.el6ev.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a live snapshot of a VM and have it fail after volume creation (e.g. after the volume is created, block the connection to storage on the host running the VM)


Actual results:
VM is configured to run with the new volume on next start

Expected results:
If the VM could not be configured to switch to the new volume after it was created, the snapshot process should be considered "failed" and the snapshot deleted on the next VM start. The VM configuration should keep using the old volume.

Additional info:
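For illustration only, here is a minimal, self-contained Java sketch of the rollback behaviour described under "Expected results". All names are hypothetical and this is not the actual ovirt-engine code; the point is simply that the VM should be switched to the new volume only after the SnapshotVDS call succeeds, and should stay on the old volume otherwise.

public class LiveSnapshotRollbackSketch {

    enum SnapshotStatus { OK, FAILED }

    static class VmConfig {
        String activeVolumeId;
        VmConfig(String activeVolumeId) { this.activeVolumeId = activeVolumeId; }
    }

    static class SnapshotResult {
        final SnapshotStatus status;
        final String newVolumeId;
        SnapshotResult(SnapshotStatus status, String newVolumeId) {
            this.status = status;
            this.newVolumeId = newVolumeId;
        }
    }

    // Stand-in for the SnapshotVDS call: the new volume gets created, but the
    // live snapshot itself fails (as in the reproduction steps above).
    static SnapshotResult snapshotVds(VmConfig vm) {
        return new SnapshotResult(SnapshotStatus.FAILED, "new-volume-uuid");
    }

    static void createLiveSnapshot(VmConfig vm) {
        String oldVolumeId = vm.activeVolumeId;
        SnapshotResult result = snapshotVds(vm);
        if (result.status == SnapshotStatus.OK) {
            // Switch the VM to the new volume only after the VDS call succeeded.
            vm.activeVolumeId = result.newVolumeId;
        } else {
            // Expected behaviour per this report: keep the old volume active and
            // treat the new volume as garbage to be removed, instead of leaving
            // the VM pointed at it for the next start.
            vm.activeVolumeId = oldVolumeId;
            System.out.println("Live snapshot failed; VM stays on " + oldVolumeId
                    + ", new volume " + result.newVolumeId + " should be deleted.");
        }
    }

    public static void main(String[] args) {
        VmConfig vm = new VmConfig("old-volume-uuid");
        createLiveSnapshot(vm);
        System.out.println("Active volume after failed snapshot: " + vm.activeVolumeId);
    }
}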
Comment 1 Allon Mureinik 2013-11-07 08:13:32 EST
Liron, is this related to the recent changes you've been doing around that area?
Comment 2 Liron Aravot 2013-11-07 09:06:44 EST
Allon, nope.
Right now, in case of a failure in the live snapshot verb, the only treatment we have is a message to the user saying that the new volumes were created and that on the next VM restart it will start writing to them.

Of course this is not optimal, but handling this failure in a "smarter" way (e.g. inspecting the error returned from the live snapshot execution and acting accordingly) requires a few changes in the engine, and the scenario of it failing is also very rare, so IMO that's not 3.3 material.

Regardless, Fede is working on a patch to provide that "smarter" handling there -
http://gerrit.ovirt.org/#/c/20281/

IMO the severity can be reduced and it can be postponed.
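
For context only, a minimal self-contained Java sketch of what that "smarter" handling could look like: inspect the error returned by the live snapshot execution and act accordingly. All names are hypothetical; this is not the patch at http://gerrit.ovirt.org/#/c/20281/.

public class SmarterSnapshotFailureHandling {

    // Illustrative subset of error codes; 48 is the SNAPSHOT_FAILED code quoted
    // in the engine log in the description.
    enum VdsError { SNAPSHOT_FAILED, OTHER }

    interface SnapshotRollback {
        void deleteNewVolumes();
        void restorePreviousActiveVolume();
    }

    static void handleLiveSnapshotFailure(VdsError error, SnapshotRollback rollback) {
        switch (error) {
            case SNAPSHOT_FAILED:
                // Known, recoverable failure: undo the partial snapshot so the VM
                // keeps writing to its old volume on the next start.
                rollback.deleteNewVolumes();
                rollback.restorePreviousActiveVolume();
                break;
            default:
                // Unrecognised failure: surface it to the user explicitly rather
                // than silently leaving the VM configured to the new volume.
                System.out.println("Live snapshot failed with unexpected error: " + error);
        }
    }

    public static void main(String[] args) {
        handleLiveSnapshotFailure(VdsError.SNAPSHOT_FAILED, new SnapshotRollback() {
            @Override public void deleteNewVolumes() {
                System.out.println("deleting newly created volumes");
            }
            @Override public void restorePreviousActiveVolume() {
                System.out.println("VM configuration reverted to the previous volume");
            }
        });
    }
}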
Comment 3 Federico Simoncelli 2013-11-11 16:46:42 EST

*** This bug has been marked as a duplicate of bug 1018867 ***
