Bug 908327

Summary: Trying to import a template again after a previously failed import attempt results in 'Error while executing action: Cannot copy Template. The Storage Domain already contains the target disk(s)'
Product: Red Hat Enterprise Virtualization Manager
Reporter: Sean Cohen <scohen>
Component: ovirt-engine
Assignee: Liron Aravot <laravot>
Status: CLOSED ERRATA
QA Contact: Jakub Libosvar <jlibosva>
Severity: high
Docs Contact:
Priority: high
Version: 3.1.3
CC: abaron, acanan, acathrow, amureini, iheim, jkt, laravot, lpeer, pzhukov, Rhev-m-bugs, yeylon
Target Milestone: ---
Flags: laravot: needinfo-
Target Release: 3.3.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: storage
Fixed In Version: is1
Doc Type: Bug Fix
Doc Text:
Previously, if importing a virtual machine failed for any reason, any subsequent attempt to import it also failed, stating "the Storage Domain already contains the target disk(s)". Only manually removing the disks from the storage domain allowed the virtual machine to be re-imported. Now, if a virtual machine fails to import correctly, its disks are removed from the storage domain automatically and the virtual machine can be re-imported.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2014-01-21 17:13:12 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1019461
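
The Doc Text above describes the fix as an automatic rollback of partially copied disks when an import fails. A minimal sketch of that cleanup-on-failure pattern, in Python for illustration only; copy_disk() and remove_disk() are hypothetical stand-ins, not the actual ovirt-engine command classes:

def copy_disk(disk, domain):
    """Hypothetical stand-in for the engine's disk-copy step."""
    raise NotImplementedError

def remove_disk(disk, domain):
    """Hypothetical stand-in for the engine's disk-removal step."""

def import_vm(disks, target_domain):
    copied = []
    try:
        for disk in disks:
            copy_disk(disk, target_domain)  # may fail partway through
            copied.append(disk)
    except Exception:
        # Roll back: delete any disks already created on the target
        # domain so a later re-import does not fail with
        # "The Storage Domain already contains the target disk(s)".
        for disk in copied:
            remove_disk(disk, target_domain)
        raise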

Description Sean Cohen 2013-02-06 12:49:58 UTC
See bug 890922 for description

Comment 1 Sean Cohen 2013-02-06 12:55:06 UTC
Workaround: the user can avoid the collision by importing the template as a clone.
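
A sketch of that workaround against the RHEV REST API, assuming the import action accepts a clone element as described in the 3.x REST API documentation (check the documentation for your version); the engine URL, credentials, and UUIDs are placeholders:

import requests

ENGINE = "https://rhevm.example.com/api"   # placeholder
AUTH = ("admin@internal", "password")      # placeholder
EXPORT_DOMAIN = "EXPORT_DOMAIN_UUID"       # placeholder
TEMPLATE = "TEMPLATE_UUID"                 # placeholder

# Importing as a clone gives the copied disks new image IDs, so they
# do not collide with the leftovers of the earlier failed import.
action = """
<action>
    <storage_domain id="TARGET_DOMAIN_UUID"/>
    <cluster id="CLUSTER_UUID"/>
    <clone>true</clone>
</action>
"""

resp = requests.post(
    ENGINE + "/storagedomains/" + EXPORT_DOMAIN
           + "/templates/" + TEMPLATE + "/import",
    data=action,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab setup; verify the CA certificate in production
)
resp.raise_for_status()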

Comment 2 Ayal Baron 2013-03-20 10:38:43 UTC
Liron, aren't you taking care of this issue already? (if so, please close as duplicate)

Comment 3 Jakub Libosvar 2013-08-13 10:20:27 UTC
Verified with vdsm-4.12.0-52.gitce029ba.el6ev.x86_64 and rhevm-3.3.0-0.14.master.el6ev.noarch.

Started an import of a VM with 1GB and 5GB disks.
After the copy of the 1GB disk finished, I restarted vdsm:
279e939c-2c96-4886-9255-2e4d4df3b854:
total 1.1G
-rw-rw----. 1 vdsm kvm 1.0G Aug 13 11:59 d42a2b70-fbc9-413f-b170-db90da17ebd3
-rw-rw----. 1 vdsm kvm 1.0M Aug 13 11:58 d42a2b70-fbc9-413f-b170-db90da17ebd3.lease
-rw-r--r--. 1 vdsm kvm  273 Aug 13 11:59 d42a2b70-fbc9-413f-b170-db90da17ebd3.meta

a80181cc-dd83-4950-aef1-f1f0bbe7378b:
total 1.2G
-rw-rw----. 1 vdsm kvm 1.2G Aug 13 12:00 8b767946-837e-465b-a50e-844305d768cc
-rw-rw----. 1 vdsm kvm 1.0M Aug 13 11:58 8b767946-837e-465b-a50e-844305d768cc.lease
-rw-r--r--. 1 vdsm kvm  274 Aug 13 11:58 8b767946-837e-465b-a50e-844305d768cc.meta


After the DC went up again, there were no remains on the storage domain:
[root@slot-5 images]# ll
total 0


The VM could then be imported again successfully.
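
A minimal sketch of the check performed above, assuming the storage domain is mounted under the usual /rhev/data-center path (the path below is a placeholder); it only confirms that no leftover image directories remain after the DC recovers:

import os

# Placeholder: the images directory of the target storage domain.
images_dir = "/rhev/data-center/mnt/<server:_export>/<sd_uuid>/images"

leftovers = os.listdir(images_dir)
if leftovers:
    print("leftover image directories:", leftovers)
else:
    print("storage domain is clean; the VM can be re-imported")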

Comment 4 Charlie 2013-11-28 00:21:39 UTC
This bug is currently attached to errata RHEA-2013:15231. If this change is not to be documented in the text for this errata, please either remove it from the errata or set the requires_doc_text flag to minus (-); if you do not have permission to alter the flag, leave a "Doc Text" value of "--no tech note required".

Otherwise to aid in the development of relevant and accurate release documentation, please fill out the "Doc Text" field above with these four (4) pieces of information:

* Cause: What actions or circumstances cause this bug to present.
* Consequence: What happens when the bug presents.
* Fix: What was done to fix the bug.
* Result: What now happens when the actions or circumstances above occur. (NB: this is not the same as 'the bug doesn't present anymore')

Once filled out, please set the "Doc Type" field to the appropriate value for the type of change made and submit your edits to the bug.

For further details on the Cause, Consequence, Fix, Result format please refer to:

https://bugzilla.redhat.com/page.cgi?id=fields.html#cf_release_notes 

Thanks in advance.

Comment 6 errata-xmlrpc 2014-01-21 17:13:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0038.html