Bug 1658249

Summary: Importing a VM from an OVA that has already been imported fails, and the disk status of the first imported VM becomes Illegal
Product: Red Hat Enterprise Virtualization Manager
Component: ovirt-engine
Version: 4.2.5
Hardware: All
OS: Linux
Status: CLOSED ERRATA
Severity: high
Priority: high
Target Milestone: ovirt-4.3.1
Target Release: 4.3.0
Fixed In Version: ovirt-engine-4.3.1.1
Reporter: Ron van der Wees <rvdwees>
Assignee: shani <sleviim>
QA Contact: Nisim Simsolo <nsimsolo>
CC: amarchuk, eedri, frolland, nsimsolo, rbarry, Rhev-m-bugs, tnisan
Flags: lsvaty: testing_plan_complete-
Doc Type: If docs needed, set a value
Type: Bug
oVirt Team: Storage
Bug Depends On: 1684140
Last Closed: 2019-05-08 12:39:09 UTC

Description Ron van der Wees 2018-12-11 15:47:28 UTC
Description of problem:
Importing a VM from an OVA that has been imported before corrupts the disk of the first imported VM. Similarly, importing a VM from an OVA on an HE where the original VM still exists renders the disk of the original VM "illegal".

Version-Release number of selected component (if applicable):
ovirt-engine-setup-4.2.5.2-0.1.el7ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Export a VM to OVA:
   webUI -> Compute -> Virtual Machines -> <vm> -> Menu -> Export as OVA
2. On a different HE, Compute -> Virtual Machines -> Menu -> Import
   Source: Virtual Appliance (OVA)
   and follow the steps to import the VM
3. Rename the imported VM
4. Repeat step 2
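
For reference, a minimal API-level sketch of the export in step 1,
assuming the Python SDK (ovirt-engine-sdk4, 4.2+, where
export_to_path_on_host is available); credentials, host and VM names
are placeholders, not taken from this bug:

    # Hypothetical reproduction sketch; all names/credentials are placeholders.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://rhvm.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',        # placeholder
        ca_file='ca.pem',           # engine CA certificate
    )
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=host1')[0]

    # Step 1: export the VM as an OVA file on a host
    # (equivalent to "Export as OVA" in the web UI).
    vms_service.vm_service(vm.id).export_to_path_on_host(
        host=types.Host(name='host.example.com'),   # placeholder host
        directory='/tmp',
        filename='host1.ova',
    )
    connection.close()

Step 2 (the OVA import) is performed through the web UI import dialog
as described above.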

Actual results:
The second import fails; the task displays a red "X" with
"Importing VM host1 to Cluster Default".
The disk of the original (first imported) VM under
Compute -> Virtual Machines -> <vm> -> Disks
shows: "Status: Illegal"


Expected results:
The import should succeed, resulting in a new VM with a new UUID and a new disk.


Additional info:
It seems that a VM imported from OVA keeps its original UUID while its
disk gets a new UUID (as seen from the API), leaving the first imported
VM with a failed, corrupt disk.

Data of the first imported (original) VM:
https://rhvm.home.lan//ovirt-engine/api/vms/dfc25f65-cc4e-44b5-9356-79e70354261e/diskattachments
<disk href="/ovirt-engine/api/disks/05339fff-775b-403b-8c6b-05b4b450c7e9" id="05339fff-775b-403b-8c6b-05b4b450c7e9"/>

The second imported VM has a different UUID for its disk:
<disk href="/ovirt-engine/api/disks/90c9db9d-5625-47a8-bd3a-e78b4abeb1ac" id="90c9db9d-5625-47a8-bd3a-e78b4abeb1ac"/>

However, the UUID of the VM remained the same!
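
A minimal sketch of the same API check (assuming ovirt-engine-sdk4;
the VM names are placeholders), listing the disk attachments of both
imported VMs to compare the UUIDs:

    # Hedged sketch: compare VM and disk UUIDs of the two imports.
    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://rhvm.home.lan/ovirt-engine/api',
        username='admin@internal',
        password='password',   # placeholder
        ca_file='ca.pem',
    )
    vms_service = connection.system_service().vms_service()
    for name in ('host1', 'host1-renamed'):   # placeholder VM names
        vm = vms_service.list(search='name=%s' % name)[0]
        atts = vms_service.vm_service(vm.id).disk_attachments_service().list()
        # The VM keeps its UUID, but each import creates a new disk UUID.
        print(name, 'vm:', vm.id, 'disks:', [a.disk.id for a in atts])
    connection.close()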

Comment 1 Ron van der Wees 2018-12-11 15:50:48 UTC
The same happens when importing the VM from OVA on the HE where the
original VM is still available, thus corrupting the original VM!

Comment 2 Sandro Bonazzola 2019-01-28 09:44:38 UTC
This bug has not been marked as blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 5 RHV bug bot 2019-02-21 17:26:24 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{'rhevm-4.3-ga': '?'}', ]

For more info please contact: rhv-devops

Comment 8 Eyal Edri 2019-02-24 11:40:19 UTC
Please make sure to attach it to the errata and not to move it manually to ON_QA.

Comment 10 Nisim Simsolo 2019-03-19 12:09:06 UTC
Verified:
ovirt-engine-4.4.0-0.0.master.20190318180517.git576124b.el7
libvirt-client-4.5.0-10.el7_6.6.x86_64
qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
vdsm-4.40.0-96.gite291014.el7.x86_64
sanlock-3.6.0-1.el7.x86_64

Verification scenario:
1. Export VM with 5 disks as OVA.
2. Import OVA 10 times.
3. Verify imports succeed.
4. Run VMs, verify VMs are running with 5 disks inside each one.
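
A hedged sketch of automating the checks in steps 3-4 (assuming
ovirt-engine-sdk4; the VM name prefix is a placeholder):

    # Verify each imported VM is up and carries 5 attached disks.
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://rhvm.example.com/ovirt-engine/api',
        username='admin@internal',
        password='password',   # placeholder
        ca_file='ca.pem',
    )
    vms_service = connection.system_service().vms_service()
    for vm in vms_service.list(search='name=ova-import*'):   # placeholder prefix
        atts = vms_service.vm_service(vm.id).disk_attachments_service().list()
        assert len(atts) == 5, '%s has %d disks' % (vm.name, len(atts))
        assert vm.status == types.VmStatus.UP, '%s is not running' % vm.name
    connection.close()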

Comment 12 errata-xmlrpc 2019-05-08 12:39:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:1085

Comment 13 Daniel Gur 2019-08-28 13:15:28 UTC
sync2jira

Comment 14 Daniel Gur 2019-08-28 13:21:16 UTC
sync2jira