Description of problem:
Importing a VM from an OVA that has already been imported corrupts the disk of the first imported VM. Likewise, importing a VM from an OVA on a hosted engine (HE) where the original VM still exists renders the disk of the original VM "illegal".

Version-Release number of selected component (if applicable):
ovirt-engine-setup-4.2.5.2-0.1.el7ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Export a VM to OVA: webUI -> Compute -> Virtual Machines -> <vm> -> Menu -> Export as OVA
2. On a different HE, go to Compute -> Virtual Machines -> Menu -> Import, choose Source: Virtual Appliance (OVA), and follow the steps to import the VM
3. Rename the imported VM
4. Repeat step 2

Actual results:
The second import fails; the task displays a red "X" with "Importing VM host1 to Cluster Default". The disk of the original (first imported) VM under Compute -> Virtual Machines -> <vm> -> Disks shows "Status: Illegal".

Expected results:
The import succeeds, resulting in a new VM with a new UUID and a new disk.

Additional info:
It seems that importing a VM from OVA keeps the VM's UUID, but its disk gets a new UUID (as seen from the API), leaving the first imported VM with a failed, corrupt disk.

First imported (original) VM data:
https://rhvm.home.lan//ovirt-engine/api/vms/dfc25f65-cc4e-44b5-9356-79e70354261e/diskattachments
<disk href="/ovirt-engine/api/disks/05339fff-775b-403b-8c6b-05b4b450c7e9" id="05339fff-775b-403b-8c6b-05b4b450c7e9"/>

The second imported VM has a different UUID for its disk:
<disk href="/ovirt-engine/api/disks/90c9db9d-5625-47a8-bd3a-e78b4abeb1ac" id="90c9db9d-5625-47a8-bd3a-e78b4abeb1ac"/>

However, the UUID of the VM remained the same!
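The UUID mismatch above can be checked offline with a small sketch that parses the two diskattachments responses quoted in this report. This is only an illustration over the captured XML snippets; against a live engine you would GET the same /diskattachments URLs instead.

```python
# Sketch: confirm the two imports ended up with different disk UUIDs.
# The XML fragments are the captured API responses from this bug report.
import xml.etree.ElementTree as ET

FIRST_IMPORT = ('<disk href="/ovirt-engine/api/disks/'
                '05339fff-775b-403b-8c6b-05b4b450c7e9" '
                'id="05339fff-775b-403b-8c6b-05b4b450c7e9"/>')
SECOND_IMPORT = ('<disk href="/ovirt-engine/api/disks/'
                 '90c9db9d-5625-47a8-bd3a-e78b4abeb1ac" '
                 'id="90c9db9d-5625-47a8-bd3a-e78b4abeb1ac"/>')

def disk_id(xml_snippet):
    """Extract the disk UUID from a <disk> element."""
    return ET.fromstring(xml_snippet).get("id")

first = disk_id(FIRST_IMPORT)
second = disk_id(SECOND_IMPORT)

# Each import allocated a new disk UUID, while (per the report) the VM UUID
# stayed the same -- which is the inconsistency behind the corruption.
print(first != second)  # True
```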
The same happens when importing the VM from OVA on the HE where the original VM is still available, thus corrupting the original VM!
This bug has not been marked as blocker for oVirt 4.3.0. Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason: [Found non-acked flags: '{'rhevm-4.3-ga': '?'}', ] For more info please contact: rhv-devops
Please make sure to attach it to the Errata and do not move it manually to ON_QA.
Verified:
ovirt-engine-4.4.0-0.0.master.20190318180517.git576124b.el7
libvirt-client-4.5.0-10.el7_6.6.x86_64
qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
vdsm-4.40.0-96.gite291014.el7.x86_64
sanlock-3.6.0-1.el7.x86_64

Verification scenario:
1. Export a VM with 5 disks as OVA.
2. Import the OVA 10 times.
3. Verify the imports succeed.
4. Run the VMs and verify each is running with its 5 disks inside.
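The invariant the verification scenario exercises can be sketched as a small offline simulation: after the fix, every import should allocate fresh UUIDs for the VM and for each of its disks, so repeated imports never collide. The import_ova helper below is hypothetical, standing in for one OVA import.

```python
# Simulation of the fixed behavior (assumption: each import allocates a new
# VM UUID and new disk UUIDs). Not a live test against an engine.
import uuid

def import_ova(n_disks=5):
    """Hypothetical stand-in for one OVA import returning fresh UUIDs."""
    return {"vm_id": str(uuid.uuid4()),
            "disk_ids": [str(uuid.uuid4()) for _ in range(n_disks)]}

imports = [import_ova() for _ in range(10)]

vm_ids = [i["vm_id"] for i in imports]
disk_ids = [d for i in imports for d in i["disk_ids"]]

# 10 unique VMs and 50 unique disks: no later import can clobber an
# earlier VM's disk, which was the failure mode in this bug.
print(len(set(vm_ids)) == 10 and len(set(disk_ids)) == 50)
```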
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:1085