Bug 1733843 - Export to OVA fails if VM is running on the Host doing the export
Summary: Export to OVA fails if VM is running on the Host doing the export
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.3.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.4.0
Target Release: ---
Assignee: Liran Rotenberg
QA Contact: Nisim Simsolo
URL:
Whiteboard:
Depends On: 1785939 1825638
Blocks:
 
Reported: 2019-07-29 04:24 UTC by Germano Veit Michel
Modified: 2020-08-04 13:20 UTC
CC List: 6 users

Fixed In Version: ovirt-engine-4.4.0_beta3 rhv-4.4.0-29
Doc Type: Bug Fix
Doc Text:
Previously, exporting a virtual machine (VM) to an Open Virtual Appliance (OVA) archive failed if the VM was running on the host performing the export. Exporting a running VM creates a VM snapshot, and because the running VM still had its disk images in use, the RHV Manager could not tear the images down afterwards, so the export was reported as failed. The current release fixes this issue: if the VM is running on the export host, the RHV Manager skips tearing down the images, and exporting a running VM to OVA succeeds.
Clone Of:
Environment:
Last Closed: 2020-08-04 13:20:00 UTC
oVirt Team: Virt
Target Upstream Version:
Embargoed:
lsvaty: testing_plan_complete-




Links
Red Hat Product Errata RHSA-2020:3247 (last updated 2020-08-04 13:20:34 UTC)
oVirt gerrit 106860, master, MERGED: engine: remove VM down requirement for export OVA (last updated 2021-02-17 13:36:32 UTC)

Description Germano Veit Michel 2019-07-29 04:24:56 UTC
Description of problem:

When exporting an OVA, the engine attempts to tear down the image even if the host used for the export is also the host running the VM. The teardown fails because the running VM is still using the volumes. The engine does not handle the exception, so the OVA export task is reported as failed even though only the teardown failed; the OVA itself is created. A simplified sketch of this sequence is included after the log excerpts below.

1. Export to OVA
2019-07-29 14:12:14,404+10 INFO  [org.ovirt.engine.core.bll.CreateOvaCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [4ec10176] Running command: CreateOvaCommand internal: true.

2. Prepare (Why? The VM is running on the same host...)
2019-07-29 14:12:14,408+10 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [4ec10176] START, PrepareImageVDSCommand(HostName = rhel-h2, PrepareImageVDSCommandParameters:{hostId='ee5f0ee7-8c2c-4fc8-8b06-50e08242436b'}), log id: 6769ebe2

3. Export finishes

2019-07-29 14:12:43,075+10 INFO  [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [4ec10176] Ansible playbook command has exited with value: 0

4. Teardown (VM is running on the same host)
2019-07-29 14:12:43,086+10 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [4ec10176] START, TeardownImageVDSCommand(HostName = rhel-h2, ImageActionsVDSCommandParameters:{hostId='ee5f0ee7-8c2c-4fc8-8b06-50e08242436b'}), log id: 15033352

5. Teardown fails as the volume is in use

2019-07-29 14:13:03,061+10 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [4ec10176] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM rhel-h2 command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume e839d116-dc89-467e-a458-178706b6d581/7b68ad0a-e676-46c9-81ba-784f59e607f4 in use.\', \'  Logical volume e839d116-dc89-467e-a458-178706b6d581/e7a40ed3-b0a6-4eb6-9742-96262eb4989d in use.\']\\ne839d116-dc89-467e-a458-178706b6d581/[\'7b68ad0a-e676-46c9-81ba-784f59e607f4\', \'e7a40ed3-b0a6-4eb6-9742-96262eb4989d\']",)',)

6. OVA export Fails due to the exception from above

2019-07-29 14:13:03,072+10 ERROR [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [4ec10176] EngineException: CannotDeactivateLogicalVolume (Failed with error CannotDeactivateLogicalVolume and code 552): org.ovirt.engine.core.common.errors.EngineException: EngineException: CannotDeactivateLogicalVolume (Failed with error CannotDeactivateLogicalVolume and code 552)
        at org.ovirt.engine.core.bll.exportimport.ExportOvaCommand.createOva(ExportOvaCommand.java:117) [bll.jar:]
        at org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand.executeNextOperation(ExportVmToOvaCommand.java:224) [bll.jar:]
        at org.ovirt.engine.core.bll.exportimport.ExportVmToOvaCommand.performNextOperation(ExportVmToOvaCommand.java:216) [bll.jar:]
        at org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback.childCommandsExecutionEnded(SerialChildCommandsExecutionCallback.java:32) [bll.jar:]
        at org.ovirt.engine.core.bll.ChildCommandsCallbackBase.doPolling(ChildCommandsCallbackBase.java:77) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:175) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:109) [bll.jar:]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_212]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_212]
        at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
        at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_212]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_212]
        at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_212]
        at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent.jar:1.0.0.redhat-1]

2019-07-29 14:13:03,075+10 INFO  [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-5) [4ec10176] Command 'ExportVmToOva' id: '53165a01-742d-494a-ba82-f883004d5f7f' child commands '[cdd0ff4b-38a1-4e1b-b7ab-3d47170efb20]' executions were completed, status 'FAILED'


7. Export "failed"

2019-07-29 14:13:05,153+10 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-71) [] EVENT_ID: IMPORTEXPORT_EXPORT_VM_TO_OVA_FAILED(1,225), Failed to export Vm ASD as a Virtual Appliance to path /tmp/ASD.ova on Host rhel-h2
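
To make the sequence above concrete, here is a purely illustrative, self-contained Java sketch of why the command ends up FAILED even though the OVA file was already written: the teardown exception is raised after the Ansible playbook has finished, and nothing downgrades it to a cleanup warning. None of the class or method names below are real ovirt-engine code; they only mirror the log excerpts.

// Illustrative simulation of the failing sequence from the logs above.
public class OvaExportFailureSketch {

    static class CannotDeactivateLogicalVolume extends RuntimeException {
        CannotDeactivateLogicalVolume(String msg) { super(msg); }
    }

    static void prepareImage()  { System.out.println("PrepareImage: OK"); }
    static void runPlaybook()   { System.out.println("Ansible playbook exited with value: 0 (OVA written)"); }
    static void teardownImage() {
        // The VM running on the export host still holds the logical volumes open.
        throw new CannotDeactivateLogicalVolume("Logical volume ... in use (error 552)");
    }

    public static void main(String[] args) {
        try {
            prepareImage();
            runPlaybook();
            teardownImage();   // throws after the OVA already exists on disk
            System.out.println("ExportVmToOva: SUCCEEDED");
        } catch (CannotDeactivateLogicalVolume e) {
            // The exception is not treated as a post-export cleanup problem,
            // so the whole command is reported as FAILED.
            System.out.println("ExportVmToOva: FAILED (" + e.getMessage() + ")");
        }
    }
}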

Version-Release number of selected component (if applicable):
rhvm-4.3.4.3-0.1.el7.noarch
vdsm-4.30.17-1.el7ev.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create VM with disk on block storage
2. Start VM
3. Export as OVA using the same host running the VM

Actual results:
The OVA is created, but the engine marks the export as failed due to the exception raised during teardown

Expected results:
Do not tear down the image if the host is running the VM, or handle the exception gracefully (see the sketch below)
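
A minimal, hypothetical sketch of that expected behaviour, assuming the fix simply skips the image teardown when the VM is running on the export host. All names below are illustrative stand-ins, not the actual ovirt-engine classes:

import java.util.List;
import java.util.UUID;

// Sketch only: skip teardown when the export host is the one running the VM.
public class OvaExportTeardownSketch {

    static void finishExport(UUID exportHostId, UUID vmRunsOnHostId, List<UUID> imageIds) {
        // If the VM is up on the export host, its logical volumes are still in
        // use, so deactivating them would fail with CannotDeactivateLogicalVolume.
        if (exportHostId.equals(vmRunsOnHostId)) {
            System.out.println("VM is running on the export host; skipping image teardown");
            return;
        }
        for (UUID imageId : imageIds) {
            System.out.println("Tearing down image " + imageId + " on host " + exportHostId);
        }
    }

    public static void main(String[] args) {
        UUID host = UUID.randomUUID();
        // Same host runs the VM and the export: teardown is skipped.
        finishExport(host, host, List.of(UUID.randomUUID()));
        // VM runs elsewhere (or is down): teardown proceeds as before.
        finishExport(host, UUID.randomUUID(), List.of(UUID.randomUUID()));
    }
}

The merged gerrit change 106860 ("engine: remove VM down requirement for export OVA") together with the Doc Text above indicate the shipped fix behaves along these lines; the class above is only a stand-alone illustration.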

Comment 1 Daniel Gur 2019-08-28 13:12:15 UTC
sync2jira

Comment 2 Daniel Gur 2019-08-28 13:16:27 UTC
sync2jira

Comment 3 Michal Skrivanek 2020-03-26 08:16:29 UTC
Liran, anything more to do here? I assume not

Comment 4 Liran Rotenberg 2020-03-26 08:21:05 UTC
(In reply to Michal Skrivanek from comment #3)
> Liran, anything more to do here? I assume not

Nothing else.
Moving to modified.

Comment 6 Nisim Simsolo 2020-06-02 15:35:21 UTC
Verified:
ovirt-engine-4.4.1.1-0.5.el8ev
vdsm-4.40.17-1.el8ev.x86_64
libvirt-daemon-6.0.0-22.module+el8.2.1+6815+1c792dc8.x86_64
qemu-kvm-4.2.0-22.module+el8.2.1+6758+cb8d64c2.x86_64

Verification scenario:
1. Keep only one active host so that the VM is exported as OVA by the same host that is running it.
2. Export a running VM with a disk on block storage as OVA (an illustrative REST API sketch of this step follows the list).
   Verify that the VM is exported successfully.
3. Import the OVA (set a block storage domain as the destination).
   Run the imported VM and verify that it runs successfully.
4. Repeat steps 2-3, this time using an NFS storage domain.
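
For reference only, a sketch of how the export in step 2 can be driven through the engine REST API from Java. The action path (vms/{id}/exporttopathonhost), the XML body fields, and every URL, ID, credential and path below are assumptions based on the v4 API conventions, not values taken from this bug:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Sketch only: POST an export-to-OVA action to the engine REST API.
// Assumes Java 11+, a reachable engine with a trusted certificate, and
// placeholder IDs/credentials that must be replaced with real values.
public class ExportVmToOvaRequest {
    public static void main(String[] args) throws Exception {
        String engine = "https://engine.example.com/ovirt-engine/api"; // placeholder
        String vmId   = "VM-UUID";            // placeholder
        String hostId = "EXPORT-HOST-UUID";   // placeholder

        // Assumed XML body: which host writes the OVA, and where.
        String body = "<action>"
                + "<host id=\"" + hostId + "\"/>"
                + "<directory>/tmp</directory>"
                + "<filename>ASD.ova</filename>"
                + "</action>";

        String auth = Base64.getEncoder()
                .encodeToString("admin@internal:password".getBytes()); // placeholder

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(engine + "/vms/" + vmId + "/exporttopathonhost")) // assumed action path
                .header("Authorization", "Basic " + auth)
                .header("Content-Type", "application/xml")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}

The same export is available from the Administration Portal ("Export as OVA" on the VM); the raw HTTP form is used here only to keep the example self-contained.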

Comment 12 errata-xmlrpc 2020-08-04 13:20:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: RHV Manager (ovirt-engine) 4.4 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3247

