Bug 1235179 - Source VM is deleted after failed cloning attempt
Summary: Source VM is deleted after failed cloning attempt
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.5.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ovirt-3.6.0-rc
Target Release: 3.6.0
Assignee: Tomas Jelinek
QA Contact: Nisim Simsolo
URL:
Whiteboard:
Depends On:
Blocks: 1236608
 
Reported: 2015-06-24 08:28 UTC by akotov
Modified: 2019-08-15 04:48 UTC
CC: 14 users

Fixed In Version: 3.6.0-4 alpha3
Doc Type: Bug Fix
Doc Text:
Previously, if an error occurred while copying the virtual machine's images during the Clone VM operation, the cleanup incorrectly deleted the source virtual machine instead of the destination. Now, after a failed attempt to clone a virtual machine, the source virtual machine and its disks remain in the system, and the incomplete destination virtual machine is deleted.
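The corrected cleanup behavior can be sketched as follows. This is an illustrative simplification, not the actual ovirt-engine code: the class and method names are hypothetical, and the point is only which VM id the rollback targets (the source id below is taken from the log in the description).

```java
import java.util.UUID;

// Illustrative sketch only -- hypothetical names, not the real CloneVmCommand.
public class CloneCleanupSketch {
    // Id of the VM being cloned (from the log in this report).
    static final UUID SOURCE_VM_ID =
            UUID.fromString("e033eba7-b8c7-4110-adee-2a3372e00fb8");
    // Id of the partially created destination VM.
    static final UUID DEST_VM_ID = UUID.randomUUID();

    // Before the fix: rollback resolved "the command's VM", which still pointed
    // at the source, so a failed clone removed the original VM.
    static UUID buggyRollbackTarget() {
        return SOURCE_VM_ID;
    }

    // After the fix: rollback removes only the incomplete destination VM,
    // leaving the source VM and its disks untouched.
    static UUID fixedRollbackTarget() {
        return DEST_VM_ID;
    }

    public static void main(String[] args) {
        System.out.println(fixedRollbackTarget().equals(DEST_VM_ID));   // true
        System.out.println(fixedRollbackTarget().equals(SOURCE_VM_ID)); // false
    }
}
```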
Clone Of:
Cloned To: 1236608
Environment:
Last Closed: 2016-03-09 21:08:58 UTC
oVirt Team: Virt




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2016:0376 normal SHIPPED_LIVE Red Hat Enterprise Virtualization Manager 3.6.0 2016-03-10 01:20:52 UTC
oVirt gerrit 42911 master MERGED core: Source VM is deleted after failed cloning attempt Never
oVirt gerrit 42967 ovirt-engine-3.5 MERGED core: Source VM is deleted after failed cloning attempt Never

Description akotov 2015-06-24 08:28:57 UTC
Description of problem:

Cloning of a VM is attempted:

2015-06-22 12:17:01,981 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (DefaultQuartzScheduler_Worker-38) [589ee957] START, ConnectStoragePoolVDSCommand(HostName = hrz-rhevnode-0102, HostId = 1cfcdc21-15a6-4472-87b4-45b896d21c55, vdsId = 1cfcdc21-15a6-4472-87b4-45b896d21c55, storagePoolId = 52ea0f9b-e031-4643-8e55-5d413ff9f143, masterVersion = 1), log id: 330e84aa
2015-06-22 12:17:02,271 INFO  [org.ovirt.engine.core.bll.CloneVmCommand] (ajp-/127.0.0.1:8702-13) [31b80e04] Running command: CloneVmCommand internal: false. Entities affected :  ID: e033eba7-b8c7-4110-adee-2a3372e00fb8 Type: VMAction group CREATE_VM with role type USER


An exception occurs:

2015-06-22 12:39:28,720 WARN  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (org.ovirt.thread.pool-7-thread-31) Trying to release exclusive lock which does not exist, lock key: plantilla_perconaVM_NAME
2015-06-22 12:39:28,720 INFO  [org.ovirt.engine.core.bll.CloneVmCommand] (org.ovirt.thread.pool-7-thread-31) Lock freed to object EngineLock [exclusiveLocks= key: plantilla_percona value: VM_NAME
, sharedLocks= key: 7c41630f-843c-4810-86ae-d8df2df5d6c3 value: DISK
key: 1ce743ee-3c81-499b-977a-1e140dc64708 value: DISK
key: e033eba7-b8c7-4110-adee-2a3372e00fb8 value: VM
]
2015-06-22 12:39:28,759 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase] (org.ovirt.thread.pool-7-thread-31) Error during log command: org.ovirt.engine.core.bll.CloneVmCommand. Exception java.lang.NullPointerException: java.lang.NullPointerException
        at org.ovirt.engine.core.bll.CloneVmCommand.getVm(CloneVmCommand.java:140) [bll.jar:]
        at org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase.getVds(AuditLogableBase.java:353) [dal.jar:]
        at org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase.getVdsIdRef(AuditLogableBase.java:237) [dal.jar:]
        at org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector.saveToDb(AuditLogDirector.java:140) [dal.jar:]
        at org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector.log(AuditLogDirector.java:107) [dal.jar:]
        at org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector.log(AuditLogDirector.java:100) [dal.jar:]
        at org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector.log(AuditLogDirector.java:96) [dal.jar:]
        at org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogableBase.log(AuditLogableBase.java:459) [dal.jar:]
        at org.ovirt.engine.core.bll.CommandBase.logCommand(CommandBase.java:1443) [bll.jar:]
        at org.ovirt.engine.core.bll.CommandBase.endAction(CommandBase.java:513) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.DecoratedCommand.endAction(DecoratedCommand.java:17) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CoCoAsyncTaskHelper.endAction(CoCoAsyncTaskHelper.java:337) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl.endAction(CommandCoordinatorImpl.java:383) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.endCommandAction(CommandAsyncTask.java:149) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.access$000(CommandAsyncTask.java:28) [bll.jar:]
        at org.ovirt.engine.core.bll.tasks.CommandAsyncTask$1.run(CommandAsyncTask.java:109) [bll.jar:]
        at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:90) [utils.jar:]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) [rt.jar:1.7.0_79]
        at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_79]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_79]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_79]
        at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_79]

2015-06-22 12:38:00,103 ERROR [org.ovirt.engine.core.bll.CloneVmCommand] (org.ovirt.thread.pool-7-thread-31) [31b80e04] Ending command with failure: org.ovirt.engine.core.bll.CloneVmCommand
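The secondary NullPointerException in the trace above comes from audit logging dereferencing a VM reference that is already gone after the failed clone. The sketch below uses simplified, hypothetical stand-ins (not the real AuditLogableBase/CloneVmCommand classes) to show the unguarded lookup versus a null-safe variant:

```java
// Illustrative sketch only -- simplified stand-ins, not the actual
// AuditLogableBase / CloneVmCommand code.
public class AuditNpeSketch {
    static class Vm {
        String runOnVds = "hrz-rhevnode-0102";
    }

    // After the failed clone, the command's cached VM reference is null.
    static Vm cachedVm = null;

    // Unguarded lookup, as implied by the trace: dereferences a null VM.
    static String getVdsUnguarded() {
        return cachedVm.runOnVds; // throws NullPointerException here
    }

    // Null-safe variant: audit logging degrades gracefully instead of throwing.
    static String getVdsGuarded() {
        return cachedVm == null ? null : cachedVm.runOnVds;
    }

    public static void main(String[] args) {
        System.out.println(getVdsGuarded()); // prints "null"
        try {
            getVdsUnguarded();
        } catch (NullPointerException e) {
            System.out.println("caught NPE, as in the log above");
        }
    }
}
```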

Then, without user intervention, RHEV purges the source VM:

2015-06-22 13:30:37,594 INFO  [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-50) [5c373536] Running command: ProcessOvfUpdateForStoragePoolCommand internal: true. Entities affected :  ID: 7eb08a94-6a36-46c8-846f-7857c9f6bdda Type: StoragePool
2015-06-22 13:30:37,595 INFO  [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-50) [5c373536] Attempting to update VM OVFs in Data Center dc-rhev-0050
2015-06-22 13:30:37,615 INFO  [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-50) [5c373536] Successfully updated VM OVFs in Data Center dc-rhev-0050
2015-06-22 13:30:37,615 INFO  [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-50) [5c373536] Attempting to update template OVFs in Data Center dc-rhev-0050
2015-06-22 13:30:37,617 INFO  [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-50) [5c373536] Successfully updated templates OVFs in Data Center dc-rhev-0050
2015-06-22 13:30:37,617 INFO  [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-50) [5c373536] Attempting to remove unneeded template/vm OVFs in Data Center dc-rhev-0050
2015-06-22 13:30:37,618 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.RemoveVMVDSCommand] (DefaultQuartzScheduler_Worker-50) [5c373536] START, RemoveVMVDSCommand( storagePoolId = 7eb08a94-6a36-46c8-846f-7857c9f6bdda, ignoreFailoverLimit = false, storageDomainId = 00000000-0000-0000-0000-000000000000, vmGuid = e033eba7-b8c7-4110-adee-2a3372e00fb8), log id: 6c7e2393
2015-06-22 13:30:37,657 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.RemoveVMVDSCommand] (DefaultQuartzScheduler_Worker-50) [5c373536] FINISH, RemoveVMVDSCommand, log id: 6c7e2393
2015-06-22 13:30:37,665 INFO  [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-50) [5c373536] Successfully removed unneeded template/vm OVFs in Data Center dc-rhev-0050
2015-06-22 13:30:37,666 INFO  [org.ovirt.engine.core.bll.ProcessOvfUpdateForStoragePoolCommand] (DefaultQuartzScheduler_Worker-50) [5c373536] Lock freed to object EngineLock [exclusiveLocks= key: 7eb08a94-6a36-46c8-846f-7857c9f6bdda value: OVF_UPDATE
, sharedLocks= ]



Version-Release number of selected component (if applicable):

rhevm-backend-3.5.1-0.4.el6ev.noarch


Actual results:

The VM disappears from RHEV-M, with its logical volumes still left on the storage.

Expected results:

No command was issued to remove the VM; the VM should stay in RHEV after a successful or failed clone operation.

Additional info:

Comment 2 Nisim Simsolo 2015-07-07 14:05:07 UTC
For bug verification procedure, see test case: https://polarion.engineering.redhat.com/polarion/#/project/RHEVM3/workitem?id=RHEVM3-12336

Comment 4 Nisim Simsolo 2015-09-16 06:53:52 UTC
Verified:
rhevm-3.6.0-0.13.master.el6
vdsm-4.17.6-1.el7ev.noarch
qemu-kvm-rhev-2.3.0-19.el7.x86_64
sanlock-3.2.4-1.el7.x86_64
libvirt-client-1.2.17-5.el7.x86_64

Procedure: 
Use a VM with iSCSI storage; create a clone and, during clone creation, log out the host's iSCSI session.
Verify the cloned VM is deleted along with its disks.
Verify the source VM still exists and can be run.

Comment 6 errata-xmlrpc 2016-03-09 21:08:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-0376.html

