Bug 1232396 - Failed cleanup of disk entry from database after failed disk copy operation
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: General
Version: 3.6.0
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-3.6.1
Target Release: 3.6.1
Assignee: Liron Aravot
QA Contact: Elad
URL:
Whiteboard: storage
Depends On:
Blocks: 1282693 1282694 1284250
 
Reported: 2015-06-16 16:09 UTC by Kevin Alon Goldblatt
Modified: 2016-05-08 08:23 UTC (History)
11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1282693 1284250 (view as bug list)
Environment:
Last Closed: 2015-12-16 12:22:23 UTC
oVirt Team: Storage
rule-engine: ovirt-3.6.z+
rule-engine: exception+
ylavi: planning_ack+
amureini: devel_ack+
acanan: testing_ack+


Attachments
engine, vdsm, server logs (344.56 KB, application/x-gzip)
2015-06-16 16:14 UTC, Kevin Alon Goldblatt


Links
System ID Priority Status Summary Last Updated
oVirt gerrit 48635 master MERGED core: MoveOrCopyDisk - attempt to perform rollback for vm/floating disk Never
oVirt gerrit 48928 ovirt-engine-3.6 MERGED core: MoveOrCopyDisk - attempt to perform rollback for vm/floating disk Never
oVirt gerrit 49091 ovirt-engine-3.6.1 MERGED core: MoveOrCopyDisk - attempt to perform rollback for vm/floating disk Never

Description Kevin Alon Goldblatt 2015-06-16 16:09:54 UTC
Version-Release number of selected component (if applicable):
v3.6
ovirt-engine-3.6.0-0.0.master.20150519172219.git9a2e2b3.el6.noarch
vdsm-4.17.0-822.git9b11a18.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with 4 disks: block (preallocated and thin) and NFS (preallocated and thin)
2. From the Disks tab, select the block preallocated disk and press the Copy option; the copy operation starts
3. Restart vdsm on the host right after starting the copy operation; the copy operation fails as expected
4. Upon investigation, the new image is displayed in the Disks tab despite the fact that the copy operation failed. The same disk does not, however, appear on the host.
The cleanup after the failed operation was therefore not successful: the disk was recorded in the database, but after the failed creation it was not removed from the database during cleanup.

Actual results:
During cleanup of the failed disk copy operation, the entry for the new disk is not removed from the database

Expected results:
The entry should have been removed from the database during the cleanup operation
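In other words, recording the new image in the database needs a compensation step that runs when the copy fails. A minimal Python sketch of that idea (all names here are hypothetical; the actual engine code is Java and uses its own compensation framework):

```python
# Toy model of the missing compensation step (hypothetical names; the real
# engine records disks via Java DAOs, not a Python set).

class DiskDb:
    """Stand-in for the engine database's disk records."""
    def __init__(self):
        self.disks = set()

    def add(self, disk_id):
        self.disks.add(disk_id)

    def remove(self, disk_id):
        self.disks.discard(disk_id)


def copy_disk(db, disk_id, do_copy):
    """Record the new disk, run the copy, and roll the record back on failure."""
    db.add(disk_id)           # new image appears in the Disks tab immediately
    try:
        do_copy()
    except Exception:
        db.remove(disk_id)    # the compensation this bug showed was missing
        raise
```

Without the `except` branch, a copy that dies mid-flight (e.g. vdsm restarted) leaves the record behind, which is exactly the stale Disks-tab entry reported above.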


Additional info:

Engine.log
--------------
2015-06-16 18:11:57,122 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (org.ovirt.thread.pool-12-thread-23) [4a5644cb] Adding task '0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd' (Parent Command 'MoveOrCopyDisk', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2015-06-16 18:11:57,178 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-12-thread-23) [4a5644cb] Correlation ID: 4a5644cb, Job ID: ff75d23e-1c4a-47cf-ad48-27a2fb7830c8, Call Stack: null, Custom Event ID: -1, Message: User admin@internal is copying template disk vm4_Disk1 to domain block1.
2015-06-16 18:11:57,180 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-12-thread-23) [4a5644cb] BaseAsyncTask::startPollingTask: Starting to poll task '0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd'.
2015-06-16 18:11:57,901 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler_Worker-44) [285b5dc1] Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2015-06-16 18:11:57,928 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler_Worker-44) [285b5dc1] SPMAsyncTask::PollTask: Polling task '0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd' (Parent Command 'MoveOrCopyDisk', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'running'.
2015-06-16 18:11:57,929 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler_Worker-44) [285b5dc1] Finished polling Tasks, will poll again in 10 seconds.
2015-06-16 18:12:05,436 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages
2015-06-16 18:12:05,445 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.Reactor] (SSL Stomp Reactor) [] Unable to process messages
2015-06-16 18:12:05,439 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ListVDSCommand] (DefaultQuartzScheduler_Worker-56) [511372f7] Command 'ListVDSCommand(HostName = blond-vdsh, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', hostId='d973d83f-c5c0-4278-b06f-41ab0517fd8a', vds='Host[blond-vdsh,d973d83f-c5c0-4278-b06f-41ab0517fd8a]'})' execution failed: VDSGenericException: VDSNetworkException: Connection reset by peer
2015-06-16 18:12:05,451 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl] (DefaultQuartzScheduler_Worker-56) [511372f7] Failed to invoke scheduled method vmsMonitoring: null
2015-06-16 18:12:07,211 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to /10.35.64.12
2015-06-16 18:12:07,213 WARN  [org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable] (SSL Stomp Reactor) [] Retry failed
2015-06-16 18:12:07,213 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (DefaultQuartzScheduler_Worker-58) [] Exception during connection
2015-06-16 18:12:07,218 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (DefaultQuartzScheduler_Worker-58) [] Command 'SpmStatusVDSCommand(HostName = blond-vdsh, SpmStatusVDSCommandParameters:{runAsync='true', hostId='d973d83f-c5c0-4278-b06f-41ab0517fd8a', storagePoolId='dbd815c4-9d7f-4dc9-be1a-5818d62b2f6c'})' execution failed: null
2015-06-16 18:12:07,222 INFO  [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-58) [772cb966] Running command: SetStoragePoolStatusCommand internal: true. Entities affected :  ID: dbd815c4-9d7f-4dc9-be1a-5818d62b2f6c Type: StoragePool
2015-06-16 18:12:07,226 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-58) [772cb966] Storage Pool 'dbd815c4-9d7f-4dc9-be1a-5818d62b2f6c' - Updating Storage Domain '6844608f-a89c-488a-8952-1baa77be3224' status from 'Active' to 'Unknown', reason: null
2015-06-16 18:12:07,228 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-58) [772cb966] Storage Pool 'dbd815c4-9d7f-4dc9-be1a-5818d62b2f6c' - Updating Storage Domain '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5' status from 'Active' to 'Unknown', reason: null
2015-06-16 18:12:07,230 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-58) [772cb966] Storage Pool 'dbd815c4-9d7f-4dc9-be1a-5818d62b2f6c' - Updating Storage Domain '785e2753-d14a-4906-8c30-4616aa9ee439' status from 'Active' to 'Unknown', reason: null
2015-06-16 18:12:07,231 INFO  [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (DefaultQuartzScheduler_Worker-58) [772cb966] Storage Pool 'dbd815c4-9d7f-4dc9-be1a-5818d62b2f6c' - Updating Storage Domain 'd433fe77-d333-4f81-b836-cc18f19a3551' status from 'Active' to 'Unknown', reason: null
2015-06-16 18:12:07,244 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-58) [772cb966] Correlation ID: 772cb966, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center dc1. Setting status to Non Responsive.
2015-06-16 18:12:07,245 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to /10.35.64.12
2015-06-16 18:12:07,246 WARN  [org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable] (SSL Stomp Reactor) [] Retry failed
2015-06-16 18:12:07,247 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (DefaultQuartzScheduler_Worker-58) [772cb966] Exception during connection
2015-06-16 18:12:07,251 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (DefaultQuartzScheduler_Worker-58) [772cb966] Command 'HSMGetAllTasksStatusesVDSCommand(HostName = blond-vdsh, VdsIdVDSCommandParametersBase:{runAsync='true', hostId='d973d83f-c5c0-4278-b06f-41ab0517fd8a'})' execution failed: null
2015-06-16 18:12:07,253 WARN  [org.ovirt.vdsm.jsonrpc.client.internal.ResponseWorker] (ResponseWorker) [] Exception thrown during message processing
2015-06-16 18:12:07,280 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to /10.35.64.12
2015-06-16 18:12:07,281 WARN  [org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable] (SSL Stomp Reactor) [] Retry failed
2015-06-16 18:12:07,282 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (DefaultQuartzScheduler_Worker-58) [772cb966] Exception during connection
2015-06-16 18:12:07,286 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (DefaultQuartzScheduler_Worker-58) [772cb966] Command 'SpmStatusVDSCommand(HostName = blond-vdsh, SpmStatusVDSCommandParameters:{runAsync='true', hostId='d973d83f-c5c0-4278-b06f-41ab0517fd8a', storagePoolId='dbd815c4-9d7f-4dc9-be1a-5818d62b2f6c'})' execution failed: null
2015-06-16 18:12:07,287 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler_Worker-58) [772cb966] hostFromVds::selectedVds - 'blond-vdsh', spmStatus returned null!
2015-06-16 18:12:07,938 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to /10.35.64.12
2015-06-16 18:12:07,939 WARN  [org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable] (SSL Stomp Reactor) [] Retry failed
2015-06-16 18:12:07,940 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (DefaultQuartzScheduler_Worker-4) [] Exception during connection
2015-06-16 18:12:07,944 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand] (DefaultQuartzScheduler_Worker-4) [] Command 'SpmStatusVDSCommand(HostName = blond-vdsh, SpmStatusVDSCommandParameters:{runAsync='true', hostId='d973d83f-c5c0-4278-b06f-41ab0517fd8a', storagePoolId='dbd815c4-9d7f-4dc9-be1a-5818d62b2f6c'})' execution failed: null
2015-06-16 18:12:07,945 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler_Worker-4) [] hostFromVds::selectedVds - 'blond-vdsh', spmStatus returned null!
2015-06-16 18:12:08,461 INFO  [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to /10.35.64.12
2015-06-16 18:12:08,462 WARN  [org.ovirt.vdsm.jsonrpc.client.utils.retry.Retryable] (SSL Stomp Reactor) [] Retry failed
2015-06-16 18:12:08,463 ERROR [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (DefaultQuartzScheduler_Worker-45) [65c43f48] Exception during connection
2015-06-16 18:12:08,465 WARN  [org.ovirt.engine.core.vdsbroker.VdsManager] (org.ovirt.thread.pool-12-thread-32) [65c43f48] Host 'blond-vdsh' is not responding. It will stay in Connecting state for a grace period of 80 seconds and after that an attempt to fence the host will be issued.
2015-06-16 18:12:08,467 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ListVDSCommand] (DefaultQuartzScheduler_Worker-45) [65c43f48] Command 'ListVDSCommand(HostName = blond-vdsh, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', hostId='d973d83f-c5c0-4278-b06f-41ab0517fd8a', vds='Host[blond-vdsh,d973d83f-c5c0-4278-b06f-41ab0517fd8a]'})' execution failed: java.net.ConnectException: Connection refused
2015-06-16 18:12:08,482 ERROR [org.ovirt.engine.core.utils.timer.SchedulerUtilQuartzImpl] (DefaultQuartzScheduler_Worker-45) [65c43f48] Failed to invoke scheduled method vmsMonitoring: null
2015-06-16 18:12:08,487 WARN  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-12-thread-32) [65c43f48] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Host blond-vdsh is not responding. It will stay in Connecting state for a grace period of 80 seconds and after that an attempt to fence the host will be issued.


Vdsm.log
-------------
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::ERROR::2015-06-16 18:12:04,867::image::820::Storage.Image::(copyCollapsed) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 815, in copyCollapsed
    dstVol.extend(newsize)
  File "/usr/share/vdsm/storage/blockVolume.py", line 277, in extend
    lvm.extendLV(self.sdUUID, self.volUUID, sizemb)
  File "/usr/share/vdsm/storage/lvm.py", line 1156, in extendLV
    _resizeLV("lvextend", vgName, lvName, size)
  File "/usr/share/vdsm/storage/lvm.py", line 1152, in _resizeLV
    raise se.LogicalVolumeExtendError(vgName, lvName, "%sM" % (size, ))
LogicalVolumeExtendError: Logical Volume extend failed: u'vgname=8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5 lvname=5502d82a-2790-4a1a-9eb8-bb83b3884281 newsize=2048M'
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::INFO::2015-06-16 18:12:04,869::blockVolume::398::Storage.Volume::(teardown) Tearing down volume 8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5/5853b8da-200d-41f6-b31a-35759147fbbb justme False
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,870::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_lvmActivationNS.5853b8da-200d-41f6-b31a-35759147fbbb'
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::ERROR::2015-06-16 18:12:04,870::image::703::Storage.Image::(__cleanupCopy) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 699, in __cleanupCopy
    srcVol.teardown(sdUUID=srcVol.sdUUID, volUUID=srcVol.volUUID)
  File "/usr/share/vdsm/storage/blockVolume.py", line 401, in teardown
    rmanager.releaseResource(lvmActivationNamespace, volUUID)
  File "/usr/share/vdsm/storage/resourceManager.py", line 631, in releaseResource
    "registered" % (namespace, name))
ValueError: Resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_lvmActivationNS.5853b8da-200d-41f6-b31a-35759147fbbb' is not currently registered
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,871::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_imageNS.1a9509be-4815-427c-91e3-c0d90219aae5'
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,871::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_imageNS.1a9509be-4815-427c-91e3-c0d90219aae5' (0 active users)
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,872::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_imageNS.1a9509be-4815-427c-91e3-c0d90219aae5' is free, finding out if anyone is waiting for it.
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,872::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_imageNS.1a9509be-4815-427c-91e3-c0d90219aae5', Clearing records.
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,872::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_imageNS.8157cb7e-2492-4fa1-bddf-88ece030126a'
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,873::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_imageNS.8157cb7e-2492-4fa1-bddf-88ece030126a' (0 active users)
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,873::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_imageNS.8157cb7e-2492-4fa1-bddf-88ece030126a' is free, finding out if anyone is waiting for it.
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,873::resourceManager::616::Storage.ResourceManager::(releaseResource) Trying to release resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_volumeNS.5853b8da-200d-41f6-b31a-35759147fbbb'
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,874::resourceManager::635::Storage.ResourceManager::(releaseResource) Released resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_volumeNS.5853b8da-200d-41f6-b31a-35759147fbbb' (0 active users)
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,874::resourceManager::641::Storage.ResourceManager::(releaseResource) Resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_volumeNS.5853b8da-200d-41f6-b31a-35759147fbbb' is free, finding out if anyone is waiting for it.
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,874::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_volumeNS.5853b8da-200d-41f6-b31a-35759147fbbb', Clearing records.
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::DEBUG::2015-06-16 18:12:04,874::resourceManager::649::Storage.ResourceManager::(releaseResource) No one is waiting for resource '8d8df86c-1014-4c2c-bbd0-4a2e3c45d9e5_imageNS.8157cb7e-2492-4fa1-bddf-88ece030126a', Clearing records.
0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd::ERROR::2015-06-16 18:12:04,875::task::863::Storage.TaskManager.Task::(_setError) Task=`0d3624fa-bbbf-4a4d-8ed4-bf2444818ecd`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 870, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 331, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/securable.py", line 77, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1502, in copyImage
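
The secondary ValueError above comes from the teardown path releasing an lvmActivationNS resource that was never registered (or was already released), so the cleanup step itself fails. A toy Python sketch of the strict-release behavior and an idempotent alternative a cleanup path could use (hypothetical class; vdsm's real resource manager lives in storage/resourceManager.py and is more involved):

```python
# Toy resource manager illustrating strict vs. idempotent release
# (hypothetical simplification of vdsm's Storage.ResourceManager).

class ResourceManager:
    def __init__(self):
        self._registered = set()

    def register(self, namespace, name):
        self._registered.add((namespace, name))

    def release(self, namespace, name):
        # Strict release: raises if the resource was never acquired, which
        # is what turned __cleanupCopy itself into a failure in the log above.
        if (namespace, name) not in self._registered:
            raise ValueError("Resource '%s.%s' is not currently registered"
                             % (namespace, name))
        self._registered.remove((namespace, name))

    def release_if_held(self, namespace, name):
        # Idempotent variant: safe to call from cleanup even if the
        # resource was already released or never acquired (illustrative).
        self._registered.discard((namespace, name))
```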

Comment 1 Kevin Alon Goldblatt 2015-06-16 16:14:31 UTC
Created attachment 1039565 [details]
engine, vdsm, server logs

added logs

Comment 2 Allon Mureinik 2015-06-17 08:07:24 UTC
Liron, worth taking a look when you rewrite this flow.

Comment 3 Red Hat Bugzilla Rules Engine 2015-10-19 11:02:23 UTC
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED status, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 4 Yaniv Lavi 2015-10-29 12:18:20 UTC
In oVirt, testing is done on a single release by default. Therefore I'm removing the 4.0 flag. If you think this bug must be tested in 4.0 as well, please re-add the flag. Please note we might not have the testing resources to handle the 4.0 clone.

Comment 5 Yaniv Lavi 2015-11-22 14:23:55 UTC
Can you move this to oVirt?

Comment 6 Tal Nisan 2015-11-23 17:13:19 UTC
It's already on oVirt

Comment 7 Red Hat Bugzilla Rules Engine 2015-11-27 04:37:38 UTC
Bug tickets that are moved to testing must have a target release set to make sure the tester knows what to test. Please set the correct target release before moving to ON_QA.

Comment 8 Sandro Bonazzola 2015-12-01 15:07:05 UTC
This bug is referenced in git log for ovirt-engine-3.6.1.1.
Please set target release to 3.6.1.1 accordingly unless additional patches are needed.

Comment 9 Elad 2015-12-09 09:05:09 UTC
MoveOrCopyImage rollback for a non-template disk (tested with floating and attached VM disks) includes the deletion of the leftover image:

2015-12-09 08:59:10,463 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (org.ovirt.thread.pool-7-thread-15) [34268c23] Discovered 1 tasks on Storage Pool 'Default', 0 added to manager.
2015-12-09 08:59:10,569 ERROR [org.ovirt.engine.core.bll.MoveOrCopyDiskCommand] (org.ovirt.thread.pool-7-thread-26) [72e7aa15] Ending command 'org.ovirt.engine.core.bll.MoveOrCopyDiskCommand' with failure.
2015-12-09 08:59:10,760 ERROR [org.ovirt.engine.core.bll.CopyImageGroupCommand] (org.ovirt.thread.pool-7-thread-26) [5db63470] Ending command 'org.ovirt.engine.core.bll.CopyImageGroupCommand' with failure.
2015-12-09 08:59:10,779 INFO  [org.ovirt.engine.core.bll.RemoveImageCommand] (org.ovirt.thread.pool-7-thread-26) [116f8ea4] Running command: RemoveImageCommand internal: true. Entities affected :  ID: 504c7595-7ca5-459c-be6f-44a3d7f1b5d7 Type: Storage


Verified using 
rhevm-3.6.1.1-0.1.el6.noarch
vdsm-4.17.12-0.el7ev.noarch
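
The verification log above shows the fixed flow: MoveOrCopyDiskCommand and CopyImageGroupCommand end with failure, and an internal RemoveImageCommand then deletes the leftover image record. A toy Python sketch of that cascade (illustrative names only; the real commands are Java CommandBase subclasses in the engine):

```python
# Toy model of the rollback cascade from the verification log.
# The database is represented as a plain set of image IDs.

class RemoveImageCommand:
    def __init__(self, db, image_id):
        self.db = db
        self.image_id = image_id

    def execute(self):
        # Delete the leftover image record from the (toy) database.
        self.db.discard(self.image_id)


class CopyImageGroupCommand:
    def __init__(self, db, image_id):
        self.db = db
        self.image_id = image_id

    def end_with_failure(self):
        # On failure, launch an internal RemoveImageCommand -- the step
        # whose absence caused the stale disk entry in this bug.
        RemoveImageCommand(self.db, self.image_id).execute()


class MoveOrCopyDiskCommand:
    def __init__(self, child):
        self.child = child

    def end_with_failure(self):
        # The parent command propagates the failure to its child command.
        self.child.end_with_failure()
```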

Comment 10 Sandro Bonazzola 2015-12-16 12:22:23 UTC
According to verification status and target milestone this issue should be fixed in oVirt 3.6.1. Closing current release.

