Bug 1248670 - 2nd LSM after failed 1st LSM due to inaccessible Storage Domain fails
Status: CLOSED DUPLICATE of bug 1034856
Product: ovirt-engine
Classification: oVirt
Component: General
Version: 3.6.0
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-3.6.1
Target Release: 3.6.1
Assigned To: Daniel Erez
Kevin Alon Goldblatt
storage
: Reopened
Depends On:
Blocks:
Reported: 2015-07-30 10:39 EDT by Kevin Alon Goldblatt
Modified: 2016-03-10 10:05 EST (History)
10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-11-10 08:28:33 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
ylavi: ovirt‑3.6.z?
ylavi: planning_ack?
ylavi: devel_ack?
ylavi: testing_ack?


Attachments
engine, vdsm, server logs (688.62 KB, application/x-gzip)
2015-11-10 08:04 EST, Kevin Alon Goldblatt

Description Kevin Alon Goldblatt 2015-07-30 10:39:45 EDT
Description of problem:
Failed an LSM (live storage migration) operation by blocking access to the target storage domain during the LSM session. After unblocking access to the storage domain, ran the LSM again - the second LSM operation just hangs

Version-Release number of selected component (if applicable):
vdsm-4.17.0-1054.git562e711.el7.noarch
ovirt-engine-3.6.0-0.0.master.20150627185750.git6f063c1.el6.noarch

How reproducible:
Happened at least once

Steps to Reproduce:
1. Created VM with block, nfs and gluster disks
2. Started the VM
3. Ran LSM from the gluster domain to a block domain - just after the automatic snapshot was created, blocked access to the block storage domain using iptables on the vdsm host >>>>> as expected, the LSM operation failed
4. Unblocked access to the block storage domain and ran the LSM again >>>>> the operation just hangs indefinitely
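The fault-injection step above can be sketched as a small helper (a minimal sketch; the storage IP and the exact iptables rule shape are illustrative assumptions, not taken from this report):

```python
def iptables_cmd(action, storage_ip):
    """Build the iptables rule used to block/unblock the storage path.

    action: "-A" appends the rule (blocks traffic to the storage domain),
            "-D" deletes it (restores access).
    The OUTPUT/DROP rule and storage_ip are illustrative assumptions.
    """
    return ["iptables", action, "OUTPUT", "-d", storage_ip, "-j", "DROP"]

# On the vdsm host (as root), something along these lines:
#   import subprocess
#   subprocess.check_call(iptables_cmd("-A", "10.35.0.99"))  # block target SD
#   ... wait for the LSM to fail ...
#   subprocess.check_call(iptables_cmd("-D", "10.35.0.99"))  # unblock
```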

Actual results:
After a planned LSM failure, caused by blocking access to the storage domain, unblocking access and running another LSM operation hangs indefinitely

Expected results:
Should be able to run the LSM again after unblocking access to the storage domain


Additional info:

Engine log:
-------------------------------------------------------

2015-07-30 10:47:59,287 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-8-thread-13) [4c00c4c9] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command '11cba6a1-72d9-4ac2-85f5-863a98ad6530'
2015-07-30 10:47:59,287 INFO  [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (org.ovirt.thread.pool-8-thread-13) [4c00c4c9] CommandMultiAsyncTasks::AttachTask: Attaching task '1e3549fc-b433-4a15-b6bd-9c6fcc1e97a9' to command '11cba6a1-72d9-4ac2-85f5-863a98ad6530'.
2015-07-30 10:47:59,299 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (org.ovirt.thread.pool-8-thread-13) [4c00c4c9] Adding task '1e3549fc-b433-4a15-b6bd-9c6fcc1e97a9' (Parent Command 'LiveMigrateVmDisks', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2015-07-30 10:47:59,378 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-13) [4c00c4c9] Correlation ID: 35e3f4d9, Job ID: de363114-8e42-464e-9ced-cf3d4c72e780, Call Stack: null, Custom Event ID: -1, Message: Snapshot 'Auto-generated for Live Storage Migration' creation for VM 'vm16' was initiated by admin@internal.




vdsm.log
--------------------------------------------------------------------

Thread-101419::DEBUG::2015-07-30 14:01:10,016::stompreactor::304::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-07-30 14:01:10,081::stompreactor::235::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command=u'SEND'>
JsonRpcServer::DEBUG::2015-07-30 14:01:10,082::__init__::533::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-101420::DEBUG::2015-07-30 14:01:10,082::__init__::496::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Host.getAllTasksStatuses' in bridge with {}
Thread-101420::DEBUG::2015-07-30 14:01:10,083::task::595::Storage.TaskManager.Task::(_updateState) Task=`27cb5a47-66c7-4f59-8831-ba63a76af31f`::moving from state init -> state preparing
Thread-101420::INFO::2015-07-30 14:01:10,083::logUtils::48::dispatcher::(wrapper) Run and protect: getAllTasksStatuses(spUUID=None, options=None)
Thread-101420::DEBUG::2015-07-30 14:01:10,084::taskManager::112::Storage.TaskManager::(getAllTasksStatuses) Entry.
Thread-101420::DEBUG::2015-07-30 14:01:10,084::taskManager::103::Storage.TaskManager::(getTaskStatus) Entry. taskID: 1e3549fc-b433-4a15-b6bd-9c6fcc1e97a9
Thread-101420::DEBUG::2015-07-30 14:01:10,084::taskManager::106::Storage.TaskManager::(getTaskStatus) Return. Response: {'code': 0, 'message': 'running job 1 of 1', 'taskState': 'running', 'taskResult': '', 'taskID': '1e3549fc-b433-4a15-b6bd-9c6fcc1e97a9'}
Thread-101420::DEBUG::2015-07-30 14:01:10,084::taskManager::123::Storage.TaskManager::(getAllTasksStatuses) Return: {'1e3549fc-b433-4a15-b6bd-9c6fcc1e97a9': {'code': 0, 'message': 'running job 1 of 1', 'taskState': 'running', 'taskResult': '', 'taskID': '1e3549fc-b433-4a15-b6bd-9c6fcc1e97a9'}}
Thread-101420::INFO::2015-07-30 14:01:10,084::logUtils::51::dispatcher::(wrapper) Run and protect: getAllTasksStatuses, Return response: {'allTasksStatus': {'1e3549fc-b433-4a15-b6bd-9c6fcc1e97a9': {'code': 0, 'message': 'running job 1 of 1', 'taskState': 'running', 'taskResult': '', 'taskID': '1e3549fc-b433-4a15-b6bd-9c6fcc1e97a9'}}}
Thread-101420::DEBUG::2015-07-30 14:01:10,084::task::1191::Storage.TaskManager.Task::(prepare) Task=`27cb5a47-66c7-4f59-8831-ba63a76af31f`::finished: {'allTasksStatus': {'1e3549fc-b433-4a15-b6bd-9c6fcc1e97a9': {'code': 0, 'message': 'running job 1 of 1', 'taskState': 'running', 'taskResult': '', 'taskID': '1e3549fc-b433-4a15-b6bd-9c6fcc1e97a9'}}}
Thread-101420::DEBUG::2015-07-30 14:01:10,085::task::595::Storage.TaskManager.Task::(_updateState) Task=`27cb5a47-66c7-4f59-8831-ba63a76af31f`::moving from state preparing -> state finished
Thread-101420::DEBUG::2015-07-30 14:01:10,085::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-101420::DEBUG::2015-07-30 14:01:10,085::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Comment 1 Daniel Erez 2015-09-07 08:27:50 EDT
Hi Kevin,

Can you please attach the full engine and vdsm logs?
Comment 2 Red Hat Bugzilla Rules Engine 2015-10-19 06:51:54 EDT
Target release should be set once a package build is known to fix an issue. Since this bug is not in MODIFIED status, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
Comment 3 Yaniv Lavi (Dary) 2015-10-29 08:24:36 EDT
In oVirt, testing is done on a single release by default. Therefore I'm removing the 4.0 flag. If you think this bug must be tested in 4.0 as well, please re-add the flag. Please note we might not have the testing resources to handle the 4.0 clone.
Comment 4 Daniel Erez 2015-11-09 08:28:09 EST
Not enough information to investigate the issue since full logs are missing. Please reopen if able to reproduce. Thanks!
Comment 5 Kevin Alon Goldblatt 2015-11-10 08:01:59 EST
Ran the scenario again with code:
------------------------------------
vdsm-4.17.10.1-0.el7ev.noarch
rhevm-3.6.0.3-0.1.el6.noarch

Libvirt rpms on host:
------------------------------------
libvirt-client-1.2.17-13.el7.x86_64
libvirt-python-1.2.17-2.el7.x86_64
libvirt-daemon-driver-interface-1.2.17-13.el7.x86_64
libvirt-daemon-driver-storage-1.2.17-13.el7.x86_64
libvirt-daemon-1.2.17-13.el7.x86_64
libvirt-daemon-driver-nwfilter-1.2.17-13.el7.x86_64
libvirt-daemon-config-nwfilter-1.2.17-13.el7.x86_64
libvirt-daemon-driver-nodedev-1.2.17-13.el7.x86_64
libvirt-daemon-driver-secret-1.2.17-13.el7.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7.x86_64
libvirt-daemon-kvm-1.2.17-13.el7.x86_64
libvirt-daemon-driver-network-1.2.17-13.el7.x86_64
libvirt-lock-sanlock-1.2.17-13.el7.x86_64

qemu rpms on host:
--------------------------------
ipxe-roms-qemu-20130517-7.gitc4bce43.el7.noarch
qemu-kvm-tools-rhev-2.3.0-31.el7.x86_64
qemu-kvm-common-rhev-2.3.0-31.el7.x86_64
libvirt-daemon-driver-qemu-1.2.17-13.el7.x86_64
qemu-img-rhev-2.3.0-31.el7.x86_64
qemu-kvm-rhev-2.3.0-31.el7.x86_64



Ran the following scenario:
------------------------------------
Steps to reproduce:

1. Create a VM with 4 disks (2 block - 1 preallocated + 1 thin on SD block1, and 2 nfs - 1 preallocated + 1 thin on SD nfs1111)
2. Start the VM
3. Start migration of the block disks to nfs1111 and of the nfs disks to SD block2
4. After the automatic snapshot is created, block the path from the host to the target SD block2 using iptables >>>> the block disks are successfully moved from SD block1 to SD nfs1112, and as expected the nfs disks fail to move from nfs1111 to SD block2
5. Unblock the path from the host to SD block2
6. Run the move of all disks from SD nfs1111 to target SD block2 again >>>> the operation fails


Engine log:
-------------------------------------------
2015-11-10 14:19:43,978 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-7-thread-26) [7f65dd44] Lock freed to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[e4874da8-55ac-4804-b67f-8e43c514fe9a=<VM, ACTION_TYPE_FAILED_DISK_IS_BEING_MIGRATED$DiskName vmtest1_Disk4>]'}'
2015-11-10 14:19:43,978 ERROR [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-7-thread-19) [43f467e0] Ending command 'org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand' with failure.
2015-11-10 14:19:43,988 INFO  [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-7-thread-19) [43f467e0] Lock freed to object 'EngineLock:{exclusiveLocks='null', sharedLocks='[e4874da8-55ac-4804-b67f-8e43c514fe9a=<VM, ACTION_TYPE_FAILED_DISK_IS_BEING_MIGRATED$DiskName vmtest1_Disk1>]'}'
2015-11-10 14:19:44,011 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-43) [619bad6] Correlation ID: 619bad6, Job ID: d6a0fcbf-16ec-4e93-8498-27da53d6afa3, Call Stack: null, Custom Event ID: -1, Message: User admin@internal have failed to move disk vmtest1_Disk2 to domain block22.
2015-11-10 14:19:44,011 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-7-thread-43) [619bad6] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'LiveMigrateDisk' completed, handling the result.
2015-11-10 14:19:44,011 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-7-thread-43) [619bad6] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'LiveMigrateDisk' succeeded, clearing tasks.
2015-11-10 14:19:44,013 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-9) [36e69092] Correlation ID: 36e69092, Job ID: bcc9b073-5c27-4604-ba14-18948a751234, Call Stack: null, Custom Event ID: -1, Message: User admin@internal have failed to move disk vmtest1_Disk3 to domain block22.
2015-11-10 14:19:44,013 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-7-thread-9) [36e69092] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'LiveMigrateDisk' completed, handling the result.
2015-11-10 14:19:44,014 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-7-thread-9) [36e69092] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'LiveMigrateDisk' succeeded, clearing tasks.
2015-11-10 14:19:44,014 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-7-thread-43) [619bad6] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '5f23ff11-fb1c-4cfa-9b75-fc259b1e910b'
2015-11-10 14:19:44,015 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-7-thread-43) [619bad6] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{runAsync='true', storagePoolId='87566c2f-6c56-4b30-9669-0231fa11e957', ignoreFailoverLimit='false', taskId='5f23ff11-fb1c-4cfa-9b75-fc259b1e910b'}), log id: 257fa5aa
2015-11-10 14:19:44,027 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-7-thread-9) [36e69092] SPMAsyncTask::ClearAsyncTask: Attempting to clear task '541da194-da05-443c-a831-65fe3330231b'
2015-11-10 14:19:44,027 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-7-thread-9) [36e69092] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{runAsync='true', storagePoolId='87566c2f-6c56-4b30-9669-0231fa11e957', ignoreFailoverLimit='false', taskId='541da194-da05-443c-a831-65fe3330231b'}), log id: 728f9ad7
2015-11-10 14:19:44,022 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7-thread-43) [619bad6] START, HSMClearTaskVDSCommand(HostName = blond-vdsf, HSMTaskGuidBaseVDSCommandParameters:{runAsync='true', hostId='e338a2b6-7a21-4a1a-b305-895682b0b7d2', taskId='5f23ff11-fb1c-4cfa-9b75-fc259b1e910b'}), log id: 79ffb18
2015-11-10 14:19:44,041 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-26) [7f65dd44] Correlation ID: 7f65dd44, Job ID: efc4374d-9c23-4858-a321-d8532a709184, Call Stack: null, Custom Event ID: -1, Message: User admin@internal have failed to move disk vmtest1_Disk4 to domain block22.
2015-11-10 14:19:44,042 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-7-thread-26) [7f65dd44] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'LiveMigrateDisk' completed, handling the result.
2015-11-10 14:19:44,042 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-7-thread-26) [7f65dd44] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'LiveMigrateDisk' succeeded, clearing tasks.
2015-11-10 14:19:44,042 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-19) [43f467e0] Correlation ID: 43f467e0, Job ID: 3cebf4ce-762f-4ccb-b2a7-e5ab5ff9874a, Call Stack: null, Custom Event ID: -1, Message: User admin@internal have failed to move disk vmtest1_Disk1 to domain block22.
2015-11-10 14:19:44,042 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-7-thread-19) [43f467e0] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'LiveMigrateDisk' completed, handling the result.
2015-11-10 14:19:44,042 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-7-thread-19) [43f467e0] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'LiveMigrateDisk' succeeded, clearing tasks.


vdsm logs
----------------------------------------------------
40cc-b081-f2bfd006814a for image's volumes
541da194-da05-443c-a831-65fe3330231b::DEBUG::2015-11-10 14:19:36,497::lvm::463::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-65935::DEBUG::2015-11-10 14:19:36,500::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Task.clear' in bridge with {u'taskID': u'9b7ab179-4277-4ee8-b64a-8a41838207ea'}
57ca063b-d691-4fef-a539-e04434dad711::DEBUG::2015-11-10 14:19:36,506::task::752::Storage.TaskManager.Task::(_save) Task=`57ca063b-d691-4fef-a539-e04434dad711`::_save: orig /rhev/data-center/87566c2f-6c56-4b30-9669-0231fa11e957/mastersd/master/tasks/57ca063b-d691-4fef-a539-e04434dad711 temp /rhev/data-center/87566c2f-6c56-4b30-9669-0231fa11e957/mastersd/master/tasks/57ca063b-d691-4fef-a539-e04434dad711.temp
541da194-da05-443c-a831-65fe3330231b::DEBUG::2015-11-10 14:19:36,508::lvm::463::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
541da194-da05-443c-a831-65fe3330231b::ERROR::2015-11-10 14:19:36,515::blockVolume::459::Storage.Volume::(validateImagePath) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/blockVolume.py", line 457, in validateImagePath
    os.mkdir(imageDir, 0o755)
OSError: [Errno 2] No such file or directory: '/rhev/data-center/87566c2f-6c56-4b30-9669-0231fa11e957/a6cfaaeb-b642-4027-9adb-af0b81e45d4d/images/d64e3540-1e5e-4386-b557-0a3cda2b1033'
41784ca0-23ee-4842-b6f0-1c4aa397543b::DEBUG::2015-11-10 14:19:36,522::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 91b862bc-02e8-4c00-b49d-63b8ad4197f4
541da194-da05-443c-a831-65fe3330231b::WARNING::2015-11-10 14:19:36,522::resourceManager::591::Storage.ResourceManager::(registerResource) Resource factory failed to create resource 'a6cfaaeb-b642-4027-9adb-af0b81e45d4d_imageNS.d64e3540-1e5e-4386-b557-0a3cda2b1033'. Canceling request.
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/resourceManager.py", line 587, in registerResource
    obj = namespaceObj.factory.createResource(name, lockType)
  File "/usr/share/vdsm/storage/resourceFactories.py", line 193, in createResource
    lockType)
  File "/usr/share/vdsm/storage/resourceFactories.py", line 122, in __getResourceCandidatesList
    imgUUID=resourceName)
  File "/usr/share/vdsm/storage/image.py", line 186, in getChain
    srcVol = volclass(self.repoPath, sdUUID, imgUUID, uuidlist[0])
  File "/usr/share/vdsm/storage/blockVolume.py", line 80, in __init__
    volume.Volume.__init__(self, repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/share/vdsm/storage/volume.py", line 181, in __init__
    self.validate()
  File "/usr/share/vdsm/storage/blockVolume.py", line 89, in validate
    volume.Volume.validate(self)
  File "/usr/share/vdsm/storage/volume.py", line 193, in validate
    self.validateImagePath()
  File "/usr/share/vdsm/storage/blockVolume.py", line 460, in validateImagePath
    raise se.ImagePathError(imageDir)
ImagePathError: Image path does not exist or cannot be accessed/created: (u'/rhev/data-center/87566c2f-6c56-4b30-9669-0231fa11e957/a6cfaaeb-b642-4027-9adb-af0b81e45d4d/images/d64e3540-1e5e-4386-b557-0a3cda2b1033',)
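The failing path in the traceback above can be paraphrased as follows. This is a sketch of the `validateImagePath` logic as it is visible in the traceback, not the actual vdsm source; the `ImagePathError` class here is a local stand-in for `se.ImagePathError`:

```python
import os

class ImagePathError(Exception):
    """Local stand-in for vdsm's se.ImagePathError."""

def validate_image_path(image_dir):
    # If the image directory is missing, try to create it, as
    # blockVolume.py:validateImagePath does. When the parent path
    # itself is gone (e.g. the domain tree under /rhev/data-center),
    # os.mkdir fails with ENOENT and the failure surfaces as
    # ImagePathError, which cancels the resource registration.
    if not os.path.isdir(image_dir):
        try:
            os.mkdir(image_dir, 0o755)
        except OSError as e:
            raise ImagePathError(image_dir) from e
```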
Thread-65935::DEBUG::2015-11-10 14:19:36,528::task::595::Storage.TaskManager.Task::(_updateState) Task=`dbfd7724-1b90-44c0-a9d9-cb4a5d657976`::moving from state init -> state preparing
5f23ff11-fb1c-4cfa-9b75-fc259b1e910b::DEBUG::2015-11-10 14:19:36,535::fileVolume::535::Storage.Volume::(validateVolumePath) validate path for 0be134a5-0b82-40cb-be4b-5f69883a95b4
41784ca0-23ee-4842-b6f0-1c4aa397543b::INFO::2015-11-10 14:19:36,545::image::225::Storage.Image::(getChain) sdUUID=95b5247e-6c02-46d8-ab39-77f25d782d7f imgUUID=b16e2bd6-81ce-4538-af41-7fcfa5898e12 chain=[<storage.fileVolume.FileVolume object at 0x7f1ca41940d0>, <storage.fileVolume.FileVolume object at 0x7f1ca42ab650>, <storage.fileVolume.FileVolume object at 0x7f1d3809c310>] 
57ca063b-d691-4fef-a539-e04434dad711::ERROR::2015-11-10 14:19:36,565::image::405::Storage.Image::(_createTargetImage) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 385, in _createTargetImage
    srcVolUUID=volParams['parent'])
  File "/usr/share/vdsm/storage/sd.py", line 488, in createVolume
    initialSize=initialSize)
  File "/usr/share/vdsm/storage/volume.py", line 415, in create
    imgPath = image.Image(repoPath).create(sdUUID, imgUUID)
  File "/usr/share/vdsm/storage/image.py", line 127, in create
    os.mkdir(imageDir)
Comment 6 Kevin Alon Goldblatt 2015-11-10 08:04 EST
Created attachment 1092262 [details]
engine, vdsm, server logs

Added logs
Comment 7 Daniel Erez 2015-11-10 08:28:33 EST
According to the logs [1], the error occurred during the VmReplicateDiskFinish phase. At that point we don't perform a cleanup of the destination volumes, since we can't determine whether the VM still uses the source images or has already migrated to the destination images; i.e. we can remove neither the destination nor the source volumes. This is done to avoid data loss in the cleanup process. So the destination must be cleaned up manually by the user before executing another migration. This issue is similar to the one described at https://bugzilla.redhat.com/show_bug.cgi?id=1034856#c8. Marking as a duplicate of that bug.

[1] 2015-11-10 12:20:37,429 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (org.ovirt.thread.pool-7-thread-7) [1e751978] START, VmReplicateDiskFinishVDSCommand(HostName = blond-vdsf, VmReplicateDiskParameters:{runAsync='true', hostId='e338a2b6-7a21-4a1a-b305-895682b0b7d2', vmId='84aea225-3622-4c7f-ab8d-a75d4e0b0020'}), log id: 4f493047
2015-11-10 12:20:37,460 ERROR [org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand] (org.ovirt.thread.pool-7-thread-34) [545bba31] Ending command 'org.ovirt.engine.core.bll.lsm.LiveMigrateDiskCommand' with failure.
2015-11-10 12:20:37,461 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (org.ovirt.thread.pool-7-thread-34) [545bba31] START, VmReplicateDiskFinishVDSCommand(HostName = blond-vdsf, VmReplicateDiskParameters:{runAsync='true', hostId='e338a2b6-7a21-4a1a-b305-895682b0b7d2', vmId='84aea225-3622-4c7f-ab8d-a75d4e0b0020'}), log id: 63b351d5
2015-11-10 12:20:37,709 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-7-thread-27) [] domain '2697444f-6159-4c6f-afb4-5bd611a1d10d:block2' in problem. vds: 'blond-vdsf'
2015-11-10 12:20:37,711 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-7-thread-27) [] domain '8ac1ea74-091c-478d-96e1-563d44173bb3:block1' in problem. vds: 'blond-vdsf'
2015-11-10 12:20:37,780 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (org.ovirt.thread.pool-7-thread-34) [545bba31] Failed in 'VmReplicateDiskFinishVDS' method
2015-11-10 12:20:37,786 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-34) [545bba31] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM blond-vdsf command failed: Drive replication error
2015-11-10 12:20:37,786 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (org.ovirt.thread.pool-7-thread-34) [545bba31] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand' return value 'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc [code=55, message=Drive replication error]]'
2015-11-10 12:20:37,786 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (org.ovirt.thread.pool-7-thread-34) [545bba31] HostName = blond-vdsf
2015-11-10 12:20:37,787 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (org.ovirt.thread.pool-7-thread-34) [545bba31] Command 'VmReplicateDiskFinishVDSCommand(HostName = blond-vdsf, VmReplicateDiskParameters:{runAsync='true', hostId='e338a2b6-7a21-4a1a-b305-895682b0b7d2', vmId='84aea225-3622-4c7f-ab8d-a75d4e0b0020'})' execution failed: VDSGenericException: VDSErrorException: Failed to VmReplicateDiskFinishVDS, error = Drive replication error, code = 55
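The cleanup decision described in comment 7 can be sketched as follows. The phase names are illustrative assumptions modeled on the commands visible in the logs, not the engine's actual API:

```python
# Phases in which the VM is still known to write only to the source
# images (names illustrative, not the engine's enum).
SAFE_TO_CLEAN = {"CreateSnapshot", "VmReplicateDiskStart"}

def may_auto_clean_destination(failed_phase):
    # Before replication finishes, the partially created destination
    # volumes can be deleted safely. Once VmReplicateDiskFinish has
    # started and failed, the engine cannot tell which side the VM is
    # actually using, so it deletes neither side and leaves cleanup
    # of the destination to the user.
    return failed_phase in SAFE_TO_CLEAN
```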

*** This bug has been marked as a duplicate of bug 1034856 ***
