Created attachment 862053 [details]
logs

Description of problem:
Since Data Centers now allow both NFS and iSCSI domains under the same pool, I have a question regarding move image. We used to block moving an image from the export domain (which is NFS) to iSCSI domains because of the raw+sparse combination; that limitation was lifted about two years ago. If we are able to export a VM with both disks and import the disks back to a new (iSCSI) domain, can we not do the same for an NFS -> iSCSI data domain image move instead of blocking the operation with CanDoAction?

Version-Release number of selected component (if applicable):
Tested this in oVirt 3.4 (ovirt-engine-backend-3.4.0-0.7.beta2.el6.noarch), but opening this as an RFE for RHEV-M.

How reproducible:
100%

Steps to Reproduce:
1. Create a pool with two domains, one NFS and one iSCSI.
2. Create a VM with a thin provision disk on the NFS domain.
3. Try to move the disk to the iSCSI domain.
4. Export the VM to the export domain.
5. Import the VM and change the target domain to the iSCSI domain.

Actual results:
Moving the disk fails with a raw+sparse format error, but we are able to export and import the VM and change domains.

Expected results:
Although there is a format change from raw to cow during import (the user still just sees thin provision), we are able to import the image from NFS to iSCSI. It would be nice to do the same for moving an image between data domains, as well as when creating a template or copying a template disk.

Additional info:
logs

This is the disk I created:

2014-02-11 16:39:31,629 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (ajp--127.0.0.1-8702-11) [6a3bf4b5] START, CreateImageVDSCommand( storagePoolId = 53771658-e350-42bc-b505-22d11dca7678, ignoreFailoverLimit = false, storageDomainId = 8c649966-0b58-40ae-a291-e5bdd789107e, imageGroupId = b6a063d9-341b-4171-b339-3249dfeba30f, imageSizeInBytes = 3221225472, volumeFormat = RAW, newImageId = 6fce4cf1-92b8-4eeb-b1ad-b83d9904c052, newImageDescription = ), log id: 3f89669b

Same disk during import:

2014-02-11 16:48:54,769 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (org.ovirt.thread.pool-6-thread-39) [16e0ff43] START, CopyImageVDSCommand( storagePoolId = 53771658-e350-42bc-b505-22d11dca7678, ignoreFailoverLimit = false, storageDomainId = ae82505c-54bc-4795-99e3-2cc1b1c7c6b5, imageGroupId = b6a063d9-341b-4171-b339-3249dfeba30f, imageId = 6fce4cf1-92b8-4eeb-b1ad-b83d9904c052, dstImageGroupId = b6a063d9-341b-4171-b339-3249dfeba30f, vmId = c2fb6923-9d0c-40ef-b8b7-92538da945d4, dstImageId = 6fce4cf1-92b8-4eeb-b1ad-b83d9904c052, imageDescription = , dstStorageDomainId = bb179c25-dd30-48ef-81ca-fefd5c5ca094, copyVolumeType = LeafVol, volumeFormat = COW, preallocate = Sparse, postZero = false, force = true), log id: 2d03a46c

2014-02-11 16:48:54,771 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (org.ovirt.thread.pool-6-thread-39) [16e0ff43] -- executeIrsBrokerCommand: calling 'copyImage' with two new parameters: description and UUID

2014-02-11 16:48:54,771 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (org.ovirt.thread.pool-6-thread-39) [16e0ff43] -- copyImage parameters: sdUUID=ae82505c-54bc-4795-99e3-2cc1b1c7c6b5 spUUID=53771658-e350-42bc-b505-22d11dca7678 vmGUID=c2fb6923-9d0c-40ef-b8b7-92538da945d4 srcImageGUID=b6a063d9-341b-4171-b339-3249dfeba30f srcVolUUID=6fce4cf1-92b8-4eeb-b1ad-b83d9904c052 dstImageGUID=b6a063d9-341b-4171-b339-3249dfeba30f dstVolUUID=6fce4cf1-92b8-4eeb-b1ad-b83d9904c052 descr= dstSdUUID=bb179c25-dd30-48ef-81ca-fefd5c5ca094
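
To make the conversion seen in the logs above concrete, here is a minimal illustrative sketch. The class, enum, and method names (ImportFormatMapping, forImport, etc.) are hypothetical and are not the actual ovirt-engine code; the sketch only models the mapping visible in the CreateImageVDSCommand/CopyImageVDSCommand entries, where a RAW/Sparse volume created on the NFS domain is recreated as COW/Sparse when imported into the block (iSCSI) domain.

// Hypothetical sketch, not actual ovirt-engine code.
public class ImportFormatMapping {

    enum VolumeFormat { RAW, COW }
    enum VolumeType { SPARSE, PREALLOCATED }
    enum DomainKind { FILE, BLOCK }

    static final class TargetVolume {
        final VolumeFormat format;
        final VolumeType type;

        TargetVolume(VolumeFormat format, VolumeType type) {
            this.format = format;
            this.type = type;
        }
    }

    // Chooses the destination format when importing a volume into a domain,
    // keeping the disk "thin provisioned" from the user's point of view.
    static TargetVolume forImport(VolumeFormat srcFormat, VolumeType srcType, DomainKind dst) {
        if (dst == DomainKind.BLOCK && srcFormat == VolumeFormat.RAW && srcType == VolumeType.SPARSE) {
            // raw+sparse is not representable on block storage, so the import
            // path recreates the volume as COW (qcow2) while keeping it sparse.
            return new TargetVolume(VolumeFormat.COW, VolumeType.SPARSE);
        }
        // Otherwise keep the source combination unchanged.
        return new TargetVolume(srcFormat, srcType);
    }

    public static void main(String[] args) {
        TargetVolume t = forImport(VolumeFormat.RAW, VolumeType.SPARSE, DomainKind.BLOCK);
        // Prints COW/SPARSE, matching volumeFormat = COW, preallocate = Sparse in the log.
        System.out.println(t.format + "/" + t.type);
    }
}

The RFE in this bug is essentially to apply this same mapping on the move/copy paths between data domains instead of failing them in CanDoAction.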
Tal, don't you have a bunch of patches addressing this issue already?
Indeed, updated bug accordingly
Test case was executed: https://tcms.engineering.redhat.com/run/136567/
Verified using av9.
Following the results from bug 1098258 and our discussion, reopening.
Moving to MODIFIED as bug 1098258 is MODIFIED too.
Verified: when moving a raw-sparse disk from a file domain to a block domain it becomes raw-preallocated. Following our discussion this change is OK; the operation works as expected.
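
For reference, a minimal sketch of the allocation mapping verified here for the move path; MoveFormatMapping and allocationForMove are hypothetical names, not actual ovirt-engine code, and the sketch only restates the observed behavior that a RAW sparse volume moved from a file domain to a block domain ends up RAW preallocated.

// Hypothetical sketch, not actual ovirt-engine code.
public class MoveFormatMapping {

    enum VolumeType { SPARSE, PREALLOCATED }
    enum DomainKind { FILE, BLOCK }

    // Allocation policy chosen for a RAW volume moved to the destination domain.
    static VolumeType allocationForMove(VolumeType srcType, DomainKind dst) {
        // Block storage cannot keep a RAW volume sparse, so the destination
        // volume is fully preallocated; on file domains the source type is kept.
        return (dst == DomainKind.BLOCK) ? VolumeType.PREALLOCATED : srcType;
    }

    public static void main(String[] args) {
        // Prints PREALLOCATED, matching the verified behavior above.
        System.out.println(allocationForMove(VolumeType.SPARSE, DomainKind.BLOCK));
    }
}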
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2014-0506.html