Bug 1663135 - RFE: importing vm from KVM external provider should work also to block based SD [NEEDINFO]
Summary: RFE: importing vm from KVM external provider should work also to block based SD
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 4.2.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.4.0
Target Release: ---
Assignee: Steven Rosenberg
QA Contact: Nisim Simsolo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-03 09:45 UTC by Marian Jankular
Modified: 2020-01-08 09:01 UTC
CC List: 17 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, virtual machine (VM) imports from sparse storage assumed the target also used sparse storage. However, block storage does not support sparse allocation. As a result, VM imports from NFS-based storage to block storage failed. The current release fixes this issue: Imports to block storage now use preallocated volumes and work as expected.
Clone Of:
Environment:
Last Closed:
oVirt Team: Virt
Target Upstream Version:
srosenbe: needinfo? (nsoffer)


Attachments
reassigned, engine.log (7.71 MB, text/plain), 2019-03-25 09:04 UTC, Nisim Simsolo
reassigned, vdsm.log (10.63 MB, text/plain), 2019-03-25 09:06 UTC, Nisim Simsolo
reassigned engine.log Apr 4 2019 (7.49 MB, text/plain), 2019-04-04 11:08 UTC, Nisim Simsolo
vdsm.log (553.16 KB, application/x-xz), 2019-09-10 14:50 UTC, Nisim Simsolo
engine.log (336.31 KB, application/x-xz), 2019-09-10 14:51 UTC, Nisim Simsolo
supervdsm.log (2.82 MB, text/plain), 2019-09-10 14:52 UTC, Nisim Simsolo


Links
System ID Priority Status Summary Last Updated
oVirt gerrit 97716 None ABANDONED kvm2ovirt: Fixed Block Storage Failure 2020-02-25 16:05:39 UTC
oVirt gerrit 98013 None MERGED engine: Fixed Block Storage Failure 2020-02-25 16:05:39 UTC

Description Marian Jankular 2019-01-03 09:45:27 UTC
Description of problem:
Importing a VM from a KVM external provider should also work to a block-based SD.

Version-Release number of selected component (if applicable):
4.2.7

How reproducible:
Every time

Steps to Reproduce:
1. Customer would like the ability to import a VM from KVM to a block-based SD.

2. Customer says it was working in 4.1.
3. It is not possible after upgrading to 4.2.7.

Actual results:
The import fails.

Expected results:
The import succeeds.

Additional info:

Comment 2 Michal Skrivanek 2019-01-03 10:59:32 UTC
please add more details

Comment 8 Steven Rosenberg 2019-01-07 17:07:52 UTC
Could we obtain the logs for this issue, especially the vdsm and kvm logs so that we can see what is actually happening at the client's site?

Thank you.

Comment 9 Marian Jankular 2019-01-09 15:09:16 UTC
Hello,

It is the same as bug 1661070:

~~~
2018-12-19 13:49:39.602+00 | 2aeb6d21-7465-411e-8f97-6dc4307fb4ba | Failed to import Vm vredminedev to Data Center Default, Cluster Default
2018-12-19 13:49:30.158+00 | bd2ceaf6-64b5-4eb1-a655-88a3b9212181 | VM vprotectnode was started by admin@internal-authz (Host: rhev2).
2018-12-19 13:49:21.528+00 | 2aeb6d21-7465-411e-8f97-6dc4307fb4ba | The disk vredminedev was successfully added to VM vredminedev.
2018-12-19 13:49:13.488+00 | 2aeb6d21-7465-411e-8f97-6dc4307fb4ba | Starting to import Vm vredminedev to Data Center Default, Cluster Default
2018-12-19 13:49:13.443+00 | 2aeb6d21-7465-411e-8f97-6dc4307fb4ba | Add-Disk operation of 'vredminedev' was initiated by the system.
2018-12-19 13:49:12.961+00 | | VM vredminedev has MAC address(es) 52:54:00:85:a5:f6, which is/are out of its MAC pool definitions.
~~~

$ egrep '2aeb6d21-7465-411e-8f97-6dc4307fb4ba' var/log/ovirt-engine/engine.log
2018-12-19 14:49:12,679+01 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (default task-32) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Lock Acquired to object 'EngineLock:{exclusiveLocks='[vredminedev=VM_NAME, b51e67e8-bf6f-4295-b341-556e5c498191=VM]', sharedLocks=''}'
2018-12-19 14:49:12,904+01 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Running command: ImportVmFromExternalProviderCommand internal: false. Entities affected : ID: 86f315b0-7080-4e05-be9e-b5f6c862600e Type: StorageAction group IMPORT_EXPORT_VM with role type ADMIN
2018-12-19 14:49:12,966+01 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] EVENT_ID: MAC_ADDRESS_IS_EXTERNAL(925), VM vredminedev has MAC address(es) 52:54:00:85:a5:f6, which is/are out of its MAC pool definitions.
2018-12-19 14:49:13,080+01 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Lock Acquired to object 'EngineLock:{exclusiveLocks='[b51e67e8-bf6f-4295-b341-556e5c498191=VM_DISK_BOOT]', sharedLocks=''}'
2018-12-19 14:49:13,153+01 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Running command: AddDiskCommand internal: true. Entities affected : ID: b51e67e8-bf6f-4295-b341-556e5c498191 Type: VMAction group CONFIGURE_VM_STORAGE with role type USER, ID: 86f315b0-7080-4e05-be9e-b5f6c862600e Type: StorageAction group CREATE_DISK with role type USER
2018-12-19 14:49:13,191+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Running command: AddImageFromScratchCommand internal: true. Entities affected : ID: 86f315b0-7080-4e05-be9e-b5f6c862600e Type: Storage
2018-12-19 14:49:13,216+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Lock freed to object 'EngineLock:{exclusiveLocks='[b51e67e8-bf6f-4295-b341-556e5c498191=VM_DISK_BOOT]', sharedLocks=''}'
2018-12-19 14:49:13,245+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] START, CreateImageVDSCommand( CreateImageVDSCommandParameters:{storagePoolId='589b3a5e-0325-01ae-03ba-0000000002cc', ignoreFailoverLimit='false', storageDomainId='86f315b0-7080-4e05-be9e-b5f6c862600e', imageGroupId='cd9125d4-dbe6-4939-885e-0c3d98b52c8d', imageSizeInBytes='53687091200', volumeFormat='COW', newImageId='74cca120-7abe-4978-a4a3-82cd83470cb8', imageType='Sparse', newImageDescription='{"DiskAlias":"vredminedev","DiskDescription":""}', imageInitialSizeInBytes='53687091200'}), log id: 812d567
2018-12-19 14:49:13,247+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] -- executeIrsBrokerCommand: calling 'createVolume' with two new parameters: description and UUID
2018-12-19 14:49:13,294+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] FINISH, CreateImageVDSCommand, return: 74cca120-7abe-4978-a4a3-82cd83470cb8, log id: 812d567
2018-12-19 14:49:13,304+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'fb1f38f7-3a66-4ab6-b01f-23bc811f7590'
2018-12-19 14:49:13,304+01 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] CommandMultiAsyncTasks::attachTask: Attaching task 'd5a221bc-a9bc-49da-adda-6dfa751424ab' to command 'fb1f38f7-3a66-4ab6-b01f-23bc811f7590'.
2018-12-19 14:49:13,339+01 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Adding task 'd5a221bc-a9bc-49da-adda-6dfa751424ab' (Parent Command 'AddImageFromScratch', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2018-12-19 14:49:13,404+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] BaseAsyncTask::startPollingTask: Starting to poll task 'd5a221bc-a9bc-49da-adda-6dfa751424ab'.
2018-12-19 14:49:13,448+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] EVENT_ID: ADD_DISK_INTERNAL(2,036), Add-Disk operation of 'vredminedev' was initiated by the system.
2018-12-19 14:49:13,491+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-80) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] EVENT_ID: IMPORTEXPORT_STARTING_IMPORT_VM(1,165), Starting to import Vm vredminedev to Data Center Default, Cluster Default
2018-12-19 14:49:15,421+01 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-43) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Command 'AddDisk' (id: '23eeaf0f-2b76-41e0-a9d0-57917c1c70ba') waiting on child command id: 'fb1f38f7-3a66-4ab6-b01f-23bc811f7590' type:'AddImageFromScratch' to complete
2018-12-19 14:49:15,442+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-43) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Command 'ImportVmFromExternalProvider' (id: '6831f041-7294-4a93-b334-09672786d95d') waiting on child command id: '23eeaf0f-2b76-41e0-a9d0-57917c1c70ba' type:'AddDisk' to complete
2018-12-19 14:49:18,361+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engine-Thread-85) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Command [id=fb1f38f7-3a66-4ab6-b01f-23bc811f7590]: Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
2018-12-19 14:49:18,362+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-85) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' completed, handling the result.
2018-12-19 14:49:18,362+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-85) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'AddImageFromScratch' succeeded, clearing tasks.
2018-12-19 14:49:18,362+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-85) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] SPMAsyncTask::ClearAsyncTask: Attempting to clear task 'd5a221bc-a9bc-49da-adda-6dfa751424ab'
2018-12-19 14:49:18,365+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-85) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{storagePoolId='589b3a5e-0325-01ae-03ba-0000000002cc', ignoreFailoverLimit='false', taskId='d5a221bc-a9bc-49da-adda-6dfa751424ab'}), log id: 4d220165
2018-12-19 14:49:18,366+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-85) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] START, HSMClearTaskVDSCommand(HostName = rhev1, HSMTaskGuidBaseVDSCommandParameters:{hostId='613153ce-185a-4129-b4bf-5e711790ed8d', taskId='d5a221bc-a9bc-49da-adda-6dfa751424ab'}), log id: 19157a23
2018-12-19 14:49:18,375+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-85) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] FINISH, HSMClearTaskVDSCommand, log id: 19157a23
2018-12-19 14:49:18,375+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (EE-ManagedThreadFactory-engine-Thread-85) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] FINISH, SPMClearTaskVDSCommand, log id: 4d220165
2018-12-19 14:49:18,384+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engine-Thread-85) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] BaseAsyncTask::removeTaskFromDB: Removed task 'd5a221bc-a9bc-49da-adda-6dfa751424ab' from DataBase
2018-12-19 14:49:18,384+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (EE-ManagedThreadFactory-engine-Thread-85) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 'fb1f38f7-3a66-4ab6-b01f-23bc811f7590'
2018-12-19 14:49:19,449+01 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-45) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Command 'AddDisk' id: '23eeaf0f-2b76-41e0-a9d0-57917c1c70ba' child commands '[fb1f38f7-3a66-4ab6-b01f-23bc811f7590]' executions were completed, status 'SUCCEEDED'
2018-12-19 14:49:19,469+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-45) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Command 'ImportVmFromExternalProvider' (id: '6831f041-7294-4a93-b334-09672786d95d') waiting on child command id: '23eeaf0f-2b76-41e0-a9d0-57917c1c70ba' type:'AddDisk' to complete
2018-12-19 14:49:20,474+01 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' successfully.
2018-12-19 14:49:20,482+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Ending command 'org.ovirt.engine.core.bll.storage.disk.image.AddImageFromScratchCommand' successfully.
2018-12-19 14:49:20,494+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{storagePoolId='589b3a5e-0325-01ae-03ba-0000000002cc', ignoreFailoverLimit='false', storageDomainId='86f315b0-7080-4e05-be9e-b5f6c862600e', imageGroupId='cd9125d4-dbe6-4939-885e-0c3d98b52c8d', imageId='74cca120-7abe-4978-a4a3-82cd83470cb8'}), log id: 19bc6ba5
2018-12-19 14:49:20,497+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] START, GetVolumeInfoVDSCommand(HostName = rhev1, GetVolumeInfoVDSCommandParameters:{hostId='613153ce-185a-4129-b4bf-5e711790ed8d', storagePoolId='589b3a5e-0325-01ae-03ba-0000000002cc', storageDomainId='86f315b0-7080-4e05-be9e-b5f6c862600e', imageGroupId='cd9125d4-dbe6-4939-885e-0c3d98b52c8d', imageId='74cca120-7abe-4978-a4a3-82cd83470cb8'}), log id: 3b09d6a4
2018-12-19 14:49:20,660+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@124eb40a, log id: 3b09d6a4
2018-12-19 14:49:20,661+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@124eb40a, log id: 19bc6ba5
2018-12-19 14:49:20,723+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] START, PrepareImageVDSCommand(HostName = rhev2, PrepareImageVDSCommandParameters:{hostId='0817d68f-8880-4677-83d5-dfcbb12bf90b'}), log id: 2ba0601c
2018-12-19 14:49:21,113+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] FINISH, PrepareImageVDSCommand, return: PrepareImageReturn:{status='Status [code=0, message=Done]'}, log id: 2ba0601c
2018-12-19 14:49:21,116+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetQemuImageInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] START, GetQemuImageInfoVDSCommand(HostName = rhev2, GetVolumeInfoVDSCommandParameters:{hostId='0817d68f-8880-4677-83d5-dfcbb12bf90b', storagePoolId='589b3a5e-0325-01ae-03ba-0000000002cc', storageDomainId='86f315b0-7080-4e05-be9e-b5f6c862600e', imageGroupId='cd9125d4-dbe6-4939-885e-0c3d98b52c8d', imageId='74cca120-7abe-4978-a4a3-82cd83470cb8'}), log id: 7d2834c6
2018-12-19 14:49:21,211+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetQemuImageInfoVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] FINISH, GetQemuImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.QemuImageInfo@9c9776e, log id: 7d2834c6
2018-12-19 14:49:21,213+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] START, TeardownImageVDSCommand(HostName = rhev2, ImageActionsVDSCommandParameters:{hostId='0817d68f-8880-4677-83d5-dfcbb12bf90b'}), log id: 1c9ef41a
2018-12-19 14:49:21,440+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] FINISH, TeardownImageVDSCommand, return: StatusReturn:{status='Status [code=0, message=Done]'}, log id: 1c9ef41a
2018-12-19 14:49:21,651+01 INFO [org.ovirt.engine.core.bll.exportimport.ConvertVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Running command: ConvertVmCommand internal: true. Entities affected : ID: b51e67e8-bf6f-4295-b341-556e5c498191 Type: VM
2018-12-19 14:49:21,653+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.IsoPrefixVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] START, IsoPrefixVDSCommand(HostName = rhev1, VdsAndPoolIDVDSParametersBase:{hostId='613153ce-185a-4129-b4bf-5e711790ed8d', storagePoolId='589b3a5e-0325-01ae-03ba-0000000002cc'}), log id: 52575142
2018-12-19 14:49:21,661+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.IsoPrefixVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] FINISH, IsoPrefixVDSCommand, return: /rhev/data-center/mnt/myszkow.aegonpolska.pl:_nfs_lv__rhev__iso/93983820-9947-4dcb-ba6d-514ee7ccccc1/images/11111111-1111-1111-1111-111111111111, log id: 52575142
2018-12-19 14:49:21,663+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConvertVmVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] START, ConvertVmVDSCommand(HostName = rhev1, ConvertVmVDSParameters:{hostId='613153ce-185a-4129-b4bf-5e711790ed8d', url='qemu+ssh://root@elblag.aegonpolska.pl/system', username='null', vmId='b51e67e8-bf6f-4295-b341-556e5c498191', vmName='vredminedev', storageDomainId='86f315b0-7080-4e05-be9e-b5f6c862600e', storagePoolId='589b3a5e-0325-01ae-03ba-0000000002cc', virtioIsoPath='/rhev/data-center/mnt/myszkow.aegonpolska.pl:_nfs_lv__rhev__iso/93983820-9947-4dcb-ba6d-514ee7ccccc1/images/11111111-1111-1111-1111-111111111111/CentOS-7-x86_64-Minimal-1611.iso', compatVersion='null', Disk0='cd9125d4-dbe6-4939-885e-0c3d98b52c8d'}), log id: 7de2625c
2018-12-19 14:49:21,791+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConvertVmVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-94) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] FINISH, ConvertVmVDSCommand, return: b51e67e8-bf6f-4295-b341-556e5c498191, log id: 7de2625c
2018-12-19 14:49:23,820+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-46) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Command 'ImportVmFromExternalProvider' (id: '6831f041-7294-4a93-b334-09672786d95d') waiting on child command id: '3d026219-a699-44dc-9e9a-57398ca28dd5' type:'ConvertVm' to complete
2018-12-19 14:49:27,845+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-62) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Command 'ImportVmFromExternalProvider' (id: '6831f041-7294-4a93-b334-09672786d95d') waiting on child command id: '3d026219-a699-44dc-9e9a-57398ca28dd5' type:'ConvertVm' to complete
2018-12-19 14:49:35,859+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-76) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Command 'ImportVmFromExternalProvider' (id: '6831f041-7294-4a93-b334-09672786d95d') waiting on child command id: '3d026219-a699-44dc-9e9a-57398ca28dd5' type:'ConvertVm' to complete
2018-12-19 14:49:36,860+01 INFO [org.ovirt.engine.core.bll.exportimport.ConvertVmCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Conversion of VM from external environment failed: Job u'b51e67e8-bf6f-4295-b341-556e5c498191' process failed exit-code: 1
2018-12-19 14:49:37,873+01 ERROR [org.ovirt.engine.core.bll.exportimport.ConvertVmCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-82) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Ending command 'org.ovirt.engine.core.bll.exportimport.ConvertVmCommand' with failure.
2018-12-19 14:49:37,876+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DeleteV2VJobVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-82) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] START, DeleteV2VJobVDSCommand(HostName = rhev1, VdsAndVmIDVDSParametersBase:{hostId='613153ce-185a-4129-b4bf-5e711790ed8d', vmId='b51e67e8-bf6f-4295-b341-556e5c498191'}), log id: 43d8bd56
2018-12-19 14:49:37,880+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DeleteV2VJobVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-82) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] FINISH, DeleteV2VJobVDSCommand, log id: 43d8bd56
2018-12-19 14:49:37,907+01 INFO [org.ovirt.engine.core.bll.SerialChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-82) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Command 'ImportVmFromExternalProvider' id: '6831f041-7294-4a93-b334-09672786d95d' child commands '[23eeaf0f-2b76-41e0-a9d0-57917c1c70ba, 3d026219-a699-44dc-9e9a-57398ca28dd5]' executions were completed, status 'FAILED'
2018-12-19 14:49:38,931+01 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-44) [2aeb6d21-7465-411e-8f97-6dc4307fb4ba] Ending command 'org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand' with failure.

Comment 13 Ryan Barry 2019-01-21 14:53:29 UTC
Re-targeting to 4.3.1 since it is missing a patch, an acked blocker flag, or both

Comment 30 Nisim Simsolo 2019-03-25 08:53:51 UTC
Reassigning this RFE. Currently, using the latest oVirt, it is impossible to import VMs to any storage domain:
it is impossible to import from NFSv4.2 to NFSv4.2, from NFSv4.2 to iSCSI, and from NFSv4.1 to NFSv4.1.
For comparison, it is possible to import a KVM VM using RHV 4.3.

Verification builds:
ovirt-engine-4.4.0-0.0.master.20190323132107.git181b8fe.el7
vdsm-4.40.0-114.git8e34445.el7.x86_64
libvirt-client-4.5.0-10.el7_6.6.x86_64
qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64
sanlock-3.6.0-1.el7.x86_64

vdsm.log and engine.log attached

Comment 31 Nisim Simsolo 2019-03-25 09:04:37 UTC
Created attachment 1547648 [details]
reassigned, engine.log

Comment 32 Nisim Simsolo 2019-03-25 09:06:30 UTC
Created attachment 1547649 [details]
reassigned, vdsm.log

Comment 33 Nir Soffer 2019-03-31 13:55:27 UTC
Note that when converting an image to a block device, you cannot use truncate()
to resize the image, or seek() to skip zero or unallocated areas.

Block device content is not guaranteed to be zero when you create a new LV.
It is likely to contain junk data from a previous user of that LUN on
some storage server, and this is likely to corrupt the file system copied
to the device.

For detecting that an image is on a block device, the simplest way is to use:
https://docs.python.org/2/library/stat.html#stat.S_ISBLK
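
A minimal sketch of that check (the volume path in the comment is a hypothetical example):

    import os
    import stat

    def is_block_device(path):
        # stat.S_ISBLK() reports whether the target is a block device
        # (e.g. an LV on a block domain) rather than a regular file on NFS.
        # os.stat() follows symlinks, so the /rhev/... image path works too.
        return stat.S_ISBLK(os.stat(path).st_mode)

    # is_block_device("/rhev/data-center/mnt/blockSD/<sd>/images/<img>/<vol>")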

To support import to block device you have several options:

1. Zero the entire device before copying the data from libvirt.

We have a wrapper for calling blkdiscard for this; see vdsm.storage.blkdiscard.
After that you can copy the data from libvirt, skipping the unallocated
or zero areas.

The disadvantage is a slower import (up to 100% slower) if zeroing out data is slow
and most of the image is allocated. In that case you pay for zeroing the entire
device, and then again for writing the data over the zeroed areas.
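
As a rough sketch of option 1, calling the util-linux blkdiscard CLI directly rather than the vdsm wrapper (the LV path is hypothetical; --zeroout zero-fills instead of merely discarding):

    import subprocess

    def zero_device(device):
        # Zero-fill the whole device; the kernel offloads this to the
        # storage (WRITE SAME / UNMAP) when supported, and falls back to
        # writing zeroes otherwise.
        subprocess.check_call(["blkdiscard", "--zeroout", device])

    # zero_device("/dev/mapper/<vg>-<lv>")  # hypothetical LV path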

2. Use imageio to upload the data.

You can start an image transfer and upload the data using the imageio random I/O API.
This gives you support for sparseness and any disk type.

For examples of how to perform a transfer, see:
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py

For an example of how to use the imageio random I/O API, see:
http://ovirt.github.io/ovirt-imageio/random-io.html

Since you upload locally, you want to communicate with imageio over a unix socket;
see this for more info:
http://ovirt.github.io/ovirt-imageio/unix-socket.html
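
A minimal sketch of an HTTP connection over a unix socket, which the adapter sketched further below can reuse (the socket path is hypothetical; the real path is published by imageio as described in the link above):

    import socket
    from http import client

    class UnixHTTPConnection(client.HTTPConnection):
        # http.client connection that talks HTTP over a unix socket
        # instead of TCP.
        def __init__(self, sock_path):
            client.HTTPConnection.__init__(self, "localhost")
            self._sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._sock_path)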

For example upload code supporting sparseness and efficient zeroing, see the imageio
client:
https://github.com/oVirt/ovirt-imageio/blob/master/common/ovirt_imageio_common/client.py

You can also use the virt-v2v rhv-upload-plugin code as an example of using the imageio API:
https://github.com/libguestfs/libguestfs/blob/master/v2v/rhv-upload-plugin.py

It would be best to use client.upload(), which does everything for you, but you
are using the libvirt API. I think the best way would be to create an adapter that
lets the libvirt download loop call the proper imageio API to write data or zeros.

For example, if libvirt is doing:

    seek(offset)
    write(data)

You want to send a PUT request with the offset:
https://github.com/oVirt/ovirt-imageio/blob/master/common/ovirt_imageio_common/client.py#L189

When libvirt uses seek() to skip zero areas, you want to send a PATCH/zero request:
https://github.com/oVirt/ovirt-imageio/blob/2a74f0a9b23720c6f218cebda3e09b7f12f7073f/common/ovirt_imageio_common/client.py#L235
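
Putting this together, here is a minimal sketch of such an adapter, assuming an image transfer ticket is already active (the ticket id is hypothetical and error handling is elided):

    import json

    class ImageioAdapter:
        # File-like adapter mapping the seek()/write() pattern above onto
        # imageio's random I/O API. con can be a plain
        # http.client.HTTPConnection or the UnixHTTPConnection sketched
        # earlier.
        def __init__(self, con, ticket_id):
            self._con = con
            self._path = "/images/" + ticket_id
            self._offset = 0

        def seek(self, offset):
            # Only remember the position; skipped (unallocated) ranges are
            # zeroed explicitly with zero() instead of being written.
            self._offset = offset

        def write(self, data):
            # Random write: PUT with a Content-Range header at the offset.
            end = self._offset + len(data) - 1
            self._con.request("PUT", self._path, body=data, headers={
                "Content-Range": "bytes %d-%d/*" % (self._offset, end)})
            res = self._con.getresponse()
            res.read()
            assert res.status == 200, res.reason
            self._offset += len(data)

        def zero(self, count):
            # Zero a range without sending zeroes over the wire: PATCH
            # with a "zero" operation.
            body = json.dumps({"op": "zero", "offset": self._offset,
                               "size": count}).encode("utf-8")
            self._con.request("PATCH", self._path, body=body,
                              headers={"Content-Type": "application/json"})
            res = self._con.getresponse()
            res.read()
            assert res.status == 200, res.reason
            self._offset += count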

Using imageio NBD support in 4.3, you will also be able to upload raw data to
a qcow2 image if the user wants to import a VM to a thin disk on block storage.
This feature is expected to land in 4.3.z.

3. Use qemu-img convert

If libvirt can support one of the protocols supported by qemu-img, like NBD, you
can convert the image directly using qemu-img convert.
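
A hedged sketch of option 3, assuming the source disk is exposed over NBD (the NBD URL and LV path are hypothetical):

    import subprocess

    # qemu-img reads the source over NBD and writes the target format
    # directly; -O qcow2 would suit a thin disk on block storage, -O raw
    # a preallocated one.
    subprocess.check_call([
        "qemu-img", "convert", "-p",
        "-f", "raw",                       # source format
        "-O", "qcow2",                     # target format
        "nbd://src-host:10809/vm-disk",    # hypothetical NBD export
        "/dev/<vg>/<lv>",                  # hypothetical target LV
    ])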

Comment 34 Ryan Barry 2019-04-02 11:53:47 UTC
Imports are unblocked. Moving the state back

Comment 35 Nisim Simsolo 2019-04-04 11:06:45 UTC
Reassigning the bug. None of the imports to a block device is working.

1. Import from KVM block device to RHV block device failed with:
2019-04-04 13:57:13,342+03 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engine-Thread-26966) [1afc8f26-c273-46d6
-9c6e-6c28c1830273] Exception: org.springframework.dao.DuplicateKeyException: CallableStatementCallback; SQL [{call insertvmstatic(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?,
 ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 
?)}ERROR: duplicate key value violates unique constraint "pk_vm_static"
  Detail: Key (vm_guid)=(6c6a2287-c1fd-42b3-95c0-4d6220ec35ee) already exists.
  Where: SQL statement "INSERT INTO vm_static(description,
                      free_text_comment,
                      mem_size_mb,
                      max_memory_size_mb,
                      num_of_io_threads,
.
.
.

2. Import from KVM NFSv4.1 to RHV block device failed with:
2019-04-04 14:00:34,031+03 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engine-Thread-26994) [c0fc6cdf-a53b-43be
-b3e9-3449b23f2062] Command 'org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand' failed: EngineException: Failed to create disk! (Failed with error ENG
INE and code 5001)

3. Import from KVM NFSv4.2 to RHV block device also failed with "EngineException: Failed to create disk! (Failed with error ENGINE and code 5001)"

Verification builds:
vdsm-4.40.0-138.git00b6143.el7.x86_64
sanlock-3.6.0-1.el7.x86_64
libvirt-client-4.5.0-10.el7_6.6.x86_64
qemu-kvm-ev-2.12.0-18.el7_6.3.1.x86_64

Engine.log attached

Comment 36 Nisim Simsolo 2019-04-04 11:08:53 UTC
Created attachment 1551801 [details]
reassigned engine.log Apr 4 2019

Comment 52 Ryan Barry 2019-05-01 11:29:25 UTC
Per comment #24, let's use DiskFormat.RAW.

Comment 53 RHV bug bot 2019-08-02 17:21:23 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops@redhat.com

Comment 54 RHV bug bot 2019-08-08 13:17:47 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops@redhat.com

Comment 55 RHV bug bot 2019-08-15 14:05:14 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{}', ]

For more info please contact: rhv-devops@redhat.com

Comment 56 RHV bug bot 2019-09-05 13:34:28 UTC
WARN: Bug status wasn't changed from MODIFIED to ON_QA due to the following reason:

[Found non-acked flags: '{'rhevm-4.3.z': '?'}', ]

For more info please contact: rhv-devops@redhat.com

Comment 57 Nisim Simsolo 2019-09-10 14:48:33 UTC
Reassigning: importing a VM from KVM with NFS (v4.1 or v4.2) to an RHV block SD failed with the following in engine.log:
2019-09-10 17:37:15,961+03 ERROR [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand] (EE-ManagedThreadFactory-engine-Thread-3160) [c99ae0bf-a949-4d9f-9200-f5526b17958b] Command 'org.ovirt.engine.
core.bll.exportimport.ImportVmFromExternalProviderCommand' failed: EngineException: Failed to create disk! (Failed with error ENGINE and code 5001)

The other import variations work as expected:
KVM block SD to RHV block SD
KVM NFSv4.1 to RHV NFSv4.1
KVM NFSv4.2 to RHV NFSv4.1
KVM block SD to RHV NFSv4.1

Reassigning builds:
rhvm-4.3.6.5-0.1.el7
vdsm-4.30.30-1.el7ev.x86_64
libvirt-4.5.0-23.el7_7.1.x86_64
qemu-kvm-rhev-2.12.0-33.el7_7.3.x86_64
sanlock-3.7.3-1.el7.x86_64

vdsm.log, engine.log and supervdsm.log attached

Comment 58 Nisim Simsolo 2019-09-10 14:50:24 UTC
Created attachment 1613624 [details]
vdsm.log

Comment 59 Nisim Simsolo 2019-09-10 14:51:01 UTC
Created attachment 1613625 [details]
engine.log

Comment 60 Nisim Simsolo 2019-09-10 14:52:05 UTC
Created attachment 1613626 [details]
supervdsm.log

Comment 63 Steven Rosenberg 2019-10-10 15:28:48 UTC
Please retest. There are many errors in the engine log implying registration errors, networking errors, and incompatible-host errors that should not be related specifically to this issue (block storage), such as:

1. “Cannot upload enabled repos report, is this client registered?”

2. “VDS_BROKER_COMMAND_FAILURE(10,802), VDSM amd-vfio.tlv.redhat.com command SpmStatusVDS failed: Broken pipe”

3. “Invalid status on Data Center Default. Setting Data Center status to Non Responsive (On host amd-vfio.tlv.redhat.com, Error: Network error during communication with the Host.).”

4. “Can not run fence action on host 'amd-vfio.tlv.redhat.com', no suitable proxy host was found.”

Comment 65 Steven Rosenberg 2019-10-22 16:47:47 UTC
(In reply to Nisim Simsolo from comment #57)
> Reassigning, import VM from KVM with NFS (v4.1 or v4.2) to RHV block SD
> failed with the next engine.log:
> 2019-09-10 17:37:15,961+03 ERROR
> [org.ovirt.engine.core.bll.exportimport.ImportVmFromExternalProviderCommand]
> (EE-ManagedThreadFactory-engine-Thread-3160)
> [c99ae0bf-a949-4d9f-9200-f5526b17958b] Command 'org.ovirt.engine.
> core.bll.exportimport.ImportVmFromExternalProviderCommand' failed:
> EngineException: Failed to create disk! (Failed with error ENGINE and code
> 5001)
> 
> other of the import variations are working as expected:
> KVM block SD to RHV block SD
> KVM  NFSv4.1 to RHV NFSv4.1
> KVM NFSv4.2 to RHV NFSv4.1
> KVM block SD to RHV NFSv4.1
> 
> Reassigning builds:
> rhvm-4.3.6.5-0.1.el7
> vdsm-4.30.30-1.el7ev.x86_64
> libvirt-4.5.0-23.el7_7.1.x86_64
> qemu-kvm-rhev-2.12.0-33.el7_7.3.x86_64
> sanlock-3.7.3-1.el7.x86_64
> 
> vdsm.log, engine.log and supervdsm.log attached

From the testing, it seems the combination that fails is NFS to block storage. Reviewing this issue, I see that the failure in the log is here:

Validation of action 'AddDisk' failed for user admin@internal-authz. Reasons: VAR__ACTION__ADD,VAR__TYPE__DISK,ACTION_TYPE_FAILED_DISK_CONFIGURATION_NOT_SUPPORTED,$volumeFormat COW,$volumeType Preallocated
2019-09-10 16:15:28,979+03 INFO  [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedThreadFactory-engine-Thread-2452) [b17ebc88-4d70-4ece-8a2d-267da5d70096] Lock freed to object 'EngineLock:{exclusiveLocks='[4be7996e-125a-4895-af03-afd3efa0d753=VM_DISK_BOOT]', sharedLocks=''}'

This fails because the Volume Type (Allocation Policy) is Preallocated and the Volume Format is COW.

The logic is in the ImageHandler.java code in the checkImageConfiguration() function.

According to the documentation, NFS and SAN only support Sparse for COW [1].

See the chart: Table 10.1. Permitted Storage Combinations.

I did turn off the check in the checkImageConfiguration() function and the import did succeed, but much more testing would be needed to ensure we do not have side effects from all of the combinations (including import types such as VMware, KVM, etc.), and the documentation would need to change accordingly. Also, flavors such as OVA hard-code the Volume Type to Sparse even when the user chose Preallocated, which may be another design issue.

Another factor is the disk backup and why we are considering that. 

The complete function is here:

    public static boolean checkImageConfiguration(StorageDomainStatic storageDomain, VolumeType volumeType, VolumeFormat volumeFormat, DiskBackup diskBackup) {
        // Invalid combinations: preallocated COW without incremental backup,
        // sparse RAW on a block domain, or an unassigned format/type.
        return !((volumeType == VolumeType.Preallocated && volumeFormat == VolumeFormat.COW && diskBackup != DiskBackup.Incremental)
                || (storageDomain.getStorageType().isBlockDomain() && volumeType == VolumeType.Sparse && volumeFormat == VolumeFormat.RAW)
                || volumeFormat == VolumeFormat.Unassigned
                || volumeType == VolumeType.Unassigned);
    }


Basically, this says that any one of these combinations will cause the function to fail:

1. Preallocated Volume Type, COW Volume Format and Non-Incremental Disk Backup
2. Block Storage, Sparse Volume Type, Raw Volume Format

The testing fails on item 1 in the scenario in question.

Please advise accordingly.

[1] https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html-single/administration_guide/index#Understanding_virtual_disks

Comment 66 Ryan Barry 2019-10-22 17:10:50 UTC
Well, it's an RFE, so we'd expect the functionality to change, including docs and additional testing.

For this RFE, I'd worry less about OVA imports and other cases, and mostly about v2v imports.

Comment 67 Steven Rosenberg 2019-10-23 12:10:34 UTC
(In reply to Ryan Barry from comment #66)
> Well, it's an RFE, so we'd expect the functionality to change, including
> docs and additional testing.
> 
> For this RFE, I'd worry less about OVA imports and other cases, and all
> about v2v imports.

I removed the check that prevents Volume Type Preallocated with Format COW within the engine. Though this succeeded for a v2v KVM import with an NFS target and for an OVA import to block storage (modifying the Volume Type to be Preallocated), the KVM v2v import fails when the target is block storage (Volume Type Preallocated, Format COW).

It fails in the kvm2ovirt.py module with the following in the log:


[root@sla-sheldon ~]# tail -f /var/log/vdsm/import/import-f8e450d5-7c1f-4149-8925-c8cde4d88a61-20191023T143416.log
[    0.5] preparing for copy
[    0.5] Copying disk 1/1 to /rhev/data-center/mnt/blockSD/86ba27ee-ab8c-41a4-b00e-0f13e658cc2c/images/ce08f494-a6e4-4302-a9ab-0945c0f2d5e0/b731ccd2-0420-492d-900c-19c8dbcc51ce
Traceback (most recent call last):
  File "/usr/libexec/vdsm/kvm2ovirt", line 23, in <module>
    kvm2ovirt.main()
  File "/usr/lib/python2.7/site-packages/vdsm/kvm2ovirt.py", line 255, in main
    handle_volume(con, diskno, src, dst, options)
  File "/usr/lib/python2.7/site-packages/vdsm/kvm2ovirt.py", line 206, in handle_volume
    download_disk(sr, estimated_size, None, dst, options.bufsize)
  File "/usr/lib/python2.7/site-packages/vdsm/kvm2ovirt.py", line 151, in download_disk
    op.run()
  File "/usr/lib64/python2.7/site-packages/ovirt_imageio_common/ops.py", line 62, in run
    self._run()
  File "/usr/lib64/python2.7/site-packages/ovirt_imageio_common/ops.py", line 144, in _run
    self._receive_chunk(buf, count)
  File "/usr/lib64/python2.7/site-packages/ovirt_imageio_common/ops.py", line 169, in _receive_chunk
    written = self._dst.write(wbuf)
  File "/usr/lib64/python2.7/site-packages/ovirt_imageio_common/backends/file.py", line 86, in write
    return util.uninterruptible(self._fio.write, buf)
  File "/usr/lib64/python2.7/site-packages/ovirt_imageio_common/util.py", line 25, in uninterruptible
    return func(*args)
IOError: [Errno 28] No space left on device



It is not clear whether the error "No space left on device" is valid, given that the block storage has 74 GiB free and the VM has an actual size of 8 GiB (virtual size 20 GiB).

The failure is in ovirt_imageio_common. Maybe Nir can advise on what the actual issue is (based upon annotations of imageio). The host's root filesystem has:

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/vg0-lv_root  435G   48G  366G  12% /


Either it is using separate intermediate storage that needs to be increased, or the error may be erroneous.
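
One data point that may help: the destination in the traceback is under /rhev/data-center/mnt/blockSD/..., which resolves to an LV, and write() past the end of a block device fails with ENOSPC regardless of how much free space the host's root filesystem or the storage domain has. So the 74 GiB free on the domain and the 366 GiB on / may both be irrelevant if the LV created for the disk is smaller than the amount of data being written. A quick sketch for checking the actual device size (the image path is hypothetical):

    import fcntl
    import os
    import struct

    BLKGETSIZE64 = 0x80081272  # <linux/fs.h>: block device size in bytes

    def device_size(path):
        # os.open() follows the /rhev/... symlink to the underlying LV.
        fd = os.open(path, os.O_RDONLY)
        try:
            buf = fcntl.ioctl(fd, BLKGETSIZE64, struct.pack("Q", 0))
            return struct.unpack("Q", buf)[0]
        finally:
            os.close(fd)

    # device_size("/rhev/data-center/mnt/blockSD/<sd>/images/<img>/<vol>")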

