Bug 1749234

Summary: Can't import guest from export domain to data domain on RHV 4.3 due to error "Invalid parameter: 'DiskType=1'"
Product: Red Hat Enterprise Linux Advanced Virtualization
Reporter: mxie <mxie>
Component: libguestfs
Assignee: Richard W.M. Jones <rjones>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: high
Priority: high
Version: 8.1
CC: juzhou, knoel, mzhan, ptoscano, rjones, tzheng, xiaodwan, zili
Target Milestone: rc
Target Release: 8.1
Hardware: x86_64
OS: Unspecified
Whiteboard: V2V
Fixed In Version: libguestfs-1.40.2-14.module+el8.1.0+4230+0b6e3259
Doc Type: If docs needed, set a value
Clone Of: 1746699
Last Closed: 2019-11-06 07:19:21 UTC
Type: Bug
Bug Depends On: 1746699    
Attachments: import successfully

Description mxie@redhat.com 2019-09-05 08:00:41 UTC
+++ This bug was initially created as a clone of Bug #1746699 +++

Description of problem:
Can't import a guest from the export domain to a data domain on RHV 4.3 due to the error "Invalid parameter: 'DiskType=1'"

Version-Release number of selected component (if applicable):
vdsm-4.30.27-1.el7ev.x86_64
RHV:4.3.4.3-0.1.el7

Steps to Reproduce:
1. Convert a guest from VMware to RHV's export domain with virt-v2v:
# virt-v2v -i ova esx6_7-rhel7.7-x86_64 -o rhv -os 10.73.224.199:/home/p2v_export -of qcow2 -b ovirtmgmt
[   0.0] Opening the source -i ova esx6_7-rhel7.7-x86_64
[   8.7] Creating an overlay to protect the source from being modified
[   8.9] Opening the overlay
[  13.2] Inspecting the overlay
[  37.9] Checking for sufficient free disk space in the guest
[  37.9] Estimating space required on target for each disk
[  37.9] Converting Red Hat Enterprise Linux Server 7.7 Beta (Maipo) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el7’ is missing from 
the virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can 
happen if you are using the version of virtio-win which contains just the 
virtio drivers.  In this case only virtio drivers can be installed in the 
guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 184.2] Mapping filesystem data to avoid copying unused and blank areas
[ 184.9] Closing the overlay
[ 185.0] Assigning disks to buses
[ 185.0] Checking if the guest needs BIOS or UEFI to boot
[ 185.0] Initializing the target -o rhv -os 10.73.224.199:/home/p2v_export
[ 185.4] Copying disk 1/2 to /tmp/v2v.43WPcK/e7cd32d9-6b7d-4be9-ad0f-3fb7cfeeea3b/images/c2b64a63-85ca-402f-a775-391849776152/4344f61d-5a07-45ec-a3c6-e0b5041f9b8e (qcow2)
    (100.00/100%)
[ 438.6] Copying disk 2/2 to /tmp/v2v.43WPcK/e7cd32d9-6b7d-4be9-ad0f-3fb7cfeeea3b/images/0569bfe8-3857-4997-9c06-93248e809ab3/e8f2ad4d-adc4-4d10-bd46-92cd545e1b12 (qcow2)
    (100.00/100%)
[ 439.4] Creating output metadata
[ 439.5] Finishing off


2. Try to import the guest from the export domain to the data domain; the import fails with the error below.

VDSM p2v command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: (u"Destination volume 4344f61d-5a07-45ec-a3c6-e0b5041f9b8e error: Invalid parameter: 'DiskType=1'",)

Additional info:
Can't reproduce the bug with vdsm-4.30.12-1.el7ev.x86_64

--- Additional comment from RHEL Product and Program Management on 2019-08-29 06:52:53 UTC ---

This bug report has Keywords: Regression or TestBlocker.

Since no regressions or test blockers are allowed between releases, it is also being identified as a blocker for this release.

Please resolve ASAP.

--- Additional comment from RHEL Product and Program Management on 2019-08-29 06:52:53 UTC ---

This request has been proposed as a blocker, but a release flag has not been requested. Please set a release flag to ? to ensure we may track this bug against the appropriate upcoming release, and reset the blocker flag to ?.

--- Additional comment from Pawan kumar Vilayatkar on 2019-08-30 16:53:14 UTC ---

Hello Team,

I have a case where a customer, when trying to create a VM from a snapshot, hits the error "InvalidParameterException:  ".
The customer has a backup script that creates a snapshot for backup and then creates a VM from that snapshot.

The command "AddVmFromSnapshot" fails with the error message "Invalid parameter: 'DiskType=1'" while copying the source image to the destination image. 

engine=> select log_time, correlation_id, vm_name, vm_id, message from audit_log where vm_name = 'vmssbkup_cs-ldap-03_20190828_004334' order by log_time desc;
          log_time          |            correlation_id            |               vm_name               |                vm_id                 |                                     message                                      
----------------------------+--------------------------------------+-------------------------------------+--------------------------------------+----------------------------------------------------------------------------------
 2019-08-28 04:43:58.947+00 | 8083a5f8-54f7-496d-ac96-bbc108fd2546 | vmssbkup_cs-ldap-03_20190828_004334 | a34a3427-983c-4465-bfef-67fbe0da0808 | Failed to complete VM vmssbkup_cs-ldap-03_20190828_004334 creation.
 2019-08-28 04:43:56.384+00 | 8083a5f8-54f7-496d-ac96-bbc108fd2546 | vmssbkup_cs-ldap-03_20190828_004334 | a34a3427-983c-4465-bfef-67fbe0da0808 | VM vmssbkup_cs-ldap-03_20190828_004334 creation was initiated by admin@internal.


engine=> select image_guid, active, size, image_group_id, parentid,  imagestatus from images where image_group_id = '96f22485-6755-458a-b284-7b62469d42ba';
              image_guid              | active |    size     |            image_group_id            |               parentid               | imagestatus 
--------------------------------------+--------+-------------+--------------------------------------+--------------------------------------+-------------
 1f15a960-99c0-4130-8883-d3443bbdd988 | f      | 53687091200 | 96f22485-6755-458a-b284-7b62469d42ba | 00000000-0000-0000-0000-000000000000 |           1
 a1980993-5a7f-495a-90c1-9574c1ea29a1 | t      | 53687091200 | 96f22485-6755-458a-b284-7b62469d42ba | 1f15a960-99c0-4130-8883-d3443bbdd988 |           1



# cat /rhev/data-center/mnt/glusterSD/cs-fs1.bu.edu:_vm/1f48f887-dd49-4363-9e5c-603c007a9baf/images/96f22485-6755-458a-b284-7b62469d42ba/1f15a960-99c0-4130-8883-d3443bbdd988.meta
CAP=53687091200
CTIME=1455743703
DESCRIPTION=_ActiveImage_cs-ldap-03_Thu Aug 16 11:59:29 EDT 2012
DISKTYPE=1
DOMAIN=1f48f887-dd49-4363-9e5c-603c007a9baf
FORMAT=RAW
GEN=0
IMAGE=96f22485-6755-458a-b284-7b62469d42ba
LEGALITY=LEGAL
PUUID=00000000-0000-0000-0000-000000000000
TYPE=SPARSE
VOLTYPE=INTERNAL
EOF
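
The .meta file above is a plain KEY=VALUE text file terminated by a literal
EOF line, with DISKTYPE as one of the keys. Purely for illustration, a minimal
sketch of reading it (hypothetical Python helper, not part of vdsm or
virt-v2v):

def parse_meta(text):
    # Each line is KEY=VALUE; a literal "EOF" line terminates the file.
    meta = {}
    for line in text.splitlines():
        if line.strip() == "EOF":
            break
        key, _, value = line.partition("=")
        meta[key] = value
    return meta

# For the volume above, parse_meta(open(path).read())["DISKTYPE"] == "1",
# which is the value the 4.3 validation rejects.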

>>> engine.log
2019-08-28 00:43:55,447-04 INFO  [org.ovirt.engine.core.bll.AddVmFromSnapshotCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] Lock Acquired to object 'EngineLock:{exclusiveLocks='[vmssbkup_cs-ldap-03_20190828_004334=VM_NAME, ef795de6-6638-4c42-bdaa-cf8653b9f5dd=VM]', sharedLocks=''}'
2019-08-28 00:43:55,632-04 INFO  [org.ovirt.engine.core.bll.AddVmFromSnapshotCommand] (default task-66) [] Running command: AddVmFromSnapshotCommand internal: false. Entities affected :  ID: d20f1baf-958e-482b-bea2-c19fc0e6528f Type: ClusterAction group CREATE_VM with role type USER,  ID: 1f48f887-dd49-4363-9e5c-603c007a9baf Type: StorageAction group CREATE_DISK with role type USER,  ID: ef795de6-6638-4c42-bdaa-cf8653b9f5dd Type: VMAction group CREATE_VM with role type USER
2019-08-28 00:43:55,690-04 INFO  [org.ovirt.engine.core.bll.AddVmFromSnapshotCommand] (default task-66) [] Locking VM(id = 'a34a3427-983c-4465-bfef-67fbe0da0808') with compensation.
2019-08-28 00:43:55,691-04 INFO  [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (default task-66) [] START, SetVmStatusVDSCommand( SetVmStatusVDSCommandParameters:{vmId='a34a3427-983c-4465-bfef-67fbe0da0808', status='ImageLocked', exitStatus='Normal'}), log id: 47b3e53e
2019-08-28 00:43:55,697-04 INFO  [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (default task-66) [] FINISH, SetVmStatusVDSCommand, return: , log id: 47b3e53e
2019-08-28 00:43:55,705-04 INFO  [org.ovirt.engine.core.bll.AddVmFromSnapshotCommand] (default task-66) [] Lock freed to object 'EngineLock:{exclusiveLocks='[vmssbkup_cs-ldap-03_20190828_004334=VM_NAME, ef795de6-6638-4c42-bdaa-cf8653b9f5dd=VM]', sharedLocks=''}'
2019-08-28 00:43:55,730-04 INFO  [org.ovirt.engine.core.bll.storage.disk.image.CopyImageGroupCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] Running command: CopyImageGroupCommand internal: true. Entities affected :  ID: 1f48f887-dd49-4363-9e5c-603c007a9baf Type: Storage
2019-08-28 00:43:55,782-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] START, CopyImageVDSCommand( CopyImageVDSCommandParameters:{storagePoolId='941567b0-8a4e-11e1-aff8-777f93db9152', ignoreFailoverLimit='false', storageDomainId='1f48f887-dd49-4363-9e5c-603c007a9baf', imageGroupId='96f22485-6755-458a-b284-7b62469d42ba', imageId='1f15a960-99c0-4130-8883-d3443bbdd988', dstImageGroupId='b6dc0afb-33c7-4a83-b51b-16825b78a73b', vmId='a34a3427-983c-4465-bfef-67fbe0da0808', dstImageId='36cb62b7-8cfb-4304-b56a-c9557e540dd5', imageDescription='', dstStorageDomainId='1f48f887-dd49-4363-9e5c-603c007a9baf', copyVolumeType='LeafVol', volumeFormat='RAW', preallocate='Sparse', postZero='false', discard='false', force='false'}), log id: 6519f1b6
2019-08-28 00:43:55,782-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] -- executeIrsBrokerCommand: calling 'copyImage' with two new parameters: description and UUID. Parameters:
2019-08-28 00:43:55,782-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] ++ sdUUID=1f48f887-dd49-4363-9e5c-603c007a9baf
2019-08-28 00:43:55,782-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] ++ spUUID=941567b0-8a4e-11e1-aff8-777f93db9152
2019-08-28 00:43:55,782-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] ++ vmGUID=a34a3427-983c-4465-bfef-67fbe0da0808
2019-08-28 00:43:55,782-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] ++ srcImageGUID=96f22485-6755-458a-b284-7b62469d42ba
2019-08-28 00:43:55,782-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] ++ srcVolUUID=1f15a960-99c0-4130-8883-d3443bbdd988
2019-08-28 00:43:55,782-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] ++ dstImageGUID=b6dc0afb-33c7-4a83-b51b-16825b78a73b
2019-08-28 00:43:55,782-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] ++ dstVolUUID=36cb62b7-8cfb-4304-b56a-c9557e540dd5
2019-08-28 00:43:55,782-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] ++ descr=
2019-08-28 00:43:55,782-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] ++ dstSdUUID=1f48f887-dd49-4363-9e5c-603c007a9baf
2019-08-28 00:43:56,296-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] FINISH, CopyImageVDSCommand, return: bacc68e6-1957-41b6-b3d1-66cf2205bed0, log id: 6519f1b6
2019-08-28 00:43:56,302-04 INFO  [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'c7f332a3-ab86-413e-89bf-3a0398752cae'
2019-08-28 00:43:56,302-04 INFO  [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] CommandMultiAsyncTasks::attachTask: Attaching task 'bacc68e6-1957-41b6-b3d1-66cf2205bed0' to command 'c7f332a3-ab86-413e-89bf-3a0398752cae'.
2019-08-28 00:43:56,318-04 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] Adding task 'bacc68e6-1957-41b6-b3d1-66cf2205bed0' (Parent Command 'AddVmFromSnapshot', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2019-08-28 00:43:56,390-04 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] EVENT_ID: USER_ADD_VM_STARTED(37), VM vmssbkup_cs-ldap-03_20190828_004334 creation was initiated by admin@internal.
2019-08-28 00:43:56,391-04 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-66) [8083a5f8-54f7-496d-ac96-bbc108fd2546] BaseAsyncTask::startPollingTask: Starting to poll task 'bacc68e6-1957-41b6-b3d1-66cf2205bed0'.
2019-08-28 00:43:58,508-04 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DumpXmlsVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-61) [] START, DumpXmlsVDSCommand(HostName = cs-virt1, Params:{hostId='4653ba39-cb4d-4c35-9caa-0fa38b583792', vmIds='[ef795de6-6638-4c42-bdaa-cf8653b9f5dd]'}), log id: 2a22d12c
2019-08-28 00:43:58,637-04 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engineScheduled-Thread-3) [] Polling and updating Async Tasks: 5 tasks, 1 tasks to poll now
2019-08-28 00:43:58,642-04 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-3) [] Failed in 'HSMGetAllTasksStatusesVDS' method
2019-08-28 00:43:58,648-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-3) [] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM cs-virt1 command HSMGetAllTasksStatusesVDS failed: low level Image copy failed: (u"Destination volume 36cb62b7-8cfb-4304-b56a-c9557e540dd5 error: Invalid parameter: 'DiskType=1'",)
2019-08-28 00:43:58,648-04 INFO  [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-3) [] SPMAsyncTask::PollTask: Polling task 'bacc68e6-1957-41b6-b3d1-66cf2205bed0' (Parent Command 'AddVmFromSnapshot', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'cleanSuccess'.
2019-08-28 00:43:58,652-04 ERROR [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (EE-ManagedThreadFactory-engineScheduled-Thread-3) [] BaseAsyncTask::logEndTaskFailure: Task 'bacc68e6-1957-41b6-b3d1-66cf2205bed0' (Parent Command 'AddVmFromSnapshot', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended with failure:
-- Result: 'cleanSuccess'
-- Message: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = low level Image copy failed: (u"Destination volume 36cb62b7-8cfb-4304-b56a-c9557e540dd5 error: Invalid parameter: 'DiskType=1'",), code = 261',
-- Exception: 'VDSGenericException: VDSErrorException: Failed to HSMGetAllTasksStatusesVDS, error = low level Image copy failed: (u"Destination volume 36cb62b7-8cfb-4304-b56a-c9557e540dd5 error: Invalid parameter: 'DiskType=1'",), code = 261'
2019-08-28 00:43:58,741-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engine-Thread-50527) [8083a5f8-54f7-496d-ac96-bbc108fd2546] START, DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='941567b0-8a4e-11e1-aff8-777f93db9152', ignoreFailoverLimit='false', storageDomainId='1f48f887-dd49-4363-9e5c-603c007a9baf', imageGroupId='b6dc0afb-33c7-4a83-b51b-16825b78a73b', postZeros='false', discard='false', forceDelete='false'}), log id: 2b8cd942
2019-08-28 00:43:58,856-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-50527) [8083a5f8-54f7-496d-ac96-bbc108fd2546] EVENT_ID: IRS_BROKER_COMMAND_FAILURE(10,803), VDSM command DeleteImageGroupVDS failed: Image does not exist in domain: u'image=b6dc0afb-33c7-4a83-b51b-16825b78a73b, domain=1f48f887-dd49-4363-9e5c-603c007a9baf'
2019-08-28 00:43:58,857-04 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engine-Thread-50527) [8083a5f8-54f7-496d-ac96-bbc108fd2546] Command 'DeleteImageGroupVDSCommand( DeleteImageGroupVDSCommandParameters:{storagePoolId='941567b0-8a4e-11e1-aff8-777f93db9152', ignoreFailoverLimit='false', storageDomainId='1f48f887-dd49-4363-9e5c-603c007a9baf', imageGroupId='b6dc0afb-33c7-4a83-b51b-16825b78a73b', postZeros='false', discard='false', forceDelete='false'})' execution failed: IRSGenericException: IRSErrorException: Image does not exist in domain: u'image=b6dc0afb-33c7-4a83-b51b-16825b78a73b, domain=1f48f887-dd49-4363-9e5c-603c007a9baf'
2019-08-28 00:43:58,857-04 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.DeleteImageGroupVDSCommand] (EE-ManagedThreadFactory-engine-Thread-50527) [8083a5f8-54f7-496d-ac96-bbc108fd2546] FINISH, DeleteImageGroupVDSCommand, return: , log id: 2b8cd942
2019-08-28 00:43:58,857-04 INFO  [org.ovirt.engine.core.bll.storage.disk.image.RemoveImageCommand] (EE-ManagedThreadFactory-engine-Thread-50527) [8083a5f8-54f7-496d-ac96-bbc108fd2546] Disk 'b6dc0afb-33c7-4a83-b51b-16825b78a73b' doesn't exist on storage domain '1f48f887-dd49-4363-9e5c-603c007a9baf', rolling forward
2019-08-28 00:43:58,926-04 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (EE-ManagedThreadFactory-engine-Thread-50527) [8083a5f8-54f7-496d-ac96-bbc108fd2546] Removed task '738fd2a2-b5ae-4d75-b436-1d5acfc95be9' from DataBase
2019-08-28 00:43:58,953-04 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-50527) [] EVENT_ID: USER_ADD_VM_FINISHED_FAILURE(60), Failed to complete VM vmssbkup_cs-ldap-03_20190828_004334 creation.



>>> vdsm.log.9.xz
2019-08-28 00:43:55,784-0400 INFO  (jsonrpc/7) [vdsm.api] START copyImage(sdUUID=u'1f48f887-dd49-4363-9e5c-603c007a9baf', spUUID=u'941567b0-8a4e-11e1-aff8-777f93db9152', vmUUID='', srcImgUUID=u'96f22485-6755-458a-b284-7b62469d42ba', srcVolUUID=u'1f15a960-99c0-4130-8883-d3443bbdd988', dstImgUUID=u'b6dc0afb-33c7-4a83-b51b-16825b78a73b', dstVolUUID=u'36cb62b7-8cfb-4304-b56a-c9557e540dd5', description=u'', dstSdUUID=u'1f48f887-dd49-4363-9e5c-603c007a9baf', volType=8, volFormat=5, preallocate=2, postZero=u'false', force=u'false', discard=False) from=::ffff:128.197.11.212,44002, flow_id=8083a5f8-54f7-496d-ac96-bbc108fd2546, task_id=bacc68e6-1957-41b6-b3d1-66cf2205bed0 (api:48)
2019-08-28 00:43:55,794-0400 INFO  (jsonrpc/7) [storage.Image] image 96f22485-6755-458a-b284-7b62469d42ba in domain 1f48f887-dd49-4363-9e5c-603c007a9baf has vollist [u'1f15a960-99c0-4130-8883-d3443bbdd988', u'a1980993-5a7f-495a-90c1-9574c1ea29a1'] (image:298)
2019-08-28 00:43:55,818-0400 INFO  (jsonrpc/7) [storage.Image] Current chain=1f15a960-99c0-4130-8883-d3443bbdd988 < a1980993-5a7f-495a-90c1-9574c1ea29a1 (top)  (image:687)
2019-08-28 00:43:55,894-0400 INFO  (jsonrpc/7) [vdsm.api] FINISH copyImage return=None from=::ffff:128.197.11.212,44002, flow_id=8083a5f8-54f7-496d-ac96-bbc108fd2546, task_id=bacc68e6-1957-41b6-b3d1-66cf2205bed0 (api:54)
2019-08-28 00:43:56,394-0400 INFO  (tasks/8) [storage.Image] sdUUID=1f48f887-dd49-4363-9e5c-603c007a9baf vmUUID= srcImgUUID=96f22485-6755-458a-b284-7b62469d42ba srcVolUUID=1f15a960-99c0-4130-8883-d3443bbdd988 dstImgUUID=b6dc0afb-33c7-4a83-b51b-16825b78a73b dstVolUUID=36cb62b7-8cfb-4304-b56a-c9557e540dd5 dstSdUUID=1f48f887-dd49-4363-9e5c-603c007a9baf volType=8 volFormat=RAW preallocate=SPARSE force=False postZero=False discard=False (image:709)
2019-08-28 00:43:56,395-0400 INFO  (tasks/8) [storage.VolumeManifest] Volume: preparing volume 1f48f887-dd49-4363-9e5c-603c007a9baf/1f15a960-99c0-4130-8883-d3443bbdd988 (volume:590)
2019-08-28 00:43:56,482-0400 INFO  (tasks/8) [storage.Image] Copy source 1f48f887-dd49-4363-9e5c-603c007a9baf:96f22485-6755-458a-b284-7b62469d42ba:1f15a960-99c0-4130-8883-d3443bbdd988 to destination 1f48f887-dd49-4363-9e5c-603c007a9baf:b6dc0afb-33c7-4a83-b51b-16825b78a73b:36cb62b7-8cfb-4304-b56a-c9557e540dd5 size=104857600 blocks, initial size=None blocks (image:763)
2019-08-28 00:43:56,483-0400 INFO  (tasks/8) [storage.Image] image b6dc0afb-33c7-4a83-b51b-16825b78a73b in domain 1f48f887-dd49-4363-9e5c-603c007a9baf has vollist [] (image:298)
2019-08-28 00:43:56,483-0400 ERROR (tasks/8) [storage.Image] Unexpected error (image:797)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 788, in copyCollapsed
    initialSize=initialSizeBlk)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 912, in createVolume
    initialSize=initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1154, in create
    volFormat, srcVolUUID, diskType=diskType, preallocate=preallocate)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 902, in validateCreateVolumeParams
    volFormat, srcVolUUID, diskType=diskType, preallocate=preallocate)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 632, in validateCreateVolumeParams
    raise se.InvalidParameterException("DiskType", diskType)
InvalidParameterException: Invalid parameter: 'DiskType=1'
2019-08-28 00:43:56,484-0400 ERROR (tasks/8) [storage.TaskManager.Task] (Task='bacc68e6-1957-41b6-b3d1-66cf2205bed0') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1622, in copyImage
    postZero, force, discard)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 799, in copyCollapsed
    (dstVolUUID, str(e)))
CopyImageError: low level Image copy failed: (u"Destination volume 36cb62b7-8cfb-4304-b56a-c9557e540dd5 error: Invalid parameter: 'DiskType=1'",)

2019-08-28 00:43:58,849-0400 INFO  (jsonrpc/3) [vdsm.api] FINISH deleteImage error=Image does not exist in domain: u'image=b6dc0afb-33c7-4a83-b51b-16825b78a73b, domain=1f48f887-dd49-4363-9e5c-603c007a9baf' from=::ffff:128.197.11.212,44002, flow_id=8083a5f8-54f7-496d-ac96-bbc108fd2546, task_id=07ad6c15-5d75-4029-9d8e-568fe455d273 (api:52)
2019-08-28 00:43:58,849-0400 ERROR (jsonrpc/3) [storage.TaskManager.Task] (Task='07ad6c15-5d75-4029-9d8e-568fe455d273') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "<string>", line 2, in deleteImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1507, in deleteImage
    raise se.ImageDoesNotExistInSD(imgUUID, sdUUID)
ImageDoesNotExistInSD: Image does not exist in domain: u'image=b6dc0afb-33c7-4a83-b51b-16825b78a73b, domain=1f48f887-dd49-4363-9e5c-603c007a9baf'
2019-08-28 00:43:58,849-0400 INFO  (jsonrpc/3) [storage.TaskManager.Task] (Task='07ad6c15-5d75-4029-9d8e-568fe455d273') aborting: Task is aborted: "Image does not exist in domain: u'image=b6dc0afb-33c7-4a83-b51b-16825b78a73b, domain=1f48f887-dd49-4363-9e5c-603c007a9baf'" - code 268 (task:1181)
2019-08-28 00:43:58,849-0400 ERROR (jsonrpc/3) [storage.Dispatcher] FINISH deleteImage error=Image does not exist in domain: u'image=b6dc0afb-33c7-4a83-b51b-16825b78a73b, domain=1f48f887-dd49-4363-9e5c-603c007a9baf' (dispatcher:83)
2019-08-28 00:43:58,849-0400 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Image.delete failed (error 268) in 0.11 seconds (__init__:312)

--- Additional comment from Nir Soffer on 2019-09-02 14:34:31 UTC ---

(In reply to mxie from comment #0)
...
> 1.Convert a guests from VMware to RHV's export domain by virt-v2v
> # virt-v2v -i ova esx6_7-rhel7.7-x86_64 -o rhv -os

OK, the bug seems to be in virt-v2v then.

Looking at v2v/create_ovf.ml:

 497       let buf = Buffer.create 256 in
 498       let bpf fs = bprintf buf fs in
 499       bpf "DOMAIN=%s\n" sd_uuid; (* "Domain" as in Storage Domain *)
 500       bpf "VOLTYPE=LEAF\n";
 501       bpf "CTIME=%.0f\n" time;
 502       bpf "MTIME=%.0f\n" time;
 503       bpf "IMAGE=%s\n" image_uuid;
 504       bpf "DISKTYPE=1\n";
 505       bpf "PUUID=00000000-0000-0000-0000-000000000000\n";
 506       bpf "LEGALITY=LEGAL\n";
 507       bpf "POOL_UUID=\n";
 508       bpf "SIZE=%Ld\n" size_in_sectors;
 509       bpf "FORMAT=%s\n" format_for_rhv;
 510       bpf "TYPE=%s\n" output_alloc_for_rhv;
 511       bpf "DESCRIPTION=%s\n" (String.replace generated_by "=" "_");
 512       bpf "EOF\n";

Looks like the .meta file is generated by virt-v2v. This usage is not supported
by RHV, since the .meta files are not part of the RHV API.

This also means that the rhv output mode does not support block storage (there
is no .meta file) and will create incorrect .meta files when using storage
format v5.

I think this bug should move to virt-v2v. RHV should not support corrupted
metadata created by external tools that bypass the RHV API.

Richard, what do you think?

--- Additional comment from Richard W.M. Jones on 2019-09-02 15:49:08 UTC ---

This is the old -o rhv mode which doesn't go via the RHV API at all. It's
also a deprecated mode in virt-v2v. And AIUI the Export Storage Domain which
it uses is also deprecated in RHV.

As for why this error has suddenly appeared, I'm not sure, but it has
to be because of some change in RHV's handling of ESDs.

--- Additional comment from Richard W.M. Jones on 2019-09-02 15:52:01 UTC ---

As a historical note, the DISKTYPE=1 was copied from the old Perl virt-v2v.
I've no idea what it did, since I didn't write it.

That git repo is not actually online any longer but the code was:

lib/Sys/VirtConvert/Connection/RHEVTarget.pm:    print $meta "DISKTYPE=1\n";

--- Additional comment from Nir Soffer on 2019-09-02 15:59:25 UTC ---

Removing Keywords: Regression or TestBlocker, since these cause bugzilla
scripts to spam the bug whenever it is edited, and that is not helpful.

--- Additional comment from Nir Soffer on 2019-09-02 16:09:47 UTC ---

(In reply to Richard W.M. Jones from comment #23)
> This is the old -o rhv mode which doesn't go via the RHV API at all. It's
> also a deprecated mode in virt-v2v. And AIUI the Export Storage Domain
> which it uses is also deprecated in RHV.

I guess there is no point in fixing this code to use the correct value at
this point.

> As for why this error has suddenly appeared, I'm not sure, but it has
> to be because of some change in RHV's handling of ESDs.

The error was exposed in 4.3, since we started to validate the disk type
when creating new volumes. Older versions of vdsm wrote the value as-is
to storage without any validation.

Since we have corrupted metadata files in existing export domains, I think
we can work around this issue by also accepting DISKTYPE=1.
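
For illustration only, the 4.3 validation amounts to a membership check like
the sketch below (hypothetical Python, not the actual vdsm source; per the
traceback above, the real check lives in validateCreateVolumeParams in
vdsm/storage/sd.py and raises se.InvalidParameterException):

# Hypothetical stand-in for the allowed set; the real set of disk types
# is defined by vdsm and does not include the legacy value "1".
VALID_DISK_TYPES = frozenset(["DATA", "OVFS", "ISOF"])

def validate_disk_type(disk_type):
    # vdsm 4.3 rejects unknown values; older versions wrote the value
    # to storage as-is, which is how DISKTYPE=1 ended up in .meta files.
    if disk_type not in VALID_DISK_TYPES:
        raise ValueError("Invalid parameter: 'DiskType=%s'" % disk_type)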

--- Additional comment from Nir Soffer on 2019-09-02 16:13:03 UTC ---

Tal, this can be fixed with a trivial patch, targeting 4.3.6.

--- Additional comment from Richard W.M. Jones on 2019-09-02 16:18:29 UTC ---

(In reply to Nir Soffer from comment #30)
> Since we have corrupted metadata files in existing export domains, I think
> we can workaround this issue by accepting also DISKTYPE=1.

I should say that the way -o rhv works is that it copies the disks to
the ESD, and then you're supposed to import them into RHV soon
afterwards.  (This of course long predates RHV even having an API.)

So the disks shouldn't exist in the ESD for very long.  It may
therefore not be necessary to work around this in RHV.

My question is: what should the DISKTYPE field actually contain?  Maybe
we can put the proper data into the .meta file, or remove this field
entirely?

--- Additional comment from Nir Soffer on 2019-09-02 16:38:18 UTC ---

(In reply to Richard W.M. Jones from comment #32)
> (In reply to Nir Soffer from comment #30)
> > Since we have corrupted metadata files in existing export domains, I think
> > we can work around this issue by also accepting DISKTYPE=1.
> 
> I should say that the way -o rhv works is that it copies the disks to
> the ESD, and then you're supposed to import them into RHV soon
> afterwards.  (This of course long predates RHV even having an API.)
> 
> So the disks shouldn't exist in the ESD for very long.  It may
> therefore not be necessary to work around this in RHV.

It depends on the engine and whether it deletes the exported VM right after
the import, but based on reports from other users I suspect that the VMs
are not deleted.
 
> My question is: what should the DISKTYPE field actually contain?  Maybe
> we can put the proper data into the .meta file, or remove this field
> entirely?

The correct value is "DISKTYPE=2", so this should fix the issue:

diff --git a/v2v/create_ovf.ml b/v2v/create_ovf.ml
index 91ff5198d..9aad5dd15 100644
--- a/v2v/create_ovf.ml
+++ b/v2v/create_ovf.ml
@@ -501,7 +501,7 @@ let create_meta_files output_alloc sd_uuid image_uuids overlays =
       bpf "CTIME=%.0f\n" time;
       bpf "MTIME=%.0f\n" time;
       bpf "IMAGE=%s\n" image_uuid;
-      bpf "DISKTYPE=1\n";
+      bpf "DISKTYPE=2\n";
       bpf "PUUID=00000000-0000-0000-0000-000000000000\n";
       bpf "LEGALITY=LEGAL\n";
       bpf "POOL_UUID=\n";

But it will not help with existing images, or with an engine database that
already contains the invalid value "1" for imported disks.
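
For those existing images, the value could in principle be patched in the
.meta files themselves. A rough sketch of such a one-off cleanup, assuming an
NFS export domain with the layout shown earlier (illustrative only, not a
supported tool; the path pattern is hypothetical, back up the domain first,
and note this does not touch the engine database):

import glob

for path in glob.glob("/rhev/data-center/mnt/*/*/images/*/*.meta"):
    with open(path) as f:
        lines = f.readlines()
    # Rewrite the invalid legacy value to the correct one.
    fixed = ["DISKTYPE=2\n" if line.strip() == "DISKTYPE=1" else line
             for line in lines]
    if fixed != lines:
        with open(path, "w") as f:
            f.writelines(fixed)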

--- Additional comment from Richard W.M. Jones on 2019-09-02 20:22:08 UTC ---

Thanks.  Whether or not we also need a fix in RHV, this is now fixed in
virt-v2v in commit fcfdbc9420b07e3003df38481afb9ccd22045e1a (virt-v2v >= 1.41.5).

--- Additional comment from RHV Bugzilla Automation and Verification Bot on 2019-09-03 14:01:09 UTC ---

This bug has been cloned to the z-stream bug #1748395.

--- Additional comment from Nir Soffer on 2019-09-03 16:29:11 UTC ---

Ming, can you verify the fix for this bug?

--- Additional comment from mxie on 2019-09-04 06:40:31 UTC ---

(In reply to Nir Soffer from comment #37)
> Ming, can you verify the fix for this bug?

Hi Nir,

   I have no idea where the fix is. I updated the RHV environment and the RHV node to the latest versions (RHV 4.3.6.4-0.1.el7, vdsm 4.30.29-2), but the bug still exists. Do you want me to verify the bug on the v2v side with the commit from comment 34? If so, I haven't received the scratch v2v build from rjones yet, and I think we should open a new bug on libguestfs to verify the fix from virt-v2v. Thanks!

--- Additional comment from Nir Soffer on 2019-09-04 09:31:12 UTC ---

(In reply to mxie from comment #38)
> (In reply to Nir Soffer from comment #37)
> > Ming, can you verify the fix for this bug?

The fix will be included in the next RHV 4.3.6 build (hopefully today).

If you want to do early testing you can install vdsm from this repo:
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10965/artifact/build-artifacts.el7.x86_64/

--- Additional comment from Avihai on 2019-09-05 05:20:30 UTC ---

Shani/Richard, there are two fixes for this bug (libguestfs and VDSM); can you please add the links to this bug?

1) V2V/libguestfs, for new V2V conversions:
If I'm not mistaken, on GitHub it's the following:
https://github.com/libguestfs/libguestfs/commit/fcfdbc9420b07e3003df38481afb9ccd22045e1a

2) In VDSM for allowing existing images with this disk type:
https://gerrit.ovirt.org/gitweb?p=vdsm.git;a=commit;h=32a24bbd07d3288a152c0f95add9238eeb1c028f

Comment 1 Richard W.M. Jones 2019-09-05 09:02:48 UTC
Trivial upstream fix:
https://github.com/libguestfs/libguestfs/commit/fcfdbc9420b07e3003df38481afb9ccd22045e1a

Comment 3 liuzi 2019-09-19 06:50:50 UTC
Verified the bug with builds:
libguestfs-1.40.2-14.module+el8.1.0+4230+0b6e3259.x86_64
RHV:4.3.6.5-0.1.el7

Steps:
1. Convert a guest from VMware to RHV's export domain with virt-v2v:
#  virt-v2v  -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel5.11-epoch -o rhev -os 10.66.144.40:/home/nfs_export --password-file /home/passwd -b ovirtmgmt
[   0.0] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel5.11-epoch
[   2.1] Creating an overlay to protect the source from being modified
[   2.6] Opening the overlay
[  25.6] Inspecting the overlay
[ 110.5] Checking for sufficient free disk space in the guest
[ 110.5] Estimating space required on target for each disk
[ 110.5] Converting Red Hat Enterprise Linux Server release 5.11 (Tikanga) to run on KVM
virt-v2v: warning: don't know how to install guest tools on rhel-5
virt-v2v: This guest has virtio drivers installed.
[3650.7] Mapping filesystem data to avoid copying unused and blank areas
[3658.2] Closing the overlay
[3658.4] Assigning disks to buses
[3658.4] Checking if the guest needs BIOS or UEFI to boot
[3658.4] Initializing the target -o rhv -os 10.66.144.40:/home/nfs_export
[3658.7] Copying disk 1/1 to /tmp/v2v.Y5c4aB/3844ca07-5011-47e1-bb56-a6fda80bec48/images/50cd48e1-4d24-49c7-88ea-617b796069d4/f583987a-5bed-4494-9d56-905a2466cd80 (raw)
    (100.00/100%)
[4093.4] Creating output metadata
[4093.5] Finishing off

2. After conversion, try to import the guest from the export domain to the data domain.

3. The guest can be imported to the data domain and passes all common checkpoints; please refer to the screenshot.

Result: The guest can be imported to the data domain without any error, so the bug is moved from ON_QA to VERIFIED.

Comment 4 liuzi 2019-09-19 06:51:54 UTC
Created attachment 1616577 [details]
import successfully

Comment 6 errata-xmlrpc 2019-11-06 07:19:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3723