Bug 1789279

Summary: virt-v2v should give a clearer error message when certain invalid UUIDs are used for disks
Product: Red Hat Enterprise Linux 9
Reporter: liuzi <zili>
Component: virt-v2v
Assignee: Richard W.M. Jones <rjones>
Status: CLOSED CURRENTRELEASE
QA Contact: Vera <vwu>
Severity: medium
Docs Contact:
Priority: medium
Version: 9.0
CC: jsuchane, juzhou, kkiwi, mkletzan, mxie, mzhan, ptoscano, rjones, tzheng, xiaodwan
Target Milestone: beta
Keywords: Triaged
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: V2V
Fixed In Version: virt-v2v-1.45.1-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-12-07 21:35:16 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description liuzi 2020-01-09 09:13:40 UTC
Description of problem:
virt-v2v should give a clearer error message when certain invalid UUIDs are used for disks.

Version-Release number of selected component (if applicable):
virt-v2v-1.40.2-16.module+el8.1.1+5309+6d656f05.x86_64
libguestfs-1.40.2-16.module+el8.1.1+5309+6d656f05.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Check the virt-v2v man page for the -oo rhv-disk-uuid=UUID and --no-copy options:
# man virt-v2v-output-rhv |grep disk-uuid -A6
       -oo rhv-disk-uuid="UUID"
           This option can used to manually specify UUIDs for the disks when creating the virtual machine.  If not specified, the oVirt engine will generate random UUIDs for the disks.
           Please note that:

           ·   you must pass as many -oo rhv-disk-uuid=UUID options as the amount of disks in the guest

           ·   the specified UUIDs are used as they are, without checking whether they are already used by other disks

           This option is considered advanced, and to be used mostly in combination with --no-copy.

2. Use virt-v2v to convert a guest from VMware to RHV, specifying the disk's UUID.

Scenario 1: Specify a valid UUID that is not used by any other disk in RHV
# virt-v2v  -ic vpx://root.73.141/data/10.73.196.89/?no_verify=1 -o rhv-upload -os nfs_data -of raw -b ovirtmgmt -n ovirtmgmt esx6.5-rhel8.1-x86_64 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -oo rhv-cluster=Default -oo rhv-direct -ip /home/passwd  -oo rhv-disk-uuid=2b20cbc1-9b8e-4e96-ad12-2b293d03a504
[   0.4] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.196.89/?no_verify=1 esx6.5-rhel8.1-x86_64 -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   2.1] Creating an overlay to protect the source from being modified
[   5.1] Opening the overlay
[  12.0] Inspecting the overlay
[  18.2] Checking for sufficient free disk space in the guest
[  18.2] Estimating space required on target for each disk
[  18.2] Converting Red Hat Enterprise Linux 8.1 (Ootpa) to run on KVM
virt-v2v: warning: don't know how to install guest tools on rhel-8
virt-v2v: This guest has virtio drivers installed.
[  50.0] Mapping filesystem data to avoid copying unused and blank areas
[  50.4] Closing the overlay
[  50.7] Assigning disks to buses
[  50.7] Checking if the guest needs BIOS or UEFI to boot
[  50.7] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data
[  51.9] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.at3G4z/nbdkit0.sock", "file.export": "/" } (raw)
    (100.00/100%)
[ 644.1] Creating output metadata
[ 645.6] Finishing off

Result 1: The conversion finishes successfully and the new guest's disk UUID is correct.

Scenario 2: Specify a valid UUID that is already used by another disk in RHV
virt-v2v  -ic vpx://root.73.141/data/10.73.196.89/?no_verify=1 -o rhv-upload -os nfs_data -of raw -b ovirtmgmt -n ovirtmgmt esx6.5-rhel7.7-x86_64 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -oo rhv-cluster=Default -oo rhv-direct -ip /home/passwd  -oo rhv-disk-uuid=2b20cbc1-9b8e-4e96-ad12-2b293d03a504
[   0.8] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.196.89/?no_verify=1 esx6.5-rhel7.7-x86_64 -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   2.5] Creating an overlay to protect the source from being modified
[   5.6] Opening the overlay
[  12.7] Inspecting the overlay
[  27.0] Checking for sufficient free disk space in the guest
[  27.0] Estimating space required on target for each disk
[  27.0] Converting Red Hat Enterprise Linux Server 7.7 (Maipo) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el7’ is missing from 
the virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can 
happen if you are using the version of virtio-win which contains just the 
virtio drivers.  In this case only virtio drivers can be installed in the 
guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 103.6] Mapping filesystem data to avoid copying unused and blank areas
[ 104.0] Closing the overlay
[ 104.3] Assigning disks to buses
[ 104.3] Checking if the guest needs BIOS or UEFI to boot
[ 104.3] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data
[ 105.4] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.yMprA3/nbdkit0.sock", "file.export": "/" } (raw)
nbdkit: python[1]: error: /var/tmp/v2v.EEP69n/rhv-upload-plugin.py: open: error: ['Traceback (most recent call last):\n', '  File "/var/tmp/v2v.EEP69n/rhv-upload-plugin.py", line 150, in open\n    name = params[\'output_storage\'],\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 6794, in add\n    return self._internal_add(disk, headers, query, wait)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in _internal_add\n    return future.wait() if wait else future\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait\n    return self._code(response)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in callback\n    self._check_fault(response)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in _check_fault\n    self._raise_error(response, body)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in _raise_error\n    raise error\n', 'ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Internal Engine Error]". HTTP response code is 400.\n']
qemu-img: Could not open 'json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.yMprA3/nbdkit0.sock", "file.export": "/" }': Failed to read option reply: Unexpected end-of-file before all bytes were read

virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

Result 2: virt-v2v gave an unclear error message.

Scenario 3: Specify an invalid UUID that does not conform to the UUID format
#  virt-v2v  -ic vpx://root.73.141/data/10.73.196.89/?no_verify=1 -o rhv-upload -os nfs_data -of raw -b ovirtmgmt -n ovirtmgmt esx6.5-rhel8.0-x86_64 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -oo rhv-cluster=Default -oo rhv-direct -ip /tmp/passwd  -oo rhv-disk-uuid=abcdefghg
[   0.4] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.196.89/?no_verify=1 esx6.5-rhel8.0-x86_64 -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   1.9] Creating an overlay to protect the source from being modified
[   4.9] Opening the overlay
[  12.0] Inspecting the overlay
[  19.0] Checking for sufficient free disk space in the guest
[  19.0] Estimating space required on target for each disk
[  19.0] Converting Red Hat Enterprise Linux 8.0 Beta (Ootpa) to run on KVM
virt-v2v: warning: ignoring kernel /vmlinuz-4.18.0-40.el8.x86_64 in 
bootloader, as it does not exist.
virt-v2v: warning: don't know how to install guest tools on rhel-8
virt-v2v: This guest has virtio drivers installed.
[  64.7] Mapping filesystem data to avoid copying unused and blank areas
[  65.2] Closing the overlay
[  65.5] Assigning disks to buses
[  65.5] Checking if the guest needs BIOS or UEFI to boot
[  65.5] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[  66.7] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.9BQ5ox/nbdkit0.sock", "file.export": "/" } (raw)
nbdkit: python[1]: error: /var/tmp/v2v.lk6G3H/rhv-upload-plugin.py: open: error: ['Traceback (most recent call last):\n', '  File "/var/tmp/v2v.lk6G3H/rhv-upload-plugin.py", line 150, in open\n    name = params[\'output_storage\'],\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 6794, in add\n    return self._internal_add(disk, headers, query, wait)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in _internal_add\n    return future.wait() if wait else future\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait\n    return self._code(response)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in callback\n    self._check_fault(response)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in _check_fault\n    self._raise_error(response, body)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in _raise_error\n    raise error\n', 'ovirtsdk4.Error: Fault reason is "Operation failed". Fault detail is "Invalid UUID string: abcdefghg". HTTP response code is 400.\n']
qemu-img: Could not open 'json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.9BQ5ox/nbdkit0.sock", "file.export": "/" }': Failed to read option reply: Unexpected end-of-file before all bytes were read

virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

Result 3: virt-v2v gave the expected error message.

Scenario 4: Leave the rhv-disk-uuid parameter value empty
#  virt-v2v  -ic vpx://root.73.141/data/10.73.196.89/?no_verify=1 -o rhv-upload -os nfs_data -of raw -b ovirtmgmt -n ovirtmgmt esx6.5-rhel7.7-x86_64 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -oo rhv-cluster=Default -oo rhv-direct -ip /home/passwd  -oo rhv-disk-uuid=
[   0.9] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.196.89/?no_verify=1 esx6.5-rhel7.7-x86_64 -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   2.7] Creating an overlay to protect the source from being modified
[   6.1] Opening the overlay
[  13.1] Inspecting the overlay
[  27.3] Checking for sufficient free disk space in the guest
[  27.3] Estimating space required on target for each disk
[  27.3] Converting Red Hat Enterprise Linux Server 7.7 (Maipo) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el7’ is missing from 
the virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can 
happen if you are using the version of virtio-win which contains just the 
virtio drivers.  In this case only virtio drivers can be installed in the 
guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 103.1] Mapping filesystem data to avoid copying unused and blank areas
[ 103.6] Closing the overlay
[ 103.8] Assigning disks to buses
[ 103.8] Checking if the guest needs BIOS or UEFI to boot
[ 103.8] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data
[ 105.0] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.hiBldS/nbdkit0.sock", "file.export": "/" } (raw)
nbdkit: python[1]: error: /var/tmp/v2v.5yVQ8U/rhv-upload-plugin.py: open: error: ['Traceback (most recent call last):\n', '  File "/var/tmp/v2v.5yVQ8U/rhv-upload-plugin.py", line 150, in open\n    name = params[\'output_storage\'],\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 6794, in add\n    return self._internal_add(disk, headers, query, wait)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in _internal_add\n    return future.wait() if wait else future\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait\n    return self._code(response)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in callback\n    self._check_fault(response)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in _check_fault\n    self._raise_error(response, body)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in _raise_error\n    raise error\n', 'ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.\n']
qemu-img: Could not open 'json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.hiBldS/nbdkit0.sock", "file.export": "/" }': Failed to read option reply: Unexpected end-of-file before all bytes were read

virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

Result 4: The conversion failed and virt-v2v gave an unclear error message.

Scenario 5: Specify rhv-disk-uuid=00000000-0000-0000-0000-000000000000, which conforms to the UUID format
#  virt-v2v  -ic vpx://root.73.141/data/10.73.196.89/?no_verify=1 -o rhv-upload -os nfs_data -of raw -b ovirtmgmt -n ovirtmgmt esx6.5-rhel8.0-x86_64 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -oo rhv-cluster=Default -oo rhv-direct -ip /tmp/passwd  -oo rhv-disk-uuid=00000000-0000-0000-0000-000000000000
[   0.4] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.196.89/?no_verify=1 esx6.5-rhel8.0-x86_64 -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   2.0] Creating an overlay to protect the source from being modified
[   5.2] Opening the overlay
[  12.2] Inspecting the overlay
[  19.5] Checking for sufficient free disk space in the guest
[  19.5] Estimating space required on target for each disk
[  19.5] Converting Red Hat Enterprise Linux 8.0 Beta (Ootpa) to run on KVM
virt-v2v: warning: ignoring kernel /vmlinuz-4.18.0-40.el8.x86_64 in 
bootloader, as it does not exist.
virt-v2v: warning: don't know how to install guest tools on rhel-8
virt-v2v: This guest has virtio drivers installed.
[  65.4] Mapping filesystem data to avoid copying unused and blank areas
[  65.9] Closing the overlay
[  66.2] Assigning disks to buses
[  66.2] Checking if the guest needs BIOS or UEFI to boot
[  66.2] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[  67.8] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.YXqIs3/nbdkit0.sock", "file.export": "/" } (raw)
nbdkit: python[1]: error: /var/tmp/v2v.eewoD3/rhv-upload-plugin.py: open: error: ['Traceback (most recent call last):\n', '  File "/var/tmp/v2v.eewoD3/rhv-upload-plugin.py", line 150, in open\n    name = params[\'output_storage\'],\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 6794, in add\n    return self._internal_add(disk, headers, query, wait)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in _internal_add\n    return future.wait() if wait else future\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait\n    return self._code(response)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in callback\n    self._check_fault(response)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in _check_fault\n    self._raise_error(response, body)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in _raise_error\n    raise error\n', 'ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[]". HTTP response code is 400.\n']
qemu-img: Could not open 'json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.YXqIs3/nbdkit0.sock", "file.export": "/" }': Failed to read option reply: Unexpected end-of-file before all bytes were read

virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

Result 5: The conversion failed and virt-v2v gave an unclear error message.

Scenario 6: Specify rhv-disk-uuid=fc236597-1041-4885-bec9-1d71257c256u, which matches the UUID layout but contains a non-hexadecimal character
#  virt-v2v  -ic vpx://root.73.141/data/10.73.196.89/?no_verify=1 -o rhv-upload -os nfs_data -of raw -b ovirtmgmt -n ovirtmgmt esx6.5-rhel8.1-x86_64 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -oo rhv-cluster=Default -oo rhv-direct -ip /home/passwd  -oo rhv-disk-uuid=fc236597-1041-4885-bec9-1d71257c256u
[   0.9] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.196.89/?no_verify=1 esx6.5-rhel8.1-x86_64 -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   2.8] Creating an overlay to protect the source from being modified
[   6.2] Opening the overlay
[  13.2] Inspecting the overlay
[  19.8] Checking for sufficient free disk space in the guest
[  19.8] Estimating space required on target for each disk
[  19.8] Converting Red Hat Enterprise Linux 8.1 (Ootpa) to run on KVM
virt-v2v: warning: don't know how to install guest tools on rhel-8
virt-v2v: This guest has virtio drivers installed.
[  52.0] Mapping filesystem data to avoid copying unused and blank areas
[  52.5] Closing the overlay
[  52.7] Assigning disks to buses
[  52.7] Checking if the guest needs BIOS or UEFI to boot
[  52.7] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data
[  53.9] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.sMxEQ2/nbdkit0.sock", "file.export": "/" } (raw)
nbdkit: python[1]: error: /var/tmp/v2v.7V0gov/rhv-upload-plugin.py: open: error: ['Traceback (most recent call last):\n', '  File "/var/tmp/v2v.7V0gov/rhv-upload-plugin.py", line 150, in open\n    name = params[\'output_storage\'],\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/services.py", line 6794, in add\n    return self._internal_add(disk, headers, query, wait)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 232, in _internal_add\n    return future.wait() if wait else future\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 55, in wait\n    return self._code(response)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 229, in callback\n    self._check_fault(response)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 132, in _check_fault\n    self._raise_error(response, body)\n', '  File "/usr/lib64/python3.6/site-packages/ovirtsdk4/service.py", line 118, in _raise_error\n    raise error\n', 'ovirtsdk4.Error: Fault reason is "Operation failed". Fault detail is "For input string: "1d71257c256u"". HTTP response code is 400.\n']
qemu-img: Could not open 'json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.sMxEQ2/nbdkit0.sock", "file.export": "/" }': Failed to read option reply: Unexpected end-of-file before all bytes were read

virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

Result 6: The conversion failed and virt-v2v gave an unclear error message.


Actual results:
As above

Expected results:
For scenarios 2, 4, 5 and 6, virt-v2v should give a clearer error message.

Additional info:

Comment 1 Martin Kletzander 2020-01-23 22:26:00 UTC
Patches proposed upstream:

https://www.redhat.com/archives/libguestfs/2020-January/msg00184.html

Comment 3 Richard W.M. Jones 2021-04-27 15:50:08 UTC
Martin, this patch series got positive reviews upstream, but
unfortunately there was no follow-on after version 3:

https://listman.redhat.com/archives/libguestfs/2020-March/thread.html#00084

Would you mind taking a look at this series again?

Comment 4 Martin Kletzander 2021-05-06 11:33:19 UTC
(In reply to Richard W.M. Jones from comment #3)
I guess not much happens if this one does not get fixed and that's probably why I forgot about it.  I'll try to send a new version so that we can get it through.  Thanks for the reminder.

Comment 5 Richard W.M. Jones 2021-06-03 11:06:09 UTC
Fixed upstream in:
https://github.com/libguestfs/virt-v2v/commit/5a0662b1d85ef4cd07a0da937730f4b08f539989

Moving to RHEL 9 in the interests of not doing too much work
in RHEL AV, and this is just an input validation change so
not an important fix for AV.
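
The fix is described above as an input validation change: a malformed -oo rhv-disk-uuid value is now rejected at option-parsing time, before any copying starts. The real check lives inside virt-v2v itself; the Python sketch below (function and regex names are illustrative, not virt-v2v's) only shows the kind of check involved, using the six UUID values from this bug. Note that the verified build also rejects the all-zero (nil) UUID even though it matches the textual format (scenario 5 below), so a plain format regex is not enough:

```python
import re

# RFC 4122 textual form: 8-4-4-4-12 hexadecimal digits.
_UUID_RE = re.compile(
    r'^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
    r'[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$')

NIL_UUID = '00000000-0000-0000-0000-000000000000'

def is_valid_disk_uuid(value):
    """Return True if value is acceptable for -oo rhv-disk-uuid.

    Illustrative only: mirrors the behaviour observed in the verified
    build, which prints
    "virt-v2v: error: -o rhv-upload: invalid UUID for -oo rhv-disk-uuid"
    for anything this function rejects.
    """
    return bool(_UUID_RE.match(value)) and value != NIL_UUID

# The scenarios from this bug:
assert is_valid_disk_uuid('2b20cbc1-9b8e-4e96-ad12-2b293d03a504')      # 1
assert not is_valid_disk_uuid('abcdefghg')                             # 3
assert not is_valid_disk_uuid('')                                      # 4
assert not is_valid_disk_uuid(NIL_UUID)                                # 5
assert not is_valid_disk_uuid('fc236597-1041-4885-bec9-1d71257c256u')  # 6
```

Python's stdlib uuid.UUID() is deliberately not used here: it accepts forms (for example, a hex string without hyphens) that the engine rejects, so a strict regex matches the observed behaviour more closely.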

Comment 10 Vera 2021-06-17 11:01:31 UTC
Verified with the following builds:
virt-v2v-1.45.1-1.el9.1.x86_64
libguestfs-1.45.6-6.el9.x86_64
libvirt-7.4.0-1.el9.x86_64
nbdkit-1.26.1-1.el9.x86_64
virtio-win-1.9.15-3.el9.noarch

Steps:
1. Check the virt-v2v man page for the -oo rhv-disk-uuid=UUID and --no-copy options:
# man virt-v2v-output-rhv |grep disk-uuid -A6

--
       -oo rhv-disk-uuid="UUID"
           This option can used to manually specify UUIDs for the disks when creating the virtual machine.  If not
           specified, the oVirt engine will generate random UUIDs for the disks.  Please note that:

           •   you must pass as many -oo rhv-disk-uuid=UUID options as the amount of disks in the guest

           •   the specified UUIDs are used as they are, without checking whether they are already used by other
               disks

           This option is considered advanced, and to be used mostly in combination with --no-copy.



2. Use virt-v2v to convert a guest from VMware to RHV, specifying the disk's UUID.

Scenario 1: Specify a valid UUID that is not used by any other disk in RHV
 # virt-v2v -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 esx7.0-win2019-x86_64 -ip /home/esx_passwd -o rhv-upload -os nfs_data -of raw -b ovirtmgmt -n ovirtmgmt  -it vddk -io vddk-libdir=/vddktest -io vddk-thumbprint=B5:52:1F:B4:21:09:45:24:51:32:56:F6:63:6A:93:5D:54:08:2D:78 -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhv_passwd -oo rhv-cluster=Default -oo rhv-direct -on esx7.0-win2019-x86_64_rhel9 -oo rhv-disk-uuid=2b20cbc1-9b8e-4e96-ad12-2b293d03a504
[   1.1] Opening the source -i libvirt -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 esx7.0-win2019-x86_64 -it vddk  -io vddk-libdir=/vddktest -io vddk-thumbprint=B5:52:1F:B4:21:09:45:24:51:32:56:F6:63:6A:93:5D:54:08:2D:78
[   2.8] Creating an overlay to protect the source from being modified
[   4.6] Opening the overlay
[  10.1] Inspecting the overlay
[  15.9] Checking for sufficient free disk space in the guest
[  15.9] Estimating space required on target for each disk
[  15.9] Converting Windows Server 2019 Standard to run on KVM
virt-v2v: warning: /usr/share/virt-tools/rhev-apt.exe is missing, but the 
output hypervisor is oVirt or RHV.  Installing RHEV-APT in the guest would 
mean the guest is automatically updated with new drivers etc.  You may wish 
to install RHEV-APT manually after conversion.
virt-v2v: warning: there is no QXL driver for this version of Windows (10.0 
x86_64).  virt-v2v looks for this driver in 
/usr/share/virtio-win/virtio-win.iso

The guest will be configured to use a basic VGA display driver.
virt-v2v: This guest has virtio drivers installed.
[  34.7] Mapping filesystem data to avoid copying unused and blank areas
[  36.2] Closing the overlay
[  36.3] Assigning disks to buses
[  36.3] Checking if the guest needs BIOS or UEFI to boot
[  36.3] Initializing the target -o rhv-upload -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhv_passwd -os nfs_data
[  37.8] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/tmp/v2vnbdkit.iMNK2Y/nbdkit4.sock", "file.export": "/" } (raw)
    (100.00/100%)
[ 264.0] Creating output metadata
[ 265.2] Finishing off
[root@hp-dl380eg8-01 ~]# 

Result 1: The conversion finishes successfully and, on RHV, the new guest's disk UUID is correct.


Scenario 2: Specify a valid UUID that is already used by another disk in RHV

# virt-v2v -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 esx7.0-win2016-x86_64 -ip /home/esx_passwd -o rhv-upload -os nfs_data -of raw -b ovirtmgmt -n ovirtmgmt  -it vddk -io vddk-libdir=/vddktest -io vddk-thumbprint=B5:52:1F:B4:21:09:45:24:51:32:56:F6:63:6A:93:5D:54:08:2D:78 -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhv_passwd -oo rhv-cluster=Default -oo rhv-direct -on esx7.0-win2016-x86_64_rhel9 -oo rhv-disk-uuid=2b20cbc1-9b8e-4e96-ad12-2b293d03a504
Traceback (most recent call last):
  File "/tmp/v2v.Umsan1/rhv-upload-precheck.py", line 107, in <module>
    raise RuntimeError("Disk with the UUID '%s' already exists" % uuid)
RuntimeError: Disk with the UUID '2b20cbc1-9b8e-4e96-ad12-2b293d03a504' already exists
virt-v2v: error: failed server prechecks, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]
# 

Result 2: The conversion failed and virt-v2v gave a clear error message.
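
The traceback in scenario 2 shows that rhv-upload-precheck.py now raises a RuntimeError when a requested UUID is already taken. The real precheck queries the oVirt engine through ovirtsdk4; in this minimal sketch the engine's view is stood in for by a plain set of existing disk UUIDs, and the helper name is hypothetical:

```python
# Sketch of the duplicate-UUID precheck behaviour seen above.
# 'existing_uuids' stands in for the disks the oVirt engine reports.
def check_disk_uuids_unused(requested_uuids, existing_uuids):
    for uuid in requested_uuids:
        if uuid in existing_uuids:
            raise RuntimeError(
                "Disk with the UUID '%s' already exists" % uuid)

existing = {'2b20cbc1-9b8e-4e96-ad12-2b293d03a504'}
check_disk_uuids_unused(['0e2b50a6-4f4d-4761-a42e-f33a1a3dbcf7'], existing)
try:
    check_disk_uuids_unused(['2b20cbc1-9b8e-4e96-ad12-2b293d03a504'], existing)
except RuntimeError as e:
    print(e)  # the error message quoted in the scenario 2 output
```

Failing here, before any data is copied, is what turns the former HTTP 400 "[Internal Engine Error]" into the clear message shown above.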

Scenario 3: Specify an invalid UUID that does not conform to the UUID format

# virt-v2v -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 esx7.0-win2016-x86_64 -ip /home/esx_passwd -o rhv-upload -os nfs_data -of raw -b ovirtmgmt -n ovirtmgmt  -it vddk -io vddk-libdir=/vddktest -io vddk-thumbprint=B5:52:1F:B4:21:09:45:24:51:32:56:F6:63:6A:93:5D:54:08:2D:78 -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhv_passwd -oo rhv-cluster=Default -oo rhv-direct -on esx7.0-win2016-x86_64_rhel9 -oo rhv-disk-uuid=abcdefghg
virt-v2v: error: -o rhv-upload: invalid UUID for -oo rhv-disk-uuid

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

Result 3: The conversion failed and virt-v2v gave the expected error message.


Scenario 4: Leave the rhv-disk-uuid parameter value empty

# virt-v2v -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 esx7.0-win2016-x86_64 -ip /home/esx_passwd -o rhv-upload -os nfs_data -of raw -b ovirtmgmt -n ovirtmgmt  -it vddk -io vddk-libdir=/vddktest -io vddk-thumbprint=B5:52:1F:B4:21:09:45:24:51:32:56:F6:63:6A:93:5D:54:08:2D:78 -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhv_passwd -oo rhv-cluster=Default -oo rhv-direct -on esx7.0-win2016-x86_64_rhel9 -oo rhv-disk-uuid=
virt-v2v: error: -o rhv-upload: invalid UUID for -oo rhv-disk-uuid

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]
#

Result 4: The conversion failed and virt-v2v gave the expected error message.

Scenario 5: Specify rhv-disk-uuid=00000000-0000-0000-0000-000000000000, which conforms to the UUID format
# virt-v2v -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 esx7.0-win2016-x86_64 -ip /home/esx_passwd -o rhv-upload -os nfs_data -of raw -b ovirtmgmt -n ovirtmgmt  -it vddk -io vddk-libdir=/vddktest -io vddk-thumbprint=B5:52:1F:B4:21:09:45:24:51:32:56:F6:63:6A:93:5D:54:08:2D:78 -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhv_passwd -oo rhv-cluster=Default -oo rhv-direct -on esx7.0-win2016-x86_64_rhel9 -oo rhv-disk-uuid=00000000-0000-0000-0000-000000000000
virt-v2v: error: -o rhv-upload: invalid UUID for -oo rhv-disk-uuid

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]
# 


Result 5: The conversion failed and virt-v2v gave the expected error message.

Scenario 6: Specify rhv-disk-uuid=fc236597-1041-4885-bec9-1d71257c256u, which matches the UUID layout but contains a non-hexadecimal character
# virt-v2v -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 esx7.0-win2016-x86_64 -ip /home/esx_passwd -o rhv-upload -os nfs_data -of raw -b ovirtmgmt -n ovirtmgmt  -it vddk -io vddk-libdir=/vddktest -io vddk-thumbprint=B5:52:1F:B4:21:09:45:24:51:32:56:F6:63:6A:93:5D:54:08:2D:78 -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhv_passwd -oo rhv-cluster=Default -oo rhv-direct -on esx7.0-win2016-x86_64_rhel9 -oo rhv-disk-uuid=fc236597-1041-4885-bec9-1d71257c256u
virt-v2v: error: -o rhv-upload: invalid UUID for -oo rhv-disk-uuid

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]
# 

Result 6: The conversion failed and virt-v2v gave the expected error message.