Bug 1592468

Summary: v2v to RHV transfer fails with: error: [empty name]: cannot read '//*/disksection' with value: null
Product: Red Hat Enterprise Linux 7
Reporter: Fabien Dupont <fdupont>
Component: libguestfs
Assignee: Richard W.M. Jones <rjones>
Status: CLOSED ERRATA
QA Contact: Virtualization Bugs <virt-bugs>
Severity: high
Priority: unspecified
Version: 7.5
CC: juzhou, mxie, ptoscano, tzheng, xiaodwan
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Fixed In Version: libguestfs-1.38.2-6.el7
Last Closed: 2018-10-30 07:45:56 UTC
Type: Bug
Attachments:
v2v-import-20180614T140018-20988.log

Description Fabien Dupont 2018-06-18 15:16:55 UTC
Description of problem:
When migrating a virtual machine using virt-v2v, the generated OVF is invalid: some size information is missing or incorrect.

Version-Release number of selected component (if applicable): - 


How reproducible: always


Steps to Reproduce:
1. Install RHV 4.2.4-4
2. Migrate a virtual machine with virt-v2v and the -o rhv-upload option to RHV

Actual results: the migration fails.

Expected results: the migration succeeds.

Additional info: -

Comment 2 Richard W.M. Jones 2018-06-18 15:21:17 UTC
Created attachment 1452664 [details]
v2v-import-20180614T140018-20988.log

Comment 3 Richard W.M. Jones 2018-06-18 15:22:34 UTC
This was caused by a change in ovirt-engine affecting parsing
of the OVF that virt-v2v generates:

https://gerrit.ovirt.org/#/c/91902/

Two patches have been posted upstream to fix this, although only
the first is strictly required:

https://www.redhat.com/archives/libguestfs/2018-June/msg00075.html
https://www.redhat.com/archives/libguestfs/2018-June/msg00077.html
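
For illustration only (a minimal sketch, not the actual ovirt-engine parser): the error text suggests the engine queried for a 'disksection' element that did not match what virt-v2v emitted, and an XML lookup of that kind simply returns null when the element names disagree. The OVF fragment below is hypothetical and heavily trimmed; real OVF documents carry namespaces and many more elements.

```python
import xml.etree.ElementTree as ET

# Minimal OVF-like fragment, only to illustrate the lookup failure.
ovf = """<Envelope>
  <DiskSection>
    <Disk capacity="8589934592"/>
  </DiskSection>
</Envelope>"""

root = ET.fromstring(ovf)

# XML element names are case-sensitive: a query for 'disksection'
# finds nothing, which a consumer may report as a null value.
print(root.find(".//disksection"))   # None
print(root.find(".//DiskSection"))   # a matching Element
```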

Comment 4 Fabien Dupont 2018-06-19 08:33:34 UTC
The preview packages fix the issue.

Comment 5 Richard W.M. Jones 2018-06-21 14:18:06 UTC
Pushed upstream:

https://github.com/libguestfs/libguestfs/commit/7c2afc88fd6aceb869a5e1c47a8183879ddec5fc
https://github.com/libguestfs/libguestfs/commit/75e8b1386766b18aecefdc8a75fbbf85ddb52037

Only the first is strictly required for RHEL 7, but maybe
it's best to have both.

Comment 7 mxie@redhat.com 2018-07-06 09:44:37 UTC
I can reproduce the bug with builds:
virt-v2v-1.36.10-6.10.rhvpreview.el7ev.x86_64
libguestfs-1.36.10-6.10.rhvpreview.el7ev.x86_64
rhv:4.2.4.4-0.1.el7_3

Reproduce steps:
1. Prepare the vddk environment on the virt-v2v conversion server.

2. Convert a guest from VMware via vddk to rhv4.2's data domain; the guest can't be imported to rhv4.2 due to the same error as described in this bug:
#  virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/root/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA esx6.7-rhel6.9-x86_64 --password-file /tmp/passwd -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw
[   0.1] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel6.9-x86_64 -it vddk  -io vddk-libdir=/root/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   1.9] Creating an overlay to protect the source from being modified
[   5.0] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[   6.2] Opening the overlay
[  20.3] Inspecting the overlay
[  31.8] Checking for sufficient free disk space in the guest
[  31.8] Estimating space required on target for each disk
[  31.8] Converting Red Hat Enterprise Linux Server release 6.9 (Santiago) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[  99.1] Mapping filesystem data to avoid copying unused and blank areas
[  99.7] Closing the overlay
[  99.9] Checking if the guest needs BIOS or UEFI to boot
[  99.9] Assigning disks to buses
[  99.9] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.UYfIP1/nbdkit1.sock", "file.export": "/" } (raw)
    (100.00/100%)
[1562.5] Creating output metadata
Traceback (most recent call last):
  File "/var/tmp/rhvupload.UYfIP1/rhv-upload-createvm.py", line 95, in <module>
    data = ovf,
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 33829, in add
    return self._internal_add(vm, headers, query, wait)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 232, in _internal_add
    return future.wait() if wait else future
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 229, in callback
    self._check_fault(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "failed to parse a given ovf configuration ovf error: [empty name]: cannot read '//*/disksection' with value: null". HTTP response code is 400.
virt-v2v: error: failed to create virtual machine, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]



Verify the bug with builds:
virt-v2v-1.38.2-6.el7.x86_64
libguestfs-1.38.2-6.el7.x86_64
rhv:4.2.4.4-0.1.el7_3

Steps:
1. Convert the above guest from VMware via vddk to rhv4.2's data domain again:
#  virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/root/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA esx6.7-rhel6.9-x86_64 --password-file /tmp/passwd -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw
[   0.3] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel6.9-x86_64 -it vddk  -io vddk-libdir=/root/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   2.0] Creating an overlay to protect the source from being modified
[   5.1] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[   6.3] Opening the overlay
[  20.3] Inspecting the overlay
[  30.4] Checking for sufficient free disk space in the guest
[  30.4] Estimating space required on target for each disk
[  30.4] Converting Red Hat Enterprise Linux Server release 6.9 (Santiago) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[  84.9] Mapping filesystem data to avoid copying unused and blank areas
[  85.1] Closing the overlay
[  85.3] Checking if the guest needs BIOS or UEFI to boot
[  85.3] Assigning disks to buses
[  85.3] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.MJUzjQ/nbdkit1.sock", "file.export": "/" } (raw)
    (100.00/100%)
[1249.2] Creating output metadata
[1270.8] Finishing off

2. Power on the guest; all guest checkpoints pass except bug 1598715.


Result:
  virt-v2v can convert the guest to rhv4.2 with -o rhv-upload successfully, so move the bug from ON_QA to VERIFIED.

Comment 9 errata-xmlrpc 2018-10-30 07:45:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3021