Bug 1612653 - Guest has no disk after rhv-upload conversion if the target data domain has a name similar to another data domain on RHV 4.2
Summary: Guest has no disk after rhv-upload conversion if the target data domain has a name similar to another data domain on RHV 4.2
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libguestfs
Version: 8.0
Hardware: x86_64
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.1
Assignee: Pino Toscano
QA Contact: Virtualization Bugs
URL:
Whiteboard: V2V
Depends On:
Blocks: 1649160
 
Reported: 2018-08-06 04:22 UTC by mxie@redhat.com
Modified: 2020-09-01 07:31 UTC
CC List: 10 users

Fixed In Version: libguestfs-1.40.2-15.module+el8.1.1+4955+f0b25565
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-02-04 18:28:48 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
rhv-upload-data2.log (1.49 MB, text/plain), 2018-08-06 04:22 UTC, mxie@redhat.com
guest-data2.png (66.89 KB, image/png), 2018-08-06 04:23 UTC, mxie@redhat.com


Links
Red Hat Product Errata RHBA-2020:0404 (last updated 2020-02-04 18:29:57 UTC)

Description mxie@redhat.com 2018-08-06 04:22:39 UTC
Created attachment 1473525 [details]
rhv-upload-data2.log

Description of problem:
Guest has no disk after rhv-upload conversion if the target data domain has a name similar to another data domain on RHV 4.2

Version-Release number of selected component (if applicable):
virt-v2v-1.38.2-10.el7.x86_64
libguestfs-1.38.2-10.el7.x86_64
libvirt-4.5.0-6.el7.x86_64
qemu-kvm-rhev-2.12.0-9.el7.x86_64
nbdkit-1.2.6-1.el7.x86_64
nbdkit-plugin-python2-1.2.6-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Set up the environment as below on RHV 4.2:

DC1
  Host1 
    data
DC2 
  Host2 
    data2

2. Convert the guest to data with virt-v2v using rhv-upload on a conversion server that is not an oVirt node; the conversion finishes without error and the checkpoints of the guest pass.


3. Convert the guest to data2 with virt-v2v using rhv-upload on a conversion server that is not an oVirt node; the conversion finishes without error, but the guest has no disk after the conversion. Please refer to the screenshot "guest-data2" and the v2v log "rhv-upload-data2".

# virt-v2v rhel7.6  -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data2 -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw -oa preallocated -b ovirtmgmt
[   0.8] Opening the source -i libvirt rhel7.6
[   0.9] Creating an overlay to protect the source from being modified
[   1.4] Initializing the target -o rhv-upload -oa preallocated -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os data2
[   3.6] Opening the overlay
[  36.7] Inspecting the overlay
[  68.3] Checking for sufficient free disk space in the guest
[  68.3] Estimating space required on target for each disk
[  68.3] Converting Red Hat Enterprise Linux Server 7.6 Beta (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 168.5] Mapping filesystem data to avoid copying unused and blank areas
[ 169.4] Closing the overlay
[ 170.9] Checking if the guest needs BIOS or UEFI to boot
[ 170.9] Assigning disks to buses
[ 170.9] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.4BneBB/nbdkit1.sock", "file.export": "/" } (raw)
    (100.00/100%)
[ 756.1] Creating output metadata
[ 781.0] Finishing off


Actual results:
As described above.

Expected results:
Checkpoints of the guest pass after rhv-upload conversion even if the target data domain has a name similar to another data domain on RHV 4.2

Additional info:

Comment 2 mxie@redhat.com 2018-08-06 04:23:42 UTC
Created attachment 1473526 [details]
guest-data2.png

Comment 3 Richard W.M. Jones 2018-08-06 08:54:15 UTC
I'm not sure about this one.  But from the log:

disk.id = 'ca12094d-0ac1-41cf-ac0c-50a71b3fef2d'
cannot read /etc/vdsm/vdsm.id, using any host: [Errno 2] No such file or directory: '/etc/vdsm/vdsm.id'
transfer.id = '7256353e-f43d-41a0-b966-ca6a34b6a6b8'
imageio features: flush=True trim=False zero=True unix_socket='\x00/org/ovirt/imageio'

Unfortunately the VM is created by a separate script which doesn't
produce much debugging output.  However, the code has:

    # Get the storage domain UUID and substitute it into the OVF doc.
    sds_service = system_service.storage_domains_service()
    sd = sds_service.list(search=("name=%s" % params['output_storage']))[0]
    sd_uuid = sd.id

compared to this code in rhv-upload-plugin.py:

    system_service = connection.system_service()
    storage_name = params['output_storage']
    data_centers = system_service.data_centers_service().list(
        search='storage.name=%s' % storage_name,
        case_sensitive=True,
    )

Perhaps we need to modify rhv-upload-createvm.py?
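
[Editorial sketch, not the eventual fix: scoping the storage-domain lookup to the data center it is attached to, the way the plugin's storage.name= search above already does, would avoid picking up a similarly named domain from another DC. This assumes the same connection and params objects the rhv-upload-* scripts already receive, and uses the oVirt Python SDK's follow_link() to list the DC's attached storage domains.]

    # Sketch: find the DC the target storage domain is attached to ...
    system_service = connection.system_service()
    storage_name = params['output_storage']
    data_centers = system_service.data_centers_service().list(
        search='storage.name=%s' % storage_name,
        case_sensitive=True,
    )
    datacenter = data_centers[0]

    # ... then resolve the storage domain among the ones attached to that DC,
    # rather than with a global name search that may match a similarly named
    # domain in another DC.
    storage_domains = connection.follow_link(datacenter.storage_domains)
    storage_domain = [sd for sd in storage_domains
                      if sd.name == storage_name][0]
    sd_uuid = storage_domain.id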

Comment 4 Nir Soffer 2018-08-06 09:15:19 UTC
Daniel, can you review rhv-upload-createvm.py and recommend the proper way to do 
this?

Comment 6 Daniel Erez 2018-08-07 06:42:16 UTC
(In reply to Nir Soffer from comment #4)
> Daniel, can you review rhv-upload-createvm.py and recommend the proper way
> to do 
> this?

From the rhv-upload-data2.log[1], it seems that the @SD_UUID@ wasn't replaced with a value. I guess we should just change this line (from rhv-upload-createvm.py):
"ovf.replace("@SD_UUID@", sd_uuid)"
to
"ovf = ovf.replace("@SD_UUID@", sd_uuid)"

Also, shouldn't we include the pool ID as well (in 'rasd:StoragePoolId')?

[1]
<Type>disk</Type>
...
<rasd:StorageId>@SD_UUID@</rasd:StorageId>
<rasd:StoragePoolId>00000000-0000-0000-0000-000000000000</rasd:StoragePoolId>
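
[For clarity on the suggested change: Python strings are immutable, so str.replace() returns a new string and leaves the original untouched, which is why the assignment is required. Minimal illustration below; the UUID is just sample data taken from the log above.]

    ovf = "<rasd:StorageId>@SD_UUID@</rasd:StorageId>"
    sd_uuid = "ca12094d-0ac1-41cf-ac0c-50a71b3fef2d"

    ovf.replace("@SD_UUID@", sd_uuid)        # return value discarded; ovf is unchanged
    ovf = ovf.replace("@SD_UUID@", sd_uuid)  # assignment keeps the substituted OVF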

Comment 7 Richard W.M. Jones 2018-08-07 08:05:55 UTC
(In reply to Daniel Erez from comment #6)
> (In reply to Nir Soffer from comment #4)
> > Daniel, can you review rhv-upload-createvm.py and recommend the proper way
> > to do 
> > this?
> 
> From the rhv-upload-data2.log[1], it seems that the @SD_UUID@ wasn't
> replaced with a value. I guess we should just change this line (from
> rhv-upload-createvm.py):
> "ovf.replace("@SD_UUID@", sd_uuid)"
> to
> "ovf = ovf.replace("@SD_UUID@", sd_uuid)"

This is a bug, I'll submit a fix soon, but ...

> Also, shouldn't we include the pool ID as well (in 'rasd:StoragePoolId')?
> 
> [1]
> <Type>disk</Type>
> ...
> <rasd:StorageId>@SD_UUID@</rasd:StorageId>
> <rasd:StoragePoolId>00000000-0000-0000-0000-000000000000</rasd:StoragePoolId>

What should go in there?

Comment 8 Richard W.M. Jones 2018-08-07 10:57:34 UTC
The first fix is upstream in commit
389e165519c33b5234db50ea26dcb267321ee152.

I haven't verified if it's a correct fix for this bug, although it
looks likely.

Comment 9 mxie@redhat.com 2018-08-08 08:42:19 UTC
Hi rjones,   
   
    I'm sorry, I just found that I made a mistake in comment 0: I forgot to add DC2's cluster name to the v2v command when converting the guest to data2, so the guest was created in DC1 (the default cluster) and has no disk after conversion. However, v2v did not notice that DC1 does not have data2, and the conversion finished without error, which is still a problem.

   If data domains belonging to different DCs have different names, v2v reports an error; for example, converting a guest to iscsi_data without setting the cluster name "ISCSI" fails, see https://bugzilla.redhat.com/show_bug.cgi?id=1600547#c4.

   So if data domains belonging to different DCs have similar names, v2v cannot detect that the cluster name is wrong.

Comment 10 Daniel Erez 2018-08-08 11:01:14 UTC
(In reply to Richard W.M. Jones from comment #7)
> (In reply to Daniel Erez from comment #6)
> > (In reply to Nir Soffer from comment #4)
> > > Daniel, can you review rhv-upload-createvm.py and recommend the proper way
> > > to do 
> > > this?
> > 
> > From the rhv-upload-data2.log[1], it seems that the @SD_UUID@ wasn't
> > replaced with a value. I guess we should just change this line (from
> > rhv-upload-createvm.py):
> > "ovf.replace("@SD_UUID@", sd_uuid)"
> > to
> > "ovf = ovf.replace("@SD_UUID@", sd_uuid)"
> 
> This is a bug, I'll submit a fix soon, but ...
> 
> > Also, shouldn't we include the pool ID as well (in 'rasd:StoragePoolId')?
> > 
> > [1]
> > <Type>disk</Type>
> > ...
> > <rasd:StorageId>@SD_UUID@</rasd:StorageId>
> > <rasd:StoragePoolId>00000000-0000-0000-0000-000000000000</rasd:StoragePoolId>
> 
> What should go in there?

This is the datacenter ID; we can use code similar to what we added to rhv-upload-plugin.py.
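
[A hedged sketch of that suggestion, reusing the storage.name search quoted from rhv-upload-plugin.py in comment 3 and the ovf/sd_uuid variables from rhv-upload-createvm.py. Note that the @POOL_ID@ placeholder is hypothetical; the shipped OVF template hardcodes the all-zero StoragePoolId, so the placeholder would first have to be added there.]

    # Look up the data center (storage pool) that the target storage domain
    # is attached to, as rhv-upload-plugin.py already does.
    data_centers = system_service.data_centers_service().list(
        search='storage.name=%s' % params['output_storage'],
        case_sensitive=True,
    )
    datacenter = data_centers[0]

    # Substitute both IDs into the OVF (keeping the return values).
    ovf = ovf.replace("@SD_UUID@", sd_uuid)
    ovf = ovf.replace("@POOL_ID@", datacenter.id)  # hypothetical placeholder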

Comment 11 Pino Toscano 2018-08-15 14:14:43 UTC
(In reply to Richard W.M. Jones from comment #8)
> The first fix is upstream in commit
> 389e165519c33b5234db50ea26dcb267321ee152.
> 
> I haven't verified if it's a correct fix for this bug, although it
> looks likely.

This was included in libguestfs-1.38.2-11.el7.

Comment 12 Richard W.M. Jones 2018-08-20 11:10:50 UTC
I'm going to move this to 7.7 and also cancel the NEEDINFO.  The
current status seems as if there is a partial fix, but more work
will be needed on the createvm script to make it match the plugin
script.

Comment 13 Richard W.M. Jones 2019-01-28 10:46:49 UTC
Daniel - do you have any suggestions how to fix this?  If it's not fixed in
time for 7.7 then I will move this bug to RHEL 8 backlog.

Comment 14 Daniel Erez 2019-01-28 13:22:36 UTC
(In reply to Richard W.M. Jones from comment #13)
> Daniel - do you have any suggestions how to fix this?  If it's not fixed in
> time for 7.7 then I will move this bug to RHEL 8 backlog.

I think we should add validation for the specified cluster in rhv-upload-precheck.py.

Suggested validation flow (a consolidated sketch of the whole flow appears at the end of this comment):

1. We can get the relevant DC according to the specified storage name:

    data_centers = system_service.data_centers_service().list(
        search='storage.name=%s' % storage_name,
        case_sensitive=True,
    )
    if len(data_centers) == 0:
        # The storage domain is not attached to a datacenter
        # (shouldn't happen, would fail on disk creation).
        debug("storage domain (%s) is not attached to a DC" % storage_name)
        return None

    datacenter = data_centers[0]

2. If a cluster name is specified, we can search for it:

    clusters = system_service.clusters_service().list(
        search='datacenter.name=%s AND name=%s' % (datacenter.name, cluster_name),
        case_sensitive=True,
    )
    if len(clusters) == 0:
       # error...
       return None
    cluster = clusters[0]  ## pass cluster to rhv-upload-createvm.py

3. If no cluster was specified, we can simply select one from the DC:

    clusters = system_service.clusters_service().list(
        search='datacenter.name=%s' % datacenter.name,
        case_sensitive=True,
    )
    if len(clusters) == 0:
       # error...
       return None
    cluster = clusters[0]  ## pass cluster to rhv-upload-createvm.py


@Richard - what do you think?
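
[Taken together, a consolidated sketch of the flow above might look like the following. This is not the shipped code; it assumes the ovirtsdk4 connection and the params dict that the other rhv-upload-* scripts already receive, and the 'rhv_cluster' key used for the optional -oo rhv-cluster value is an assumption.]

    system_service = connection.system_service()
    storage_name = params['output_storage']
    cluster_name = params.get('rhv_cluster')   # assumed key name

    # Find the data center the target storage domain is attached to.
    data_centers = system_service.data_centers_service().list(
        search='storage.name=%s' % storage_name,
        case_sensitive=True,
    )
    if len(data_centers) == 0:
        raise RuntimeError("The storage domain '%s' is not attached to a DC"
                           % storage_name)
    datacenter = data_centers[0]

    clusters_service = system_service.clusters_service()
    if cluster_name:
        # Verify the requested cluster belongs to the same DC as the storage domain.
        clusters = clusters_service.list(
            search='datacenter.name=%s AND name=%s' % (datacenter.name, cluster_name),
            case_sensitive=True,
        )
        if len(clusters) == 0:
            raise RuntimeError("The cluster '%s' is not part of the DC '%s'"
                               % (cluster_name, datacenter.name))
    else:
        # No cluster requested: pick any cluster from the storage domain's DC.
        clusters = clusters_service.list(
            search='datacenter.name=%s' % datacenter.name,
            case_sensitive=True,
        )
        if len(clusters) == 0:
            raise RuntimeError("No cluster found in DC '%s'" % datacenter.name)

    cluster = clusters[0]  # would be passed on to rhv-upload-createvm.py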

Comment 17 mxie@redhat.com 2019-08-29 10:40:58 UTC
Reproduce the bug with rhel8 v2v builds:
virt-v2v-1.40.2-13.module+el8.1.0+3975+96069438.x86_64
libguestfs-1.40.2-13.module+el8.1.0+3975+96069438.x86_64
libvirt-client-5.6.0-2.module+el8.1.0+4015+63576633.x86_64
qemu-kvm-4.1.0-5.module+el8.1.0+4076+b5e41ebc.x86_64
nbdkit-1.12.5-1.module+el8.1.0+3868+35f94834.x86_64
kernel-4.18.0-135.el8.x86_64

Steps to Reproduce:
1. Set up the environment as below on RHV 4.3:

Datacenter name : DC1
Cluster name: Default
Host name: host1
Data domain name: data1

Datacenter name : DC2
Cluster name: Default1
Host name : host2
Data domain name: data2


2. Convert a guest from OVA to data2 with virt-v2v on a standalone conversion server using rhv-upload, without setting the rhv-cluster name on the command line; the conversion finishes without error.

# virt-v2v -i ova esx6_7-rhel7.7-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data2 -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw -oa preallocated -b ovirtmgmt
[   0.5] Opening the source -i ova esx6_7-rhel7.7-x86_64
[   9.1] Creating an overlay to protect the source from being modified
[   9.2] Opening the overlay
[  13.5] Inspecting the overlay
[  38.2] Checking for sufficient free disk space in the guest
[  38.2] Estimating space required on target for each disk
[  38.2] Converting Red Hat Enterprise Linux Server 7.7 Beta (Maipo) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el7’ is missing from 
the virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can 
happen if you are using the version of virtio-win which contains just the 
virtio drivers.  In this case only virtio drivers can be installed in the 
guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 185.1] Mapping filesystem data to avoid copying unused and blank areas
[ 185.9] Closing the overlay
[ 186.0] Assigning disks to buses
[ 186.0] Checking if the guest needs BIOS or UEFI to boot
[ 186.0] Initializing the target -o rhv-upload -oa preallocated -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os data2
[ 188.4] Copying disk 1/2 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.R6i5fM/nbdkit0.sock", "file.export": "/" } (raw)
    (100.00/100%)
[2264.5] Copying disk 2/2 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.R6i5fM/nbdkit1.sock", "file.export": "/" } (raw)
    (100.00/100%)
[2651.3] Creating output metadata
[2676.0] Finishing off

3. Check the guest on RHV after the conversion finishes: the guest has no disk.

Expected results:
  The rhv-upload conversion should fail and report an error such as "the cluster 'Default' is wrong" or "cluster 'Default' doesn't have data domain 'data2'"

Comment 18 mxie@redhat.com 2019-08-30 08:36:04 UTC
Adding another scenario to comment 17.

1. Convert a guest from VMware to data1 with virt-v2v on a standalone conversion server using rhv-upload with a wrong cluster name; the conversion finishes without error, but the guest also has no disk after conversion.

# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA esx6.7-ubuntu18.04LTS-x86_64  -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os data1 -oo rhv-cluster=Default1 -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw -ip /tmp/passwd
[   0.5] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-ubuntu18.04LTS-x86_64 -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   2.1] Creating an overlay to protect the source from being modified
[   3.8] Opening the overlay
[   9.8] Inspecting the overlay
[  12.1] Checking for sufficient free disk space in the guest
[  12.1] Estimating space required on target for each disk
[  12.1] Converting Ubuntu 18.04.2 LTS to run on KVM
virt-v2v: warning: could not determine a way to update the configuration of 
Grub2
virt-v2v: warning: don't know how to install guest tools on ubuntu-18
virt-v2v: This guest has virtio drivers installed.
[ 288.5] Mapping filesystem data to avoid copying unused and blank areas
[ 289.9] Closing the overlay
[ 290.0] Assigning disks to buses
[ 290.0] Checking if the guest needs BIOS or UEFI to boot
[ 290.0] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os data1
[ 291.3] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.ZrQFbo/nbdkit0.sock", "file.export": "/" } (raw)
    (100.00/100%)
[5629.2] Creating output metadata
[5646.3] Finishing off

Comment 21 mxie@redhat.com 2019-09-04 08:59:49 UTC
I found that the v2v conversion does not fail even when the data domain name used on the command line is not similar to the other data domain names and a wrong cluster name is used; this scenario failed before, see https://bugzilla.redhat.com/show_bug.cgi?id=1600547#c4. So the bug is not related to the data domain name.


Packages version:
virt-v2v-1.40.2-13.module+el8.1.0+3975+96069438.x86_64
libguestfs-1.40.2-13.module+el8.1.0+3975+96069438.x86_64
nbdkit-1.12.5-1.module+el8.1.0+3868+35f94834.x86_64
libvirt-client-5.6.0-2.module+el8.1.0+4015+63576633.x86_64
qemu-kvm-4.1.0-5.module+el8.1.0+4076+b5e41ebc.x86_64



Steps:
1. Set up the environment as below on RHV 4.3:

Datacenter name : NFS
Cluster name: NFS
Host name: NFS
Data domain name: nfs_data

Datacenter name : ISCSI
Cluster name: ISCSI
Host name : ISCSI
Data domain name: iscsi_data

2. Convert a guest to iscsi_data with rhv-upload, setting a wrong cluster name on the v2v command line
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-win8.1-x86_64  -o rhv-upload -oc https://hp-dl360eg8-03.lab.eng.pek2.redhat.com/ovirt-engine/api -os iscsi_data -op /tmp/rhvpasswd -oo rhv-cafile=/root/ca.pem  -oo rhv-direct -of raw --password-file /tmp/passwd -b ovirtmgmt -oa preallocated -oo rhv-cluster=default1
[   0.5] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 esx6.7-win8.1-x86_64
[   2.5] Creating an overlay to protect the source from being modified
[   3.0] Opening the overlay
[  20.9] Inspecting the overlay
[ 161.8] Checking for sufficient free disk space in the guest
[ 161.8] Estimating space required on target for each disk
[ 161.8] Converting Windows 8.1 Enterprise to run on KVM
virt-v2v: warning: /usr/share/virt-tools/pnp_wait.exe is missing.  
Firstboot scripts may conflict with PnP.
virt-v2v: warning: there is no QXL driver for this version of Windows (6.3 
x86_64).  virt-v2v looks for this driver in 
/usr/share/virtio-win/virtio-win.iso

The guest will be configured to use a basic VGA display driver.
virt-v2v: This guest has virtio drivers installed.
[ 196.2] Mapping filesystem data to avoid copying unused and blank areas
[ 198.6] Closing the overlay
[ 198.9] Assigning disks to buses
[ 198.9] Checking if the guest needs BIOS or UEFI to boot
[ 198.9] Initializing the target -o rhv-upload -oa preallocated -oc https://hp-dl360eg8-03.lab.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os iscsi_data
[ 200.4] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.aAT3Ee/nbdkit0.sock", "file.export": "/" } (raw)
^C  (34.24/100%)


Additional info:
The problem can also be reproduced with these builds:
virt-v2v-1.40.2-6.el7.x86_64
libguestfs-1.40.2-6.el7.x86_64
nbdkit-1.8.0-1.el7.x86_64
libvirt-4.5.0-23.el7.x86_64
qemu-kvm-rhev-2.12.0-33.el7_7.1.x86_64

Comment 24 mxie@redhat.com 2019-09-12 09:43:43 UTC
Created attachment 1614416 [details]
sceano1-wrong-cluster-name.log

Comment 25 mxie@redhat.com 2019-09-12 09:44:28 UTC
Created attachment 1614417 [details]
sceario4-wrong-data-domain-during-copying.log

Comment 29 Pino Toscano 2019-09-16 17:16:23 UTC
Sent a patch series upstream to fix the resources lookup issues:
https://www.redhat.com/archives/libguestfs/2019-September/msg00118.html
(for this bug, only the first 4 patches are relevant).

In addition, the existing commit 05e559549dab75f17e147f4a4eafbac868a7aa5d may be needed when backporting the above series.

Comment 31 Pino Toscano 2019-09-17 13:21:39 UTC
Fixed upstream with commits:
6499fdc199790619745eee28fcae3421c32c4735
cc6e2a7f9ea53258c2edb758e3ec9beb7baa1fc6
c49aa4fe01aac82d4776dd2a3524ce16e6deed06
2b39c27b7f1e72f3a3bf3a616e4576af691beb88
which are in libguestfs >= 1.41.5.

Comment 33 mxie@redhat.com 2019-12-05 08:54:15 UTC
Verify the bug with builds:
virt-v2v-1.40.2-15.module+el8.1.1+4955+f0b25565.x86_64
libguestfs-1.40.2-15.module+el8.1.1+4955+f0b25565.x86_64
libvirt-5.6.0-9.module+el8.1.1+4955+f0b25565.x86_64
qemu-kvm-4.1.0-17.module+el8.1.1+5019+2d64ad78.x86_64
kernel-4.18.0-147.el8.x86_64
nbdkit-1.12.5-2.module+el8.1.1+4904+0f013407.x86_64

Steps:
1. Set up the environment as below on RHV 4.3:

Datacenter name : DC1
Cluster name: Default1
Host name: host1
Data domain name: data1

Datacenter name : DC2
Cluster name: Default2
Host name : host2
Data domain name: data2


2. Convert a guest from VMware to RHV (rhv-upload) with virt-v2v, setting a wrong rhv-cluster name

2.1 # virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA  -o rhv-upload -oo rhv-cafile=/home/ca.pem -oo rhv-direct -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -b ovirtmgmt --password-file /home/passwd -of raw  esx6.7-rhel8.1-x86_64 -oo rhv-cluster=Default2 -os data1 
Traceback (most recent call last):
  File "/var/tmp/v2v.2T0cyw/rhv-upload-precheck.py", line 96, in <module>
    params['output_storage']))
RuntimeError: The cluster ‘Default2’ is not part of the DC ‘DC1’, where the storage domain ‘data1’ is
virt-v2v: error: failed server prechecks, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


2.2 # virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA  -o rhv-upload -oo rhv-cafile=/home/ca.pem -oo rhv-direct -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -b ovirtmgmt --password-file /home/passwd -of raw  esx6.7-rhel8.1-x86_64 -oo rhv-cluster=Default1 -os data2 
Traceback (most recent call last):
  File "/var/tmp/v2v.etw7qZ/rhv-upload-precheck.py", line 96, in <module>
    params['output_storage']))
RuntimeError: The cluster ‘Default1’ is not part of the DC ‘DC2’, where the storage domain ‘data2’ is
virt-v2v: error: failed server prechecks, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]



3. Convert a guest from VMware to RHV (rhv-upload) with virt-v2v using a wrong data domain name
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA  -o rhv-upload -oo rhv-cafile=/home/ca.pem -oo rhv-direct -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -b ovirtmgmt --password-file /home/passwd -of raw  esx6.7-rhel8.1-x86_64 -oo rhv-cluster=Default1 -os nfs_data 
Traceback (most recent call last):
  File "/var/tmp/v2v.AUpWqF/rhv-upload-precheck.py", line 77, in <module>
    (params['output_storage']))
RuntimeError: The storage domain ‘nfs_data’ does not exist
virt-v2v: error: failed server prechecks, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


4. Convert a guest from VMware to RHV (rhv-upload) with virt-v2v using a wrong data domain name that contains a special character
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA  -o rhv-upload -oo rhv-cafile=/home/ca.pem -oo rhv-direct -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -b ovirtmgmt --password-file /home/passwd -of raw  esx6.7-rhel8.1-x86_64 -os data*
Traceback (most recent call last):
  File "/var/tmp/v2v.mmo6v8/rhv-upload-precheck.py", line 87, in <module>
    storage_domain = [sd for sd in storage_domains if sd.name == params['output_storage']][0]
IndexError: list index out of range
virt-v2v: error: failed server prechecks, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


5. Convert a guest from VMware to RHV (rhv-upload) with virt-v2v using a wrong cluster name that contains a special character
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA  -o rhv-upload -oo rhv-cafile=/home/ca.pem -oo rhv-direct -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -b ovirtmgmt --password-file /home/passwd -of raw  esx6.7-rhel8.1-x86_64 -os data1 -oo rhv-cluster=Default*
Traceback (most recent call last):
  File "/var/tmp/v2v.cuBKm9/rhv-upload-precheck.py", line 96, in <module>
    params['output_storage']))
RuntimeError: The cluster ‘Default*’ is not part of the DC ‘DC1’, where the storage domain ‘data1’ is
virt-v2v: error: failed server prechecks, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


6. Convert a guest from VMware to RHV (rhv-upload) with virt-v2v using the correct data domain name, but rename the data domain on the RHV side while the disk is being copied; the conversion finishes without error and the checkpoints of the guest pass.
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA  -o rhv-upload -oo rhv-cafile=/home/ca.pem -oo rhv-direct -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -b ovirtmgmt --password-file /home/passwd -of raw  esx6.7-rhel8.1-x86_64 -os data1 -oo rhv-cluster=Default1
[   0.9] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel8.1-x86_64 -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   2.9] Creating an overlay to protect the source from being modified
[   3.5] Opening the overlay
[  12.9] Inspecting the overlay
[  31.1] Checking for sufficient free disk space in the guest
[  31.1] Estimating space required on target for each disk
[  31.1] Converting Red Hat Enterprise Linux 8.1 Beta (Ootpa) to run on KVM
virt-v2v: warning: don't know how to install guest tools on rhel-8
virt-v2v: This guest has virtio drivers installed.
[ 169.1] Mapping filesystem data to avoid copying unused and blank areas
[ 170.3] Closing the overlay
[ 170.4] Assigning disks to buses
[ 170.4] Checking if the guest needs BIOS or UEFI to boot
[ 170.4] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os data1
[ 171.7] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.KGgXdv/nbdkit0.sock", "file.export": "/" } (raw)
    (100.00/100%)
[ 916.5] Creating output metadata
[ 918.2] Finishing off



7. Convert a guest from VMware to RHV (rhv-upload) with virt-v2v using the correct RHV cluster and data domain names; the conversion finishes without error and the checkpoints of the guest pass.
#  virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA  -o rhv-upload -oo rhv-cafile=/home/ca.pem -oo rhv-direct -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -b ovirtmgmt --password-file /home/passwd -of raw  esx6.7-rhel7.6-x86_64 -os data2 -oo rhv-cluster=Default2
[   0.9] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.6-x86_64 -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   2.5] Creating an overlay to protect the source from being modified
[   3.1] Opening the overlay
[   8.1] Inspecting the overlay
[  34.3] Checking for sufficient free disk space in the guest
[  34.3] Estimating space required on target for each disk
[  34.3] Converting Red Hat Enterprise Linux Server 7.6 (Maipo) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el7’ is missing from 
the virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can 
happen if you are using the version of virtio-win which contains just the 
virtio drivers.  In this case only virtio drivers can be installed in the 
guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 196.3] Mapping filesystem data to avoid copying unused and blank areas
[ 197.2] Closing the overlay
[ 197.3] Assigning disks to buses
[ 197.3] Checking if the guest needs BIOS or UEFI to boot
[ 197.3] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os data2
[ 198.6] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.ptq7i8/nbdkit0.sock", "file.export": "/" } (raw)
    (100.00/100%)
[1298.4] Creating output metadata
[1300.1] Finishing off


Hi Pino,
   just one question: please help confirm whether the result of step 4 is a bug. Thanks.

Comment 34 Pino Toscano 2019-12-05 09:43:40 UTC
(In reply to mxie from comment #33)
> 4.Convert a guest from VMware to RHV(rhv-upload) by v2v using wrong data
> domain name which has specific character
> # virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it
> vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io 
> vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA 
> -o rhv-upload -oo rhv-cafile=/home/ca.pem -oo rhv-direct -oc
> https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op
> /home/rhvpasswd  -b ovirtmgmt --password-file /home/passwd -of raw 
> esx6.7-rhel8.1-x86_64 -os data*

The potential problem I see here is that data* is expanded by the shell before virt-v2v is actually executed, so depending on whether any files or directories starting with "data" exist in the current directory, virt-v2v may not receive the literal string "data*".
Can you please try quoting that parameter, like:
  $ virt-v2v ... -os 'data*'
?

Comment 35 mxie@redhat.com 2019-12-05 10:03:25 UTC
> The potential problem I see here is that data* is expanded by the shell
> before the actual virt-v2v execution, and so if you have no files/directory
> starting with "data" then it will expanded as empty string.
> Can you please try quoting that parameter, like:
>   $ virt-v2v ... -os 'data*'
> ?

Pino is right; the error message in step 4 of comment 33 is correct after quoting the parameter 'data*', so I am moving the bug from ON_QA to VERIFIED per comments 33 through 35.

# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA  -o rhv-upload -oo rhv-cafile=/home/ca.pem -oo rhv-direct -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -b ovirtmgmt --password-file /home/passwd -of raw  esx6.7-rhel8.1-x86_64 -os ’data*‘
Traceback (most recent call last):
  File "/var/tmp/v2v.SsuwgG/rhv-upload-precheck.py", line 77, in <module>
    (params['output_storage']))
RuntimeError: The storage domain ‘’data*‘’ does not exist
virt-v2v: error: failed server prechecks, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

Comment 37 errata-xmlrpc 2020-02-04 18:28:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0404

