Bug 1750742 - [v2v][RHV][Scale] v2v Migration to RHV failed on timed out waiting for transfer to finalize
Summary: [v2v][RHV][Scale] v2v Migration to RHV failed on timed out waiting for transf...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: libguestfs
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.1
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
URL:
Whiteboard: V2V
Depends On: 1680361
Blocks:
 
Reported: 2019-09-10 12:09 UTC by Pino Toscano
Modified: 2020-06-22 04:27 UTC
CC List: 22 users

Fixed In Version: libguestfs-1.40.2-14.module+el8.1.0+4230+0b6e3259
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1680361
Environment:
Last Closed: 2019-11-06 07:19:21 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System: Red Hat Product Errata RHBA-2019:3723
Last Updated: 2019-11-06 07:19:50 UTC

Description Pino Toscano 2019-09-10 12:09:06 UTC
+++ This bug was initially created as a clone of Bug #1680361 +++

Description of problem:
A v2v migration from VMware to RHV-4.3 failed with this nbdkit error in the v2v-import log:

nbdkit: python[1]: error: /var/tmp/v2v.T1z7Db/rhv-upload-plugin.py: close: error: ['Traceback (most recent call last):\n', '  File "/var/tmp/v2v.T1z7Db/rhv-upload-plugin.py", line 558, in close\n', 'RuntimeError: timed out waiting for transfer to finalize\n']

The migration is of a single RHEL7 VM, from iSCSI storage to FC storage.

The RHV-4.3 is a scale environment:
10 Data Centers
10 Clusters
309 Hosts
10 Data Storage Domains
4903 Virtual Machines

Both VMware & RHV are located in the US (RDU).

Version-Release number of selected component (if applicable):
* CFME-5.10.0.33.20190129203322_85a1e4e
* RHV-4.3.0.4-0.1.el7
* Conversion host:
    OS Version: RHEL - 7.6 - 4.el7_6
    OS Description: Red Hat Enterprise Linux Server 7.6 (Maipo)
    Kernel Version: 3.10.0 - 957.10.1.el7.x86_64
    KVM Version: 2.12.0 - 21.el7
    LIBVIRT Version: libvirt-4.5.0-10.el7_6.4
    VDSM Version: vdsm-4.30.9-1.el7ev

* v2v_vddk_package_name: "VMware-vix-disklib-6.7.1-10362358.x86_64.tar.gz"

How reproducible:
Tried twice, and it failed the same way both times.

Additional info:
* The virtual copying rate: 1546.1 M bits/sec.

For comparison, another v2v migration from the same VMware template/storage
to a small-scale RHV-4.2, where the migration was successful,
had a virtual copying rate of 1489.7 M bits/sec.

--- Additional comment from Ilanit Stein on 2019-02-24 19:05:48 CET ---

As a comparison, I tested CFME-5.10.0.33 with RHV-4.3.1.1-0.1.el7 (small scale).
This migration passed, so the problem does not seem to reproduce on a small-scale RHV-4.3.

--- Additional comment from Ilanit Stein on 2019-02-26 16:55:51 CET ---

This might be related to:
bug 1668720 - [RHV] CFME fail to refresh\discover RHV-4.3.
But note that although bug 1668720 also exists on CFME-5.10.0.33,
v2v migration from VMware to RHV was successful on a small-scale RHV-4.3.

--- Additional comment from Ilanit Stein on 2019-02-27 12:28:14 CET ---

I checked several RHV-4.3 environments (with & without OVN) that are not Hosted Engine - and these DO NOT have the refresh failure.
The problem seems specific to the RHV-4.3 Hosted Engine configuration.

--- Additional comment from Ilanit Stein on 2019-03-07 16:55:09 CET ---

I managed to reproduce this bug using the same scale RHV & destination host.

--- Additional comment from Ilanit Stein on 2019-03-11 16:23:30 CET ---

I managed to reproduce this bug on the same RHV and cluster, but on another host.

--- Additional comment from Richard W.M. Jones on 2019-03-12 09:17:23 CET ---

The timeout happens in this code:

https://github.com/libguestfs/libguestfs/blob/89b5dabf8d1797e3875d949b6e2a903a5703be5c/v2v/rhv-upload-plugin.py#L518-L533

We wait up to 5 minutes for the "transfer" to "finalize".  I'm not too familiar with the
oVirt code so I don't know what it's actually doing here.

Michal: Do you know who we could ask about this issue?

--- Additional comment from Michal Skrivanek on 2019-03-12 14:58:38 CET ---

Ilanit, I suppose we also need engine.log for starters.
Tal, can anyone look at the disk upload finalization?

--- Additional comment from Nir Soffer on 2019-03-12 21:33:28 CET ---

(In reply to Richard W.M. Jones from comment #11)
> The timeout happens in this code:
> 
> https://github.com/libguestfs/libguestfs/blob/
> 89b5dabf8d1797e3875d949b6e2a903a5703be5c/v2v/rhv-upload-plugin.py#L518-L533

        transfer_service.finalize()

        # Wait until the transfer disk job is completed since
        # only then we can be sure the disk is unlocked.  As this
        # code is not very clear, what's happening is that we are
        # waiting for the transfer object to cease to exist, which
        # falls through to the exception case and then we can
        # continue.
        endt = time.time() + timeout
        try:
            while True:
                time.sleep(1)
                tmp = transfer_service.get()
                if time.time() > endt:
                    raise RuntimeError("timed out waiting for transfer "
                                       "to finalize")
        except sdk.NotFoundError:
            pass

This does not look right. I think you should check the transfer.phase.

In the sdk examples we call this without waiting or checking the state.
(This looks wrong)
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk.py

Finalizing an upload validates that the disk properties match the RHV
metadata, deactivates the volume, and removes the ticket from the imageio server.

ImageTransfer phases are documented here:
http://ovirt.github.io/ovirt-engine-sdk/master/types.m.html#ovirtsdk4.types.ImageTransferPhase

Unfortunately that page contains only the phase names, with no descriptions.

Based on common sense, I think the expected phases are:

    FINALIZING_SUCCESS -> FINISHED_SUCCESS

If finalizing failed (e.g. the uploaded image format is invalid), I think we should
get:

    FINALIZING_SUCCESS -> FINISHED_FAILURE

Any other phase does not make sense and should probably be treated as an error.

Daniel, can you confirm that these are the expected phases, and give an estimate of
the maximum time a client should wait for finalizing?

--- Additional comment from Nir Soffer on 2019-03-12 22:12:49 CET ---

Ilanit, the vdsm.log from attachment 1542915 [details] seems to be the wrong log.
Maybe it is from the wrong host or maybe the right log was rotated.

In v2v-import-20190311T113412-22520.log from attachment 1542915 [details] we can
see the transfer id:

    transfer.id = '19a4662a-a635-4b4a-afd8-e103cdb6780c'

    grep 19a4662a-a635-4b4a-afd8-e103cdb6780c vdsm.log.1:
    (nothing)

Please use

    xzgrep 19a4662a-a635-4b4a-afd8-e103cdb6780c /var/log/vdsm/vdsm.log*

to find the right log.

However, for this case we don't need the vdsm log, because we see very clearly
in engine log attachment 1543220 [details] that the transfer was successful.


1. Finalizing started:

2019-03-11 11:42:03,491Z INFO  [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (default task-65) [b27de17c-3a7c-4ac7-942e-de22900e0e33] Updating image transfer 19a4662a-a635-4b4a-afd8-e103cdb6780c (image a6bf5c5a-6c2b-4f48-9112-e02d28f125fb) phase to Finalizing Success

2. Engine completed verification of the uploaded image:

2019-03-11 11:42:18,475Z INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.VerifyUntrustedVolumeVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [c59ada05-1b39-40ca-848b-9857c096acfc] FINISH, VerifyUntrustedVolumeVDSCommand, return: StatusReturn:{status='Status [code=0, message=Done]'}, log id: 33d3c110

3. Engine marked the image as valid:

2019-03-11 11:42:19,266Z INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeLegalityVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [c59ada05-1b39-40ca-848b-9857c096acfc] FINISH, SetVolumeLegalityVDSCommand, return: , log id: 5d22296b

4. Upload became successful:

2019-03-11 11:42:19,298Z INFO  [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (EE-ManagedThreadFactory-engineScheduled-Thread-23) [c59ada05-1b39-40ca-848b-9857c096acfc] Updating image transfer 19a4662a-a635-4b4a-afd8-e103cdb6780c (image a6bf5c5a-6c2b-4f48-9112-e02d28f125fb) phase to Finished Success

Not sure why we see this log again about 20 seconds later...

2019-03-11 11:42:32,580Z INFO  [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [c59ada05-1b39-40ca-848b-9857c096acfc] Updating image transfer 19a4662a-a635-4b4a-afd8-e103cdb6780c (image a6bf5c5a-6c2b-4f48-9112-e02d28f125fb) phase to Finished Success

And again...

2019-03-11 11:42:32,908Z INFO  [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-78) [c59ada05-1b39-40ca-848b-9857c096acfc] Successfully transferred disk '00000000-0000-0000-0000-000000000000' (command id '19a4662a-a635-4b4a-afd8-e103cdb6780c')

This is more than 5 minutes after finalize was called - the status changed to Canceled.
This looks like a bug.

2019-03-11 11:47:06,280Z INFO  [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (default task-65) [12088a1d-a4ad-4f30-8928-42e04611893f] Updating image transfer 19a4662a-a635-4b4a-afd8-e103cdb6780c (image a6bf5c5a-6c2b-4f48-9112-e02d28f125fb) phase to Cancelled

But this shows that if the rhv-upload plugin had been checking the transfer
phase, it would have succeeded.


Looking in the API model:
http://ovirt.github.io/ovirt-engine-api-model/master/#services/image_transfer

We document the expected phase clearly:

    When finishing the transfer, the user should call finalize. This will make
    the final adjustments and verifications for finishing the transfer process.

    For example:

        transfer_service.finalize()

    In case of an error, the transfer’s phase will be changed to finished_failure,
    and the disk’s status will be changed to Illegal. Otherwise it will be changed
    to finished_success, and the disk will be ready to be used. In both cases, the
    transfer entity will be removed shortly after.

But there is no example code.

So I think we have several issues:

- v2v: Fix waiting after finalize
- ovirt-engine: ensure that the phase does not change after FINISHED_SUCCESS
- docs: Add example code for waiting for finalize
- sdk/examples: Wait for finalize in upload_*.py, download_*.py

Richard, Daniel, what do you think?

--- Additional comment from Richard W.M. Jones on 2019-03-13 10:35:21 CET ---

Example code is always good.  Can you comment on what specifically we're doing wrong here?
https://github.com/libguestfs/libguestfs/blob/89b5dabf8d1797e3875d949b6e2a903a5703be5c/v2v/rhv-upload-plugin.py#L518-L533

--- Additional comment from Daniel Erez on 2019-03-13 12:25:41 CET ---

> 
> And again...
> 
> 2019-03-11 11:42:32,908Z INFO 
> [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
> (EE-ManagedThreadFactory-engineScheduled-Thread-78)
> [c59ada05-1b39-40ca-848b-9857c096acfc] Successfully transferred disk
> '00000000-0000-0000-0000-000000000000' (command id
> '19a4662a-a635-4b4a-afd8-e103cdb6780c')

I think the problem here is that the transfer wasn't cleared from the db, which is an issue we've encountered in older versions
(the 'Successfully transferred disk' message should contain the disk's guid, not an empty one '00000000-0000-0000-0000-000000000000').
Is it reproducible in recent builds?

Anyway, I think that fixing the waiting after finalize in v2v could indeed solve the issue.
I suggest polling the disk's status instead, as we do in upload_disk.py (line 161) [*].
On status 'OK', we can continue the operation. Since finalize executes teardown image in vdsm,
I'm not sure what the timeout estimate should be, but a couple of minutes should be good enough
(or maybe a few minutes to keep it safe on a scale env).

[*] https://github.com/oVirt/ovirt-engine-sdk/blob/fd728f8286c57967c88e275edd68643a1f71c173/sdk/examples/upload_disk.py

> 
> This is more than 5 minutes after finalize was called - the status changed
> to Canceled.
> This looks like a bug.
>
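
A minimal, untested sketch of the disk-status polling suggested above; it assumes an ovirtsdk4 disk_service for the uploaded disk, and the helper name and timeout are illustrative, not the plugin's actual code:

    import time
    import ovirtsdk4.types as types

    def wait_for_disk_ok(disk_service, timeout=300):
        # Poll the engine until the uploaded disk is reported as OK,
        # i.e. unlocked and ready to use.
        deadline = time.time() + timeout
        while time.time() < deadline:
            disk = disk_service.get()
            if disk.status == types.DiskStatus.OK:
                return
            time.sleep(1)
        raise RuntimeError("timed out waiting for disk to become unlocked")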

--- Additional comment from Nir Soffer on 2019-03-13 12:44:27 CET ---

(In reply to Daniel Erez from comment #17)
> I suggest to poll the disk's status instead, as we do in upload_disk.py
> (line 161) [*]
> On status 'OK', we can continue the operation.

But if we poll on the disk, how do we detect an error in finalize?

I think it makes sense to poll the transfer.phase, waiting for FINISHED_SUCCESS
and failing on FINISHED_FAILURE.

I hope that when transfer.phase == FINISHED_SUCCESS the disk is already
unlocked, so we don't need to wait for the disk separately.

--- Additional comment from Richard W.M. Jones on 2019-03-13 12:52:39 CET ---

OK, can someone suggest a change to the Python code to make this work?
I've no idea about how any of this stuff works.

--- Additional comment from Nir Soffer on 2019-03-13 13:03:49 CET ---

(In reply to Richard W.M. Jones from comment #16)
> Example code is always good.  Can you comment on what specifically we're
> doing wrong here?
> https://github.com/libguestfs/libguestfs/blob/
> 89b5dabf8d1797e3875d949b6e2a903a5703be5c/v2v/rhv-upload-plugin.py#L518-L533

I think we need to do (untested):

    start = time.time()

    while True:
        time.sleep(1)
        
        transfer = transfer_service.get()

        if transfer.phase == types.ImageTransferPhase.FINISHED_SUCCESS:
            debug("finalized after %s seconds", time.time() - start)
            break

        if transfer.phase == types.ImageTransferPhase.FINALIZING_SUCCESS:
            if time.time() > start + timeout:
                raise RuntimeError("timed out waiting for transfer "
                                   "to finalize")
            continue

        raise RuntimeError("Unexpected transfer phase while finalizing "
                           "upload %r" % transfer.phase)

--- Additional comment from Tal Nisan on 2019-03-13 17:47:59 CET ---

(In reply to Michal Skrivanek from comment #12)
> Ilanit, I suppose we also need engine.log for starters.
> Tal, can anyone look at the disk upload finalization?

Daniel, can you please have a look?

--- Additional comment from Daniel Erez on 2019-03-13 18:11:21 CET ---

(In reply to Nir Soffer from comment #21)
> (In reply to Richard W.M. Jones from comment #16)
> > Example code is always good.  Can you comment on what specifically we're
> > doing wrong here?
> > https://github.com/libguestfs/libguestfs/blob/
> > 89b5dabf8d1797e3875d949b6e2a903a5703be5c/v2v/rhv-upload-plugin.py#L518-L533
> 
> I think we need to do (untested):
> 
>     start = time.time()
> 
>     while True:
>         time.sleep(1)
>         
>         transfer = transfer_service.get()

'transfer' could be None at this stage (in case the transfer has already been completed by the time we reach here).
So we just need to add something like this:

          if transfer is None:
              disk = disk_service.get()
              if disk.status == types.DiskStatus.OK:
                  break  # disk is unlocked, we can proceed

Makes sense?

> 
>         if transfer.phase == types.ImageTransferPhase.FINISHED_SUCCESS:
>             debug("finalized after %s seconds", time.time() - start)
>             break
> 
>         if transfer.phase == types.ImageTransferPhase.FINALIZING_SUCCESS:
>             if time.time() > start + timeout:
>                 raise RuntimeError("timed out waiting for transfer "
>                                    "to finalize")
>             continue
> 
>         raise RuntimeError("Unexpected transfer phase while finalizing "
>                            "upload %r" % transfer.phase)

--- Additional comment from Daniel Erez on 2019-03-17 13:09:30 CET ---

Sent the suggested fix to v2v (not verified): https://www.redhat.com/archives/libguestfs/2019-March/msg00044.html
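
Presumably that fix combines the phase polling from comment 21 with the None check above. A rough, untested sketch of such a combined wait loop, assuming ovirtsdk4 (the helper name, services, and timeout are illustrative):

    import time
    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    def wait_for_transfer_finalize(transfer_service, disk_service, timeout=300):
        start = time.time()
        while True:
            time.sleep(1)
            try:
                transfer = transfer_service.get()
            except sdk.NotFoundError:
                # The transfer object is gone; fall back to the disk status.
                transfer = None
            if transfer is None:
                disk = disk_service.get()
                if disk.status == types.DiskStatus.OK:
                    return  # disk unlocked, finalization succeeded
            elif transfer.phase == types.ImageTransferPhase.FINISHED_SUCCESS:
                return
            elif transfer.phase == types.ImageTransferPhase.FINISHED_FAILURE:
                raise RuntimeError("transfer failed during finalization")
            elif transfer.phase != types.ImageTransferPhase.FINALIZING_SUCCESS:
                raise RuntimeError("unexpected transfer phase while finalizing "
                                   "upload: %r" % transfer.phase)
            if time.time() > start + timeout:
                raise RuntimeError("timed out waiting for transfer to finalize")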

--- Additional comment from Ilanit Stein on 2019-03-17 17:19:57 CET ---

Retested on these versions:

ovirt-engine-4.3.2-0.1.el7.noarch
vdsm-4.30.11-1.el7ev.x86_64
ovirt-ansible-v2v-conversion-host-1.9.2-1.el7ev.noarch
virt-v2v-1.38.2-12.29.lp.el7ev.x86_64
CFME-5.10.1.2

Got the same failure in v2v-import.log:
nbdkit: python[1]: error: /var/tmp/v2v.3jibRB/rhv-upload-plugin.py: close: error: ['Traceback (most recent call last):\n', '  File "/var/tmp/v2v.3jibRB/rhv-upload-plugin.py", line 558, in close\n', 'RuntimeError: timed out waiting for transfer to finalize\n']

--- Additional comment from Ilanit Stein on 2019-08-12 12:58:40 CEST ---

In the RHV conversion host, run:
$ yum downgrade libguestfs libguestfs-tools-c virt-v2v python-libguestfs
which brings in the interim version 1.40.2-5.el7.1.bz1680361.v3.1.x86_64 (based on Daniel Erez's fix).

Using this repo:
http://brew-task-repos.usersys.redhat.com/repos/scratch/rjones/libguestfs/1.40.2/5.el7.1.bz1680361.v3.1/

The error 'RuntimeError: timed out waiting for transfer to finalize\n' no longer appeared.

I managed to run a v2v migration (VMware->RHV, iSCSI->FC, VDDK) for a single VM with a 20GB disk successfully.

However, for single/20 VMs with 100GB disks, I encountered these 2 new bugs:
Bug 1740098 - [v2v][Scale][RHV] Single VM migration failed, but related virt-v2v error is not logged. 
Bug 1740021 - [v2v][Scale][RHV] 20 VMs migration fail on "timed out waiting for disk to become unlocked"

--- Additional comment from Richard W.M. Jones on 2019-09-09 18:01:48 CEST ---

Posted here:
https://www.redhat.com/archives/libguestfs/2019-September/thread.html#00042

--- Additional comment from Richard W.M. Jones on 2019-09-10 12:12:25 CEST ---

Upstream in eeabb3fdc7756887b53106f455a7b54309130637, virt-v2v >= 1.41.5 and >= 1.40.3.

Comment 3 liuzi 2019-09-25 02:50:24 UTC
Verified the bug with builds:
virt-v2v-1.40.2-14.module+el8.1.0+4230+0b6e3259.x86_64
libguestfs-1.40.2-14.module+el8.1.0+4230+0b6e3259.x86_64
libvirt-5.6.0-6.module+el8.1.0+4244+9aa4e6bb.x86_64
nbdkit-1.12.5-1.module+el8.1.0+3868+35f94834.x86_64
VMware-vix-disklib-6.5.2-6195444.x86_64.tar.gz
RHV:4.3.6.5-0.1.el7
vdsm-4.30.30-1.el7ev.x86_64

Steps:
Scenario 1:
1.1 Convert a rhel6 guest with a disk larger than 100GB from a VMware host to RHV's NFS domain on a standalone v2v conversion server with virt-v2v
# virt-v2v -ic esx://root.72.61/?no_verify=1 esx6.0-guest-140G -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=02:D6:DB:9C:F2:01:D5:89:F6:24:BE:4C:E5:B8:30:7E:C8:0E:9D:3B -o rhv-upload -os nfs_data -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -oo rhv-cafile=/home/ca.pem -oo rhv-cluster=Default -oo rhv-verifypeer -oo rhv-direct --password-file /home/esxpasswd -b ovirtmgmt
[   0.2] Opening the source -i libvirt -ic esx://root.72.61/?no_verify=1 esx6.0-guest-140G -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=02:D6:DB:9C:F2:01:D5:89:F6:24:BE:4C:E5:B8:30:7E:C8:0E:9D:3B
[   1.7] Creating an overlay to protect the source from being modified
[   6.2] Opening the overlay
[  15.5] Inspecting the overlay
[  30.2] Checking for sufficient free disk space in the guest
[  30.2] Estimating space required on target for each disk
[  30.2] Converting Red Hat Enterprise Linux Server release 6.7 Beta (Santiago) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el6’ is missing from 
the virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can 
happen if you are using the version of virtio-win which contains just the 
virtio drivers.  In this case only virtio drivers can be installed in the 
guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 146.3] Mapping filesystem data to avoid copying unused and blank areas
[ 146.7] Closing the overlay
[ 146.9] Assigning disks to buses
[ 146.9] Checking if the guest needs BIOS or UEFI to boot
[ 146.9] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data
[ 148.9] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.ZdSvcs/nbdkit0.sock", "file.export": "/" } (raw)
    (100.00/100%)
[6723.1] Creating output metadata
[6736.5] Finishing off

1.2 Powered on the guest; all guest checkpoints passed.



Scenario 2:
2.1 Convert a rhel6 guest with a disk larger than 100GB from VMware vCenter to RHV's iSCSI data domain on a standalone v2v conversion server with virt-v2v
# virt-v2v -ic esx://root.72.61/?no_verify=1 esx6.0-guest-140G -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=02:D6:DB:9C:F2:01:D5:89:F6:24:BE:4C:E5:B8:30:7E:C8:0E:9D:3B -o rhv-upload -oo rhv-cafile=/tmp/ca.pem -oo rhv-direct -oc https://hp-dl360eg8-03.lab.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os iscsi_data -b ovirtmgmt -oo rhv-cluster=ISCSI --password-file /home/esxpasswd -of raw -oa preallocated
[   0.2] Opening the source -i libvirt -ic esx://root.72.61/?no_verify=1 esx6.0-guest-140G -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=02:D6:DB:9C:F2:01:D5:89:F6:24:BE:4C:E5:B8:30:7E:C8:0E:9D:3B
[   1.7] Creating an overlay to protect the source from being modified
[   6.0] Opening the overlay
[  13.9] Inspecting the overlay
[  30.5] Checking for sufficient free disk space in the guest
[  30.5] Estimating space required on target for each disk
[  30.5] Converting Red Hat Enterprise Linux Server release 6.7 Beta (Santiago) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el6’ is missing from 
the virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can 
happen if you are using the version of virtio-win which contains just the 
virtio drivers.  In this case only virtio drivers can be installed in the 
guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 132.7] Mapping filesystem data to avoid copying unused and blank areas
[ 133.1] Closing the overlay
[ 133.3] Assigning disks to buses
[ 133.3] Checking if the guest needs BIOS or UEFI to boot
[ 133.3] Initializing the target -o rhv-upload -oa preallocated -oc https://hp-dl360eg8-03.lab.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os iscsi_data
[ 134.6] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.LpRXY8/nbdkit0.sock", "file.export": "/" } (raw)
    (100.00/100%)
[5100.9] Creating output metadata
[5115.8] Finishing off

2.2 Powered on the guest; all guest checkpoints passed.

Scenario 3:
3.1 Import a guest with a disk larger than 100GB from VMware via the RHV GUI

Click the import option on the Virtual Machines interface -> input the VMware host info to load the guest -> select the guest to import

3.2 The import finished without error; powered on the guest and all guest checkpoints passed.

Comment 5 errata-xmlrpc 2019-11-06 07:19:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3723

