Bug 1596810 - Transfer fails if local host belongs to another DC
Summary: Transfer fails if local host belongs to another DC
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 7.6
Assignee: Richard W.M. Jones
QA Contact:
URL:
Whiteboard: V2V
Depends On:
Blocks: 1588088
 
Reported: 2018-06-29 18:17 UTC by Nir Soffer
Modified: 2018-10-30 07:46 UTC
CC List: 9 users

Fixed In Version: libguestfs-1.38.2-7.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 07:45:56 UTC
Target Upstream Version:
Embargoed:


Attachments
  v2v log (167.85 KB, text/plain) - 2018-06-29 18:25 UTC, Nir Soffer
  ovirt engine log (60.30 KB, text/plain) - 2018-06-29 18:25 UTC, Nir Soffer


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1596851 0 unspecified CLOSED Transfer fails if local host is in maintenance mode 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHEA-2018:3021 0 None None None 2018-10-30 07:46:59 UTC

Internal Links: 1596851

Description Nir Soffer 2018-06-29 18:17:19 UTC
Description of problem:

Running virt-v2v on an oVirt host that belongs to another DC causes
the upload to fail with a misleading error message.

Here is an example of a failed run:

disk.id = 'ba390f85-2d45-4ea1-8e29-96020c4ba416'
hw_id = '85CFDED0-FD24-4645-AF7C-D47A088AE9E8'
host.id = 'b9c45d66-2067-4cd9-a9f2-d24a6f7a0fd9'
transfer.id = 'c4d1cee1-7117-42e8-b3f9-146a9749fdb2'
nbdkit: error: /home/nsoffer/src/libguestfs/tmp/rhvupload.qvlEaB/rhv-upload-plugin.py: open: error: direct upload to host not supported, requires ovirt-engine >= 4.2 and only works when virt-v2v is run within the oVirt/RHV environment, eg. on an oVirt node.

To reproduce, you need two data centers and two hosts. This is the setup:

dc1
    cluster1
        host1
    storage1
        disk1 <-- uploading to this disk
dc2
    cluster2
        host2 <-- we run virt-v2v here

Assume that only host2 can connect to VMware, or that you have another
reason to run virt-v2v on host2 rather than on host1.

The virt-v2v command is (running from source):

./run virt-v2v \
    -i disk /var/tmp/fedora-27.img \
    -o rhv-upload \
    -oc https://my.engine/ovirt-engine/api \
    -os storage1 \
    -op /var/tmp/password \
    -of raw \
    -oo rhv-cafile=ca.pem \
    -oo rhv-direct=true \
    -oo rhv-cluster=cluster1 \
    --verbose

What happens is:

1. virt-v2v finds the host hardware id: 85CFDED0-FD24-4645-AF7C-D47A088AE9E8
2. virt-v2v looks up a host with this hardware id and finds host2
   (id=b9c45d66-2067-4cd9-a9f2-d24a6f7a0fd9)
3. virt-v2v starts a transfer to disk ba390f85-2d45-4ea1-8e29-96020c4ba416 on
   host b9c45d66-2067-4cd9-a9f2-d24a6f7a0fd9
4. virt-v2v starts polling for the transfer state
5. engine fails to prepare the disk on host b9c45d66-2067-4cd9-a9f2-d24a6f7a0fd9
   with a NullPointerException
6. engine pauses the transfer; transfer.phase is now "paused"
7. virt-v2v detects that the transfer is not in the initializing state, and
   assumes that the transfer is ready
8. virt-v2v finds that transfer.transfer_url is None, and fails with an
   incorrect error message

So we have several issues:
- incorrect host lookup in virt-v2v
- incorrect transfer status polling in virt-v2v
  (checking only transfer.phase != types.ImageTransferPhase.INITIALIZING; see the
  sketch below)
- incorrect example code in ovirt-engine-sdk for polling transfer status (this is
  where the incorrect code in virt-v2v came from)
- likely incorrect transfer status polling in the ovirt ansible modules, based on
  the same wrong code in the ovirt sdk examples
- engine does not fail the transfer - it should check that the host cannot be used
  for the transfer to the disk, since the disk belongs to a storage domain the host
  cannot access
- NullPointerException in engine
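
For reference, steps 7-8 above correspond to a polling pattern roughly like the
following (an illustrative reconstruction, not the exact rhv-upload-plugin.py code,
assuming transfer_service is an image transfer service from ovirt-engine-sdk4):

  import time
  import ovirtsdk4.types as types

  transfer = transfer_service.get()
  # Flawed: any phase other than INITIALIZING is treated as "ready",
  # including the paused phases, so a paused transfer falls through here.
  while transfer.phase == types.ImageTransferPhase.INITIALIZING:
      time.sleep(1)
      transfer = transfer_service.get()
  # transfer.transfer_url is then used without checking for TRANSFERRING,
  # which is why the misleading "direct upload not supported" error appears.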

Expected behavior:
- virt-v2v should check that the host belongs to dc1
- if the local host is not in dc1, virt-v2v should let engine choose a host
  (host=None), as in the sketch below
- the transfer should then be started on host1
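
A minimal sketch of the host=None fallback mentioned above, assuming
ovirt-engine-sdk4 and an existing connection (illustrative only, not the actual
virt-v2v patch; the disk id is the one from the log above):

  import ovirtsdk4.types as types

  transfers_service = connection.system_service().image_transfers_service()
  # Omitting the host lets engine schedule the transfer on a host that can
  # actually access the disk's storage domain (i.e. a host in dc1).
  transfer = transfers_service.add(
      types.ImageTransfer(
          disk=types.Disk(id='ba390f85-2d45-4ea1-8e29-96020c4ba416'),
      )
  )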

Version-Release number of selected component (if applicable):
virt-v2v 1.39.6

How reproducible:
100%

Comment 1 Richard W.M. Jones 2018-06-29 18:22:29 UTC
Moving this downstream since we'll have to fix it in RHEL 7.6.

I'm confused on this point:
> - virt-v2v should check that the host belongs to dc1
Why is checking the host not sufficient?

Comment 3 Nir Soffer 2018-06-29 18:25:23 UTC
Created attachment 1455544 [details]
v2v log

Comment 4 Nir Soffer 2018-06-29 18:25:53 UTC
Created attachment 1455546 [details]
ovirt engine log

Comment 5 Nir Soffer 2018-06-29 18:37:10 UTC
(In reply to Richard W.M. Jones from comment #1)
> Moving this downstream since we'll have to fix it in RHEL 7.6.
> 
> I'm confused on this point:
> > - virt-v2v should check that the host belongs to dc1
> Why is checking the host not sufficient?

In oVirt every host belongs to a data center, and every storage domain belongs
to a data center. Hosts in one data center cannot access storage in other
data centers.

For example in this setup:

dc1
  cluster1
    host1
  cluster2
    host2
  storage1
    disk1
  storage2
    disk2
dc2
  cluster3
    host3
  cluster4
    host4
  storage3
    disk3
  storage4
    disk4

We can upload to disk1 and disk2 only from host1 and host2. host2 is good for
upload even if it is not in cluster1 (where we create the vm).

The correct way to check is probably (see the sketch below):
1. find the cluster by cluster name
2. find the dc from the cluster
3. check if the host belongs to this dc

I don't know the oVirt API well enough to tell what the best way to do this is,
but I'm sure Daniel can recommend a good approach.

I hope we can do something like

    "search='hw_id=xxx-yyy and data-center=yyy-zzz'"

Comment 6 Nir Soffer 2018-06-29 19:18:51 UTC
Bug 1596851 is a similar variant of this issue.

Comment 7 Nir Soffer 2018-06-29 22:48:01 UTC
This patch avoids this issue when not using the rhv-direct=true option:
https://www.redhat.com/archives/libguestfs/2018-June/msg00170.html

Comment 8 Daniel Erez 2018-07-01 07:17:18 UTC
(In reply to Nir Soffer from comment #5)
> (In reply to Richard W.M. Jones from comment #1)
> > Moving this downstream since we'll have to fix it in RHEL 7.6.
> > 
> > I'm confused on this point:
> > > - virt-v2v should check that the host belongs to dc1
> > Why is checking the host not sufficient?
> 
> In ovirt every host belongs to a data center, and all storage in a data center
> belongs to the data center. Hosts in one data center cannot access storage
> in other
> data centers.
> 
> For example in this setup:
> 
> dc1
>   cluster1
>     host1
>   cluster2
>     host2
>   storage1
>     disk1
>   storage2
>     disk2
> dc2
>   cluster3
>     host3
>   cluster4
>     host4
>   storage3
>     disk3
>   storage4
>     disk4
> 
> We can upload to disk1 and disk2 only from host1 and host2. host2 is good for
> upload even if it is not in cluster1 (where we create the vm).
> 
> The correct way to check is probably:
> 1. find the cluster by cluster name
> 2. find the dc from the cluster
> 3. check if the host belongs to this dc
> 
> I don't know ovirt API enough to tell what is the best way to do this, but
> I'm
> sure Daniel can recommend a good way to do this.
> 
> I hope we can do something like
> 
>     "search='hw_id=xxx-yyy and data-center=yyy-zzz'"

Right, for example:
  hosts = hosts_service.list(
    search='hw_id=DCBBBC71-B601-4BC2-B046-03FBF35D05AD and datacenter=Default',
    case_sensitive=False,
  )

As for the polling, the transfer can start when ImageTransferPhase is TRANSFERRING,
so we should check that it is in the correct status before starting the transfer.
E.g.
  while transfer.phase == types.ImageTransferPhase.INITIALIZING:
    time.sleep(1)
    transfer = transfer_service.get()

  if transfer.phase != types.ImageTransferPhase.TRANSFERRING:
    print "Can't start transfer, invalid status: {}".format(transfer.phase)
    sys.exit()

Can you please also attach the relevant vdsm log? The NPE is raised in PrepareImage:
  "at org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageReturn.<init>(PrepareImageReturn.java:15) [vdsbroker.jar:]"

Comment 9 Daniel Erez 2018-07-01 07:25:00 UTC
Note that you can also add 'status' to the search (for the issue described in bug 1596851)

i.e.
 search='hw_id=DCBBBC71-B601-4BC2-B046-03FBF35D05AD and datacenter=Default and status=Up'
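
Putting comments 8 and 9 together, the host lookup on the virt-v2v side might
look like this (a sketch; hw_id and dc_name are placeholders, and the host=None
fallback follows the expected behavior in the description above):

  hosts = hosts_service.list(
      search='hw_id=%s and datacenter=%s and status=Up' % (hw_id, dc_name),
      case_sensitive=False,
  )
  # If the local host is in another DC (or not Up), let engine pick the host.
  host = hosts[0] if hosts else None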

Comment 10 Richard W.M. Jones 2018-07-05 20:36:43 UTC
Fixed upstream in commit
4ed1bc5a79a77ad3a620b339f9ac2ecc8df6fd03.

Comment 12 mxie@redhat.com 2018-07-18 14:44:11 UTC
Tried to reproduce the bug with these builds:
virt-v2v-1.38.2-6.el7.x86_64
libguestfs-1.38.2-6.el7.x86_64
libvirt-4.5.0-3.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64

Reproduce steps:
1. Prepare the test environment: there are two data centers on RHV 4.2, and each data center has its own cluster, host and storage, as below:

Datacenter:NFS
    Cluster:NFS
       Host:NFS
    Storage:nfs_data


Datacenter:ISCSI
    Cluster:ISCSI
       Host:ISCSI
    Storage:iscsi_data

2. Install virt-v2v on host "NFS" and try to convert a guest from VMware to storage "iscsi_data"; the conversion fails with the same error as in the bug:
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-win8.1-x86_64  -o rhv-upload -oc https://hp-dl360eg8-03.lab.eng.pek2.redhat.com/ovirt-engine/api -os iscsi_data -op /tmp/rhvpasswd -oo rhv-cafile=/root/ca.pem  -oo rhv-direct -of raw --password-file /tmp/passwd -b ovirtmgmt -oa preallocated -oo rhv-cluster=ISCSI
[   0.1] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 esx6.7-win8.1-x86_64
[   1.9] Creating an overlay to protect the source from being modified
[   2.7] Initializing the target -o rhv-upload -oa preallocated -oc https://hp-dl360eg8-03.lab.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os iscsi_data
[   4.0] Opening the overlay
[  17.3] Inspecting the overlay
[ 148.6] Checking for sufficient free disk space in the guest
[ 148.6] Estimating space required on target for each disk
[ 148.6] Converting Windows 8.1 Enterprise to run on KVM
virt-v2v: warning: /usr/share/virt-tools/pnp_wait.exe is missing.  
Firstboot scripts may conflict with PnP.
virt-v2v: warning: there are no virtio drivers available for this version 
of Windows (6.3 x86_64 Client).  virt-v2v looks for drivers in 
/usr/share/virtio-win

The guest will be configured to use slower emulated devices.
virt-v2v: This guest does not have virtio drivers installed.
[ 161.3] Mapping filesystem data to avoid copying unused and blank areas
[ 162.1] Closing the overlay
[ 162.2] Checking if the guest needs BIOS or UEFI to boot
[ 162.2] Assigning disks to buses
[ 162.2] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.aKmEGI/nbdkit1.sock", "file.export": "/" } (raw)
nbdkit: error: /var/tmp/rhvupload.aKmEGI/rhv-upload-plugin.py: open: error: direct upload to host not supported, requires ovirt-engine >= 4.2 and only works when virt-v2v is run within the oVirt/RHV environment, eg. on an oVirt node.
qemu-img: Could not open 'json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.aKmEGI/nbdkit1.sock", "file.export": "/" }': Failed to read data: Unexpected end-of-file before all bytes were read

virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]



Verified the bug with these builds:
virt-v2v-1.38.2-8.el7.x86_64
libguestfs-1.38.2-8.el7.x86_64
libvirt-4.5.0-3.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64

Steps:
1. Update virt-v2v to the latest version on the NFS host and convert a guest from VMware to storage "iscsi_data" again:
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-rhel7.5-x86_64  -o rhv-upload -oc https://hp-dl360eg8-03.lab.eng.pek2.redhat.com/ovirt-engine/api -os iscsi_data -op /tmp/rhvpasswd -oo rhv-cafile=/root/ca.pem  -oo rhv-direct -of raw --password-file /tmp/passwd -b ovirtmgmt -oa preallocated -oo rhv-cluster=ISCSI
[   0.1] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-x86_64
[   1.8] Creating an overlay to protect the source from being modified
[   2.6] Initializing the target -o rhv-upload -oa preallocated -oc https://hp-dl360eg8-03.lab.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os iscsi_data
[   3.9] Opening the overlay
[  22.4] Inspecting the overlay
[ 161.9] Checking for sufficient free disk space in the guest
[ 161.9] Estimating space required on target for each disk
[ 161.9] Converting Red Hat Enterprise Linux Server 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[1273.2] Mapping filesystem data to avoid copying unused and blank areas
[1275.3] Closing the overlay
[1275.4] Checking if the guest needs BIOS or UEFI to boot
[1275.4] Assigning disks to buses
[1275.4] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.wYruaS/nbdkit1.sock", "file.export": "/" } (raw)
    (100.00/100%)
[1997.8] Creating output metadata
[2014.7] Finishing off

2. Power on the guest on host "ISCSI"; all guest checkpoints pass.

Result:
   virt-v2v can convert the guest successfully using rhv-upload when the v2v
   conversion server and the target data domain belong to different data centers
   of RHV 4.2, so moving the bug from ON_QA to VERIFIED.

Comment 14 errata-xmlrpc 2018-10-30 07:45:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3021

