Bug 1596851 - Transfer fails if local host is in maintenance mode
Summary: Transfer fails if local host is in maintenance mode
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
URL:
Whiteboard: V2V
Depends On:
Blocks: 1588088
 
Reported: 2018-06-29 19:18 UTC by Nir Soffer
Modified: 2018-10-30 07:46 UTC
CC List: 8 users

Fixed In Version: libguestfs-1.38.2-7.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 07:45:56 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1596810 0 unspecified CLOSED Transfer fails if local host belongs to another DC 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHEA-2018:3021 0 None None None 2018-10-30 07:46:59 UTC

Internal Links: 1596810

Description Nir Soffer 2018-06-29 19:18:17 UTC
Description of problem:

This is another variant of bug 1596810. I'm adding it as a new bug because it
needs a separate setup for testing, and another check in virt-v2v.

Running virt-v2v fails with the same error:

disk.id = '6480e5de-5ab2-4dbf-924d-7d7c409e5608'
hw_id = '85CFDED0-FD24-4645-AF7C-D47A088AE9E8'
host.id = 'b9c45d66-2067-4cd9-a9f2-d24a6f7a0fd9'
transfer.id = '4402b3f7-0583-448e-8a8d-601a845e0c38'
nbdkit: error: /home/nsoffer/src/libguestfs/tmp/rhvupload.LjcLv5/rhv-upload-plugin.py: open: error: direct upload to host not supported, requires ovirt-engine >= 4.2 and only works when virt-v2v is run within the oVirt/RHV environment, eg. on an oVirt node.
nbdkit: debug: connection cleanup with final status -1
qemu-img: Could not open 'json:{ "file.driver": "nbd", "file.path": "/home/nsoffer/src/libguestfs/tmp/rhvupload.LjcLv5/nbdkit0.sock", "file.export": "/" }': Failed to read data: Unexpected end-of-file before all bytes were read

virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]
rm -rf '/home/nsoffer/src/libguestfs/tmp/rhvupload.LjcLv5'
rm -rf '/home/nsoffer/src/libguestfs/tmp/null.ZEGgEq'
libguestfs: closing guestfs handle 0x20e4f00 (state 0)
nbdkit: debug: /usr/lib64/nbdkit/plugins/nbdkit-python3-plugin.so: unload

But this time the issue is that the host is not active.

To reproduce, you need one data center and 2 hosts. This is the setup:

dc1
   cluster1
       host1 (status=up)
       host2 (status=maintenance) <-- we run virt-v2v here
   storage1
      disk1 <-- uploading to this disk

Assume that only host2 can connect to VMware, or that you have another
reason to run virt-v2v on host2 and not on host1.

The virt-v2v command is (running from source):

./run virt-v2v \
    -i disk /var/tmp/fedora-27.img \
    -o rhv-upload \
    -oc https://my.engine/ovirt-engine/api \
    -os storage1 \
    -op /var/tmp/password \
    -of raw \
    -oo rhv-cafile=ca.pem \
    -oo rhv-direct=true \
    -oo rhv-cluster=cluster1 \
    --verbose

What happens is:

1. virt-v2v finds the host hardware id: 85CFDED0-FD24-4645-AF7C-D47A088AE9E8
2. virt-v2v looks up a host with this hardware id and finds host2
   (id=b9c45d66-2067-4cd9-a9f2-d24a6f7a0fd9)
3. virt-v2v starts a transfer to disk ba390f85-2d45-4ea1-8e29-96020c4ba416 on
   host b9c45d66-2067-4cd9-a9f2-d24a6f7a0fd9
4. virt-v2v starts polling for the transfer state
5. engine fails to prepare the disk on host b9c45d66-2067-4cd9-a9f2-d24a6f7a0fd9
6. engine pauses the transfer; transfer.phase is now "paused"
7. virt-v2v detects that the transfer is not in the initializing state, and
   assumes that the transfer is ready
8. virt-v2v finds that transfer.transfer_url is None, and fails with an
   incorrect error message

Expected behavior:
- virt-v2v should check that the local host is UP (not in maintenance mode)
- If the local host is not UP, virt-v2v should let engine choose a host
  (host=None)
- The transfer is then started on host1
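For the first two items, a minimal sketch of the missing check (again assuming
the ovirtsdk4 bindings; find_local_host and its parameters are illustrative
names, not the actual upstream patch):

import ovirtsdk4.types as types

def find_local_host(system_service, vdsm_hw_id):
    # Look up the host this virt-v2v process runs on by hardware id.
    hosts = system_service.hosts_service().list(
        search='hw_id=%s' % vdsm_hw_id)
    if not hosts:
        # Not running on an oVirt node: let engine choose a host.
        return None
    host = hosts[0]
    if host.status != types.HostStatus.UP:
        # The local host exists but is e.g. in maintenance mode.
        # Pinning the transfer to it would make engine fail to prepare
        # the disk and pause the transfer, so fall back to letting
        # engine choose (host=None).
        return None
    return host

With host=None, engine is free to schedule the transfer on host1, which is the
third expected-behavior item above.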


Version-Release number of selected component (if applicable):
virt-v2v 1.39.6

How reproducible:
100%

Comment 2 Richard W.M. Jones 2018-07-05 20:36:47 UTC
Fixed upstream in commit
4ed1bc5a79a77ad3a620b339f9ac2ecc8df6fd03.

Comment 4 mxie@redhat.com 2018-07-20 13:40:02 UTC
Trying to reproduce the bug with these builds:
virt-v2v-1.38.2-6.el7.x86_64
libguestfs-1.38.2-6.el7.x86_64
libvirt-4.5.0-3.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64
rhv:4.2.5-0.1.el7ev


Reproduce steps:
1. Prepare the test environment: there are two hosts in the same datacenter/cluster on RHV 4.2, and both hosts use the same data storage; put host1 into maintenance status.

Datacenter: Default
    Cluster: Default
       Host1: mxie1 (maintenance)
       Host2: mxie2
     Storage: nfs_data

2. Convert a guest to nfs_data with virt-v2v using rhv-upload on host1, which is in maintenance status on RHV 4.2; the conversion fails with the same error as in the bug:
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-rhel7.5-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw --password-file /tmp/passwd
[   0.1] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-x86_64
[   1.9] Creating an overlay to protect the source from being modified
[   2.6] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[   3.9] Opening the overlay
[  21.5] Inspecting the overlay
[ 160.8] Checking for sufficient free disk space in the guest
[ 160.8] Estimating space required on target for each disk
[ 160.8] Converting Red Hat Enterprise Linux Server 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[1317.1] Mapping filesystem data to avoid copying unused and blank areas
[1318.8] Closing the overlay
[1318.8] Checking if the guest needs BIOS or UEFI to boot
[1318.8] Assigning disks to buses
[1318.8] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.CIPg6I/nbdkit1.sock", "file.export": "/" } (raw)
nbdkit: error: /var/tmp/rhvupload.CIPg6I/rhv-upload-plugin.py: open: error: direct upload to host not supported, requires ovirt-engine >= 4.2 and only works when virt-v2v is run within the oVirt/RHV environment, eg. on an oVirt node.
qemu-img: Could not open 'json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.CIPg6I/nbdkit1.sock", "file.export": "/" }': Failed to read data: Unexpected end-of-file before all bytes were read

virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]



Verifying the bug with these builds:
virt-v2v-1.38.2-8.el7.x86_64
libguestfs-1.38.2-8.el7.x86_64
libvirt-4.5.0-3.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64
rhv:4.2.5-0.1.el7ev

Steps:
1. Update virt-v2v to the latest version on host1, which is in maintenance status on RHV 4.2, and convert the above guest to nfs_data with virt-v2v again; the conversion finishes without error:
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1  esx6.7-rhel7.5-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct=true -of raw --password-file /tmp/passwd
[   0.1] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-x86_64
[   2.0] Creating an overlay to protect the source from being modified
[   2.8] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[   4.0] Opening the overlay
[  30.6] Inspecting the overlay
[ 168.9] Checking for sufficient free disk space in the guest
[ 168.9] Estimating space required on target for each disk
[ 168.9] Converting Red Hat Enterprise Linux Server 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[1311.6] Mapping filesystem data to avoid copying unused and blank areas
[1313.3] Closing the overlay
[1313.3] Checking if the guest needs BIOS or UEFI to boot
[1313.3] Assigning disks to buses
[1313.3] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.6IfhEv/nbdkit1.sock", "file.export": "/" } (raw)
    (100.00/100%)
[2242.4] Creating output metadata
[2261.0] Finishing off

2. Power on the guest on RHV 4.2; all checkpoints pass.

Result:
  virt-v2v can now convert a guest using rhv-upload on an oVirt node which is
  in maintenance status, so moving the bug from ON_QA to VERIFIED.

Comment 6 errata-xmlrpc 2018-10-30 07:45:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3021

