Description of problem:
Virt-v2v resolves the VMware provider hostname to an IPv6 address, causing migration failures. We need to make sure we use IPv4 everywhere, as RHV does not fully support IPv6 in its infrastructure (per Yaniv Kaul). I ran a few migrations and most of them had one line in common:
```
nbdkit: debug: 2018-06-02T22:04:28.821-04:00 warning -[7FF6A6CBA700] [Originator@6876 sub=Default] Failed to connect socket; <io_obj p:0x00007ff68c001920, h:-1, <TCP '0.0.0.0:0'>, <TCP '[2620:52:0:8c4:250:56ff:fe9f:b8bf]:443'>>, e: system:125(Operation canceled)
```
As you can see, 2620:52:0:8c4:250:56ff:fe9f:b8bf is the IPv6 address it cannot connect to, and it belongs to my VMware vCenter provider. To confirm, I removed the IPv6 DNS entry and re-ran the migrations, which seems to have worked: the migrations ran for much longer and showed nothing related to IPv6 in their logs.
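The resolution behaviour described above can be checked directly, independent of nslookup, by asking the resolver for all address families. This is a diagnostic sketch, not part of virt-v2v; `"localhost"` below is a placeholder for your vCenter FQDN:

```python
import socket

def resolved_addresses(hostname, port=443):
    """Return the set of (family-label, address) pairs DNS yields for hostname."""
    results = set()
    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
            hostname, port, proto=socket.IPPROTO_TCP):
        label = "IPv4" if family == socket.AF_INET else "IPv6"
        results.add((label, sockaddr[0]))
    return results

# Placeholder hostname; substitute your vCenter provider's FQDN.
for label, addr in sorted(resolved_addresses("localhost")):
    print(label, addr)
```

If the provider name shows an IPv6 entry here, VDDK may pick it, which reproduces the failure above.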
Version-Release number of selected component (if applicable):
RHV 4.2.4-0.1.el7
virt-v2v 1.36.10rhel=7,release=6.11.rhvpreview.el7ev,libvirt
nbdkit-plugin-vddk-1.2.2-1.el7.x86_64
nbdkit-plugin-python-common-1.2.2-1.el7ev.x86_64
nbdkit-plugin-python2-1.2.2-1.el7ev.x86_64
nbdkit-devel-1.2.2-1.el7ev.x86_64
nbdkit-1.2.2-1.el7ev.x86_64
v2v-automate branch - "extended_reporting_and_rhv_upload"
MIQ Nightly 24th May
How reproducible:
Believed to be 100%.
Steps to reproduce:
1. Provision a ManageIQ appliance with migration capabilities (as of today, the 24th May nightly build can be used).
2. Add VMware/RHV providers (nslookup for VMware provider should return both IPv4/IPv6 addresses or at least IPv6)
3. Select any of the RHV hosts to be the conversion host, then run the playbooks from RHV Manager to configure it (i.e. to install nbdkit, virt-v2v, etc.).
4. In ManageIQ, select that host, add its root credentials, add tags for conversion hosts.
5. Import v2v-automate domain git repo in manageIQ.
6. Create infrastructure mapping and migration plan, and execute migration plan.
7. Let the plan execute; if it fails, download the logs and look for the IPv6 address.
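The log check in step 7 can be sketched as a small scan for bracketed IPv6 literals, as they appear in the nbdkit output quoted above (the regex is a simplification that matches `[hex:hex:...]:port` endpoints, not a full IPv6 validator):

```python
import re

# Matches bracketed IPv6 endpoints such as [2620:52:0:8c4:250:56ff:fe9f:b8bf]:443
IPV6_LITERAL = re.compile(r"\[([0-9a-fA-F:]+)\]:\d+")

def find_ipv6_endpoints(log_text):
    """Return the unique IPv6 addresses that appear as connection targets."""
    return sorted(set(IPV6_LITERAL.findall(log_text)))

# Sample line taken from the failure log in this report.
sample = ("nbdkit: debug: warning Failed to connect socket; "
          "<TCP '[2620:52:0:8c4:250:56ff:fe9f:b8bf]:443'>")
print(find_ipv6_endpoints(sample))  # ['2620:52:0:8c4:250:56ff:fe9f:b8bf']
```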
Actual results:
virt-v2v (via VDDK) fails to connect to the IPv6 address of the VMware vCenter.
Expected results:
virt-v2v should use IPv4 by default, or fall back to IPv4 when the IPv6 connection fails.
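Outside of virt-v2v, the expected behaviour can be approximated by resolving the provider name to an IPv4 literal up front and passing that literal downstream, so the resolver never chooses an AAAA record. This is a workaround sketch under that assumption, not the virt-v2v implementation:

```python
import socket

def prefer_ipv4(hostname, port=443):
    """Resolve hostname to an IPv4 literal, falling back to the name itself.

    Restricting getaddrinfo to AF_INET returns only A-record results;
    if the name has no A record, the original name is returned unchanged.
    """
    try:
        infos = socket.getaddrinfo(hostname, port, family=socket.AF_INET,
                                   proto=socket.IPPROTO_TCP)
        return infos[0][4][0]  # first IPv4 address in the result list
    except socket.gaierror:
        return hostname  # no A record; leave the name untouched

print(prefer_ipv4("localhost"))  # typically 127.0.0.1
```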
Additional info:
Comment 5, Richard W.M. Jones, 2019-05-13 09:53:15 UTC
It is unclear at best whether this is a bug in virt-v2v or nbdkit, since virt-v2v
simply passes the host name along to VDDK, which resolves it. It sounds like a bug
in RHV if it cannot provide IPv6 routing, or if names resolve to unroutable addresses.
Anyway as we are not doing any further significant development on RHEL 7 after
7.7, I'm closing this bug.