Bug 1684075 - Virt-v2v can't convert a guest from VMware via nbdkit-vddk if original guest disk address is irregular
Summary: Virt-v2v can't convert a guest from VMware via nbdkit-vddk if original guest ...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virt-v2v
Version: unspecified
Hardware: x86_64
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: mxie@redhat.com
URL:
Whiteboard: V2V
Depends On: 2059287
Blocks:
 
Reported: 2019-02-28 11:10 UTC by mxie@redhat.com
Modified: 2022-11-15 10:22 UTC
CC List: 10 users

Fixed In Version: virt-v2v-2.0.0-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-15 09:55:44 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
nbdkit-vddk-disk_address_irregular.log (12.27 KB, text/plain)
2019-02-28 11:10 UTC, mxie@redhat.com
no flags Details
vmware-guest-disk-address.png (53.18 KB, image/png)
2019-04-09 09:34 UTC, mxie@redhat.com
no flags Details
irregular-disk-address-VMware-vix-disklib-6.7.3.log (101.68 KB, text/plain)
2019-12-18 12:16 UTC, mxie@redhat.com
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2022:7968 0 None None None 2022-11-15 09:56:01 UTC

Description mxie@redhat.com 2019-02-28 11:10:15 UTC
Created attachment 1539429 [details]
nbdkit-vddk-disk_address_irregular.log

Description of problem:
Virt-v2v can't convert a guest from VMware via nbdkit-vddk if guest disk address is irregular

Version-Release number of selected component (if applicable):
virt-v2v-1.40.2-1.el7.x86_64
libguestfs-1.40.2-1.el7.x86_64
libvirt-4.5.0-10.el7_6.6.x86_64
qemu-kvm-rhev-2.12.0-18.el7_6.3.x86_64
nbdkit-1.8.0-1.el7.x86_64
nbdkit-plugin-vddk-1.8.0-1.el7.x86_64
VMware-vix-disklib-6.5.2-6195444.x86_64

How reproducible:
100%

Steps to reproduce:

Scenario1:
1. Prepare a VMware guest that has only one disk whose disk address is not <address type='drive' controller='0' bus='0' target='0' unit='0'/>. Note that the original guest can boot into the OS normally with the irregular disk address.

# virsh -c vpx://root.73.141/data/10.73.75.219/?no_verify=1 
Enter root's password for 10.73.73.141: 
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # dumpxml esx6.7-rhel7.5-x86_64
....
    <disk type='file' device='disk'>
      <source file='[esx6.7] esx6.7-rhel7.5-x86_64/esx6.7-rhel7.5-x86_64.vmdk'/>
      <target dev='sdc' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>

....

2. Convert the guest from VMware via vddk with virt-v2v
#  virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -n default esx6.7-rhel7.5-x86_64 --password-file /tmp/passwd 
[   0.0] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-x86_64 -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   1.8] Creating an overlay to protect the source from being modified
nbdkit: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at 985.
nbdkit: error: VixDiskLib_Open: [esx6.7] esx6.7-rhel7.5-x86_64/esx6.7-rhel7.5-x86_64.vmdk: Unknown error
qemu-img: /var/tmp/v2vovld1939d.qcow2: Failed to read data: Unexpected end-of-file before all bytes were read
Could not open backing image to determine size.
virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]
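
As an aside, one simple way to capture the complete debug output requested above into a file (the log path here is only an example) is:

  virt-v2v -v -x [...] 2>&1 | tee /tmp/v2v-debug.log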



Scenario2:
1. Prepare a VMware guest with multiple disks whose disk addresses are irregular, as shown below (the original guest can boot into the OS normally)

# virsh -c vpx://root.73.141/data/10.73.75.219/?no_verify=1 
Enter root's password for 10.73.73.141: 
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # dumpxml esx6.7-rhel6.9-multi-disks
<domain type='vmware' xmlns:vmware='http://libvirt.org/schemas/domain/vmware/1.0'>
  <name>esx6.7-rhel6.9-multi-disks</name>
  ....
    <disk type='file' device='disk'>
      <source file='[esx6.7] rhel6.7-rhel6.9-multi-disks/rhel6.7-rhel6.9-multi-disks_3.vmdk'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[esx6.7] rhel6.7-rhel6.9-multi-disks/rhel6.7-rhel6.9-multi-disks.vmdk'/>
      <target dev='sdg' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='6'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[esx6.7] rhel6.7-rhel6.9-multi-disks/rhel6.7-rhel6.9-multi-disks_2.vmdk'/>
      <target dev='sdi' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='9'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[esx6.7] rhel6.7-rhel6.9-multi-disks/rhel6.7-rhel6.9-multi-disks_1.vmdk'/>
      <target dev='sdj' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='10'/>
    </disk>

2. Convert the guest from VMware via vddk with virt-v2v
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -n default esx6.7-rhel6.9-multi-disks --password-file /tmp/passwd 
[   0.1] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel6.9-multi-disks -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   5.1] Creating an overlay to protect the source from being modified
nbdkit: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at 985.
nbdkit: error: VixDiskLib_Open: [esx6.7] rhel6.7-rhel6.9-multi-disks/rhel6.7-rhel6.9-multi-disks.vmdk: Unknown error
qemu-img: /var/tmp/v2vovl618a9b.qcow2: Failed to read data: Unexpected end-of-file before all bytes were read
Could not open backing image to determine size.
virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


Actual results:
As described above

Expected results:
Virt-v2v can convert a guest from VMware via nbdkit-vddk even if the guest disk address is irregular


Additional info:
1. The problem cannot be reproduced when converting the problematic guest with virt-v2v without nbdkit-vddk (see the sketch below)
2. The bug can also be reproduced on RHEL 8
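
For comparison, a minimal sketch of such a conversion without the vddk transport (simply dropping the -it vddk and -io options from the command in the description) would be:

# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -n default esx6.7-rhel7.5-x86_64 --password-file /tmp/passwd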

Comment 2 Richard W.M. Jones 2019-04-09 09:16:53 UTC
I've just got round to looking at this and I have to say I have absolutely no
idea what's going on.  When you say:

> Prepare a VMware guest, the guest has only one disk but its disk address isn't
> <address type='drive' controller='0' bus='0' target='0' unit='0'/>,

how do you precisely create such a guest?  Is there a VMware option for
"irregular disks"?

Comment 3 mxie@redhat.com 2019-04-09 09:33:47 UTC
If you change the disk address of the VMware guest via Edit Settings -> Hard disk -> Virtual Device Node and change "SCSI(0:0)" to "SCSI(0:2)", the disk address will change from "<address type='drive' controller='0' bus='0' target='0' unit='0'/>" to "<address type='drive' controller='0' bus='0' target='0' unit='2'/>"; please refer to the screenshot "vmware-guest-disk-address"
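
For reference, the resulting address can be double-checked from the conversion host with the same virsh connection used in the description, for example:

# virsh -c vpx://root.73.141/data/10.73.75.219/?no_verify=1 dumpxml esx6.7-rhel7.5-x86_64 | grep -A1 '<target dev'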

Comment 4 mxie@redhat.com 2019-04-09 09:34:15 UTC
Created attachment 1553803 [details]
vmware-guest-disk-address.png

Comment 6 Richard W.M. Jones 2019-12-13 08:47:48 UTC
It's not a customer bug so doesn't have to be fixed in RHEL 7.
If it still happens in RHEL AV can we move it to RHEL AV 8.3?

Comment 7 mxie@redhat.com 2019-12-18 12:15:52 UTC
(In reply to Richard W.M. Jones from comment #6)
> It's not a customer bug so doesn't have to be fixed in RHEL 7.
> If it still happens in RHEL AV can we move it to RHEL AV 8.3?

Sorry for the late reply. I suddenly couldn't reproduce the bug when I wanted to reply to the comment, which is why I replied so late...but I can reproduce the bug today

Reproduce the bug with packages:
virt-v2v-1.40.2-1.el7.x86_64
libguestfs-1.40.2-1.el7.x86_64
libvirt-4.5.0-10.el7_6.6.x86_64
qemu-kvm-rhev-2.12.0-18.el7_6.3.x86_64
nbdkit-1.8.0-1.el7.x86_64
VMware-vix-disklib-6.5.1-5993564.x86_64.tar.gz

Steps to reproduce:
1. Prepare a VMware guest with multiple disks whose disk addresses are irregular, as shown below (the original guest can boot into the OS normally)

# virsh -c vpx://root.73.141/data/10.73.75.219/?no_verify=1 dumpxml esx6.7-rhel7.5-multi-disks
Enter root's password for 10.73.73.141: 
<domain type='vmware' xmlns:vmware='http://libvirt.org/schemas/domain/vmware/1.0'>
  <name>esx6.7-rhel7.5-multi-disks</name>
 ....
   <devices>
    <disk type='file' device='disk'>
      <source file='[esx6.7] esx6.7-rhel7.5-multi-disks/esx6.7-rhel7.5-multi-disks.vmdk'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[esx6.7] esx6.7-rhel7.5-multi-disks/esx6.7-rhel7.5-multi-disks_3.vmdk'/>
      <target dev='sdd' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='3'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[esx6.7] esx6.7-rhel7.5-multi-disks/esx6.7-rhel7.5-multi-disks_1.vmdk'/>
      <target dev='sdh' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='8'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[esx6.7] esx6.7-rhel7.5-multi-disks/esx6.7-rhel7.5-multi-disks_2.vmdk'/>
      <target dev='sdn' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='14'/>
    </disk>
....
</domain>

2. Convert the guest from VMware to RHV with virt-v2v
#  virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/tmp/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -n default esx6.7-rhel7.5-multi-disks --password-file /home/passwd 
[   0.1] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-multi-disks -it vddk  -io vddk-libdir=/tmp/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   5.1] Creating an overlay to protect the source from being modified
nbdkit: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at 985.
nbdkit: error: VixDiskLib_Open: [esx6.7] esx6.7-rhel7.5-multi-disks/esx6.7-rhel7.5-multi-disks_1.vmdk: Unknown error
qemu-img: /var/tmp/v2vovl5dc050.qcow2: Failed to read data: Unexpected end-of-file before all bytes were read
Could not open backing image to determine size.
virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

----------------------------------------------------------------------------

Replace VMware-vix-disklib-6.5.1-5993564 with VMware-vix-disklib-6.7.3-14389676 and convert the guest again

#  virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -n default esx6.7-rhel7.5-multi-disks --password-file /home/passwd 
[   0.1] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-multi-disks -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   5.1] Creating an overlay to protect the source from being modified
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
[   9.4] Opening the overlay
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at 1166.
nbdkit: error: VixDiskLib_Open: [esx6.7] esx6.7-rhel7.5-multi-disks/esx6.7-rhel7.5-multi-disks_1.vmdk: Unknown error
virt-v2v: error: libguestfs error: could not create appliance through 
libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: internal error: qemu unexpectedly closed the 
monitor: 2019-12-18 11:46:32.864+0000: Domain id=7 is tainted: custom-argv
2019-12-18 11:46:32.864+0000: Domain id=7 is tainted: host-cpu
2019-12-18T11:46:34.789615Z qemu-kvm: -drive 
file=/var/tmp/v2vovlff90c9.qcow2,format=qcow2,if=none,id=drive-scsi0-0-2-0,cache=unsafe,copy-on-read=on,discard=unmap: 
Could not open backing file: Failed to read data: Unexpected end-of-file 
before all bytes were read [code=1 int1=-1]

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

# export LIBGUESTFS_BACKEND=direct
#  virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -n default esx6.7-rhel7.5-multi-disks --password-file /home/passwd 
[   0.1] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-multi-disks -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   5.0] Creating an overlay to protect the source from being modified
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
[   9.6] Opening the overlay
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at 1166.
nbdkit: error: VixDiskLib_Open: [esx6.7] esx6.7-rhel7.5-multi-disks/esx6.7-rhel7.5-multi-disks_1.vmdk: Unknown error
virt-v2v: error: libguestfs error: guestfs_launch failed.
This usually means the libguestfs appliance failed to start or crashed.
Do:
  export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1
and run the command again.  For further information, read:
  http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs
You can also run 'libguestfs-test-tool' and post the *complete* output
into a bug report or message to the libguestfs mailing list.

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


--------------------------------------------
Update virt-v2v, qemu, and libvirt to the latest RHEL 7 versions and convert the guest again

Packages:
virt-v2v-1.40.2-5.el7_7.3.x86_64
libguestfs-1.40.2-5.el7_7.3.x86_64
libvirt-4.5.0-23.el7_7.4.x86_64
qemu-kvm-rhev-2.12.0-33.el7_7.7.x86_64
nbdkit-1.8.0-1.el7.x86_64
VMware-vix-disklib-6.7.3-14389676.x86_64.tar.gz 

Steps:
#  virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -n default esx6.7-rhel7.5-multi-disks --password-file /home/passwd 
[   0.1] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-multi-disks -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   5.1] Creating an overlay to protect the source from being modified
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
[  10.0] Opening the overlay
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLib: VixDiskLibIsLegacyConnParams: the instance of VixDiskLibConnectParams is NOT allocated by VixDiskLib_AllocateConnectParams. The new features in 6.7 or later are not supported.
nbdkit: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at 1166.
nbdkit: error: VixDiskLib_Open: [esx6.7] esx6.7-rhel7.5-multi-disks/esx6.7-rhel7.5-multi-disks_1.vmdk: Unknown error
virt-v2v: error: libguestfs error: guestfs_launch failed.
This usually means the libguestfs appliance failed to start or crashed.
Do:
  export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1
and run the command again.  For further information, read:
  http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs
You can also run 'libguestfs-test-tool' and post the *complete* output
into a bug report or message to the libguestfs mailing list.

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

------------------------------------
Test the bug with rhel8 builds:
virt-v2v-1.40.2-16.module+el8.1.1+5096+6326d3d5.x86_64
libguestfs-1.40.2-16.module+el8.1.1+5096+6326d3d5.x86_64
libvirt-5.6.0-10.module+el8.1.1+5131+a6fe889c.x86_64
qemu-kvm-4.1.0-19.module+el8.1.1+5172+e3ff58a1.x86_64
nbdkit-1.12.5-2.module+el8.1.1+4904+0f013407.x86_64
VMware-vix-disklib-6.7.3-14389676.x86_64.tar.gz


Steps:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io  vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -n default esx6.7-rhel7.5-multi-disks --password-file /home/passwd 
[   0.1] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel7.5-multi-disks -it vddk  -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   4.8] Creating an overlay to protect the source from being modified
[   6.4] Opening the overlay
nbdkit: vddk[2]: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at 1166.
nbdkit: vddk[2]: error: VixDiskLib_Open: [esx6.7] esx6.7-rhel7.5-multi-disks/esx6.7-rhel7.5-multi-disks_1.vmdk: Unknown error
virt-v2v: error: libguestfs error: guestfs_launch failed.
This usually means the libguestfs appliance failed to start or crashed.
Do:
  export LIBGUESTFS_DEBUG=1 LIBGUESTFS_TRACE=1
and run the command again.  For further information, read:
  http://libguestfs.org/guestfs-faq.1.html#debugging-libguestfs
You can also run 'libguestfs-test-tool' and post the *complete* output
into a bug report or message to the libguestfs mailing list.

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]



Hi rjones,

   This is a negative test case and I can also reproduce the bug on rhel8.1.1 AV, so I agree with moving it to RHEL AV 8.3. However, I found that the error info differs when testing the bug with different VMware-vix-disklib versions; please help confirm whether they are the same problem, thanks

Comment 8 mxie@redhat.com 2019-12-18 12:16:29 UTC
Created attachment 1646098 [details]
irregular-disk-address-VMware-vix-disklib-6.7.3.log

Comment 9 Richard W.M. Jones 2020-01-07 08:43:45 UTC
Thanks for retesting this.  Moving to RHEL AV (backlog).

Comment 11 Richard W.M. Jones 2020-11-10 14:25:21 UTC
Reviewed this in bug triage, still a problem.

Comment 13 RHEL Program Management 2021-03-15 07:33:43 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 14 Richard W.M. Jones 2021-03-15 10:26:45 UTC
My apologies, this bug was closed by an automated process that we have no control
over, and it should not have been.  I am reopening it.

Comment 15 Eric Hadley 2021-09-08 16:45:02 UTC
Bulk update: Move RHEL-AV bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

Comment 17 RHEL Program Management 2021-12-31 07:27:01 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 18 Richard W.M. Jones 2021-12-31 09:17:46 UTC
I have to apologise - this bug was closed in error by a process we have no
control over which closes bugs claiming they are "stale" when they are not.
We have tried to have this process removed, with no success.  I am reopening
it and setting the "stale" date far into the future.

Comment 19 Laszlo Ersek 2022-03-10 10:30:07 UTC
Looks pretty much like a vddk bug to me; we should report it to vmware.

Comment 20 Laszlo Ersek 2022-03-10 10:36:29 UTC
sorry didn't mean to set needinfo

Comment 21 Laszlo Ersek 2022-03-10 10:39:35 UTC
(In reply to mxie from comment #4)
> Created attachment 1553803 [details]
> vmware-guest-disk-address.png

Interestingly, vmware has some limitations regarding virtual device nodes:

https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vm_admin.doc/GUID-5872D173-A076-42FE-8D0B-9DB0EB0E7362.html

This says that for a SCSI controller, (0:7) is a reserved virtual device node address. Perhaps there's another limitation with (0:2)?

Anyway, I don't think we can do anything about this without looking at the vddk source code (lol) or talking to VMware.

Comment 22 Richard W.M. Jones 2022-03-10 10:40:51 UTC
nbdkit: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at 985.

1 == VIX_E_FAIL.  Not very useful.

mxie: Can you send me the login/guest details for a guest where this
currently happens?  I think it's best to start diagnosing this at
the nbdkit level to see if there is more debug info from VDDK available.
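
An illustrative sketch of such an nbdkit-level test is below: a single read-only nbdkit instance serving one of the guest's disks, with a client reading the export from a second terminal. The parameters follow nbdkit-vddk-plugin(1); vm-XXXX is a placeholder for the guest's real managed object reference.

# nbdkit -r -f -v vddk libdir=/home/vmware-vix-disklib-distrib server=10.73.75.219 user=root password=+/tmp/passwd thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA vm=moref=vm-XXXX file='[esx6.7] esx6.7-rhel7.5-x86_64/esx6.7-rhel7.5-x86_64.vmdk'

Then, from a second terminal, read the export to trigger the VDDK open and watch nbdkit's verbose output:

# qemu-img info nbd:localhost:10809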

Comment 23 mxie@redhat.com 2022-03-10 15:17:04 UTC
(In reply to Richard W.M. Jones from comment #22)
> nbdkit: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at
> 985.
> 
> 1 == VIX_E_FAIL.  Not very useful.
> 
> mxie: Can you send me the login/guest details for a guest where this
> currently happens?  I think it's best to start diagnosing this at
> the nbdkit level to see if there is more debug info from VDDK available.

Hi Richard, 

   I've sent you the VMware environment info where the guest resides by mail. BTW, I can reproduce the bug on rhel9 when the vddk version is 6.7 or 6.5, but can't reproduce the bug when the vddk version is >= 7.0


Package versions:
virt-v2v-1.45.99-1.el9.x86_64
libguestfs-1.46.1-2.el9.x86_64
guestfs-tools-1.46.1-6.el9.x86_64
libvirt-libs-8.0.0-6.el9.x86_64
qemu-img-6.2.0-11.el9.x86_64
nbdkit-1.28.5-1.el9.x86_64
libnbd-1.10.5-1.el9.x86_64

Steps:
1. Check the disk info of the VMware guest with virsh
# virsh -c vpx://root.73.141/data/10.73.75.219/?no_verify=1 dumpxml esx6.7-rhel7.5-multi-disks-bug1684075
Enter root's password for 10.73.73.141: 
.....
  <devices>
    <disk type='file' device='disk'>
      <source file='[esx6.7-function] esx6.7-rhel7.5-multi-disks-bug1684075/esx6.7-rhel7.5-multi-disks-bug1684075.vmdk'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[esx6.7-function] esx6.7-rhel7.5-multi-disks-bug1684075/esx6.7-rhel7.5-multi-disks-bug1684075_1.vmdk'/>
      <target dev='sdl' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='12'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[esx6.7-function] esx6.7-rhel7.5-multi-disks-bug1684075/esx6.7-rhel7.5-multi-disks-bug1684075_3.vmdk'/>
      <target dev='sdp' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='16'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[esx6.7-function] esx6.7-rhel7.5-multi-disks-bug1684075/esx6.7-rhel7.5-multi-disks-bug1684075_2.vmdk'/>
      <target dev='sdt' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='20'/>
    </disk>
......

2. Convert the guest from VMware via vddk6.5 with v2v
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vddk6.5 -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA esx6.7-rhel7.5-multi-disks-bug1684075  -ip /home/passwd
[   0.0] Setting up the source: -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk esx6.7-rhel7.5-multi-disks-bug1684075
[   5.2] Opening the source
nbdkit: vddk[1]: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at 985.
nbdkit: vddk[1]: error: VixDiskLib_Open: [esx6.7-function] esx6.7-rhel7.5-multi-disks-bug1684075/esx6.7-rhel7.5-multi-disks-bug1684075_1.vmdk: Unknown error
virt-v2v: error: libguestfs error: could not create appliance through 
libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: internal error: process exited while 
connecting to monitor: 2022-03-10T14:58:39.230017Z qemu-kvm: -blockdev 
{"driver":"nbd","server":{"type":"unix","path":"/tmp/v2v.11yfAt/in1"},"node-name":"libvirt-4-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}: 
Requested export not available [code=1 int1=-1]

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

3. Convert the guest from VMware via vddk6.7 with v2v
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vddk6.7 -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA esx6.7-rhel7.5-multi-disks-bug1684075  -ip /home/passwd
[   0.0] Setting up the source: -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk esx6.7-rhel7.5-multi-disks-bug1684075
[   5.2] Opening the source
nbdkit: vddk[1]: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at 1166.
nbdkit: vddk[1]: error: VixDiskLib_Open: [esx6.7-function] esx6.7-rhel7.5-multi-disks-bug1684075/esx6.7-rhel7.5-multi-disks-bug1684075_1.vmdk: Unknown error
virt-v2v: error: libguestfs error: could not create appliance through 
libvirt.

Try running qemu directly without libvirt using this environment variable:
export LIBGUESTFS_BACKEND=direct

Original error from libvirt: internal error: process exited while 
connecting to monitor: 2022-03-10T14:59:07.793465Z qemu-kvm: -blockdev 
{"driver":"nbd","server":{"type":"unix","path":"/tmp/v2v.y9MO3a/in1"},"node-name":"libvirt-4-storage","cache":{"direct":false,"no-flush":true},"auto-read-only":true,"discard":"unmap"}: 
Requested export not available [code=1 int1=-1]

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

4. Convert the guest from VMware via vddk7.0 with v2v
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vddk7.0 -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA esx6.7-rhel7.5-multi-disks-bug1684075  -ip /home/passwd
[   0.0] Setting up the source: -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk esx6.7-rhel7.5-multi-disks-bug1684075
[   5.2] Opening the source
[  12.5] Inspecting the source
[  21.2] Checking for sufficient free disk space in the guest
[  21.2] Converting Red Hat Enterprise Linux Server 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 153.4] Mapping filesystem data to avoid copying unused and blank areas
[ 155.7] Closing the overlay
[ 156.0] Assigning disks to buses
[ 156.0] Checking if the guest needs BIOS or UEFI to boot
[ 156.0] Setting up the destination: -o libvirt
[ 163.1] Copying disk 1/4
^Cvirt-v2v: Exiting on signal SIGINT------------]

5. Convert the guest from VMware via vddk7.0.2 with v2v
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vddk7.0.2 -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA esx6.7-rhel7.5-multi-disks-bug1684075  -ip /home/passwd
[   0.0] Setting up the source: -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk esx6.7-rhel7.5-multi-disks-bug1684075
[   5.1] Opening the source
[  12.6] Inspecting the source
[  20.9] Checking for sufficient free disk space in the guest
[  20.9] Converting Red Hat Enterprise Linux Server 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 147.8] Mapping filesystem data to avoid copying unused and blank areas
[ 150.1] Closing the overlay
[ 150.5] Assigning disks to buses
[ 150.5] Checking if the guest needs BIOS or UEFI to boot
[ 150.5] Setting up the destination: -o libvirt
[ 157.4] Copying disk 1/4
█ 100% [****************************************]
[ 231.5] Copying disk 2/4
█ 100% [****************************************]
[ 261.7] Copying disk 3/4
█ 100% [****************************************]
[ 290.7] Copying disk 4/4
█ 100% [****************************************]
[ 349.4] Creating output metadata
[ 349.5] Finishing off

Comment 25 Richard W.M. Jones 2022-03-10 18:17:44 UTC
> BTW, I can reproduce the bug on rhel9 when vddk version is 6.7 and 6.5, but can't reproduce the bug when vddk version >= 7.0

I tried various scenarios running 4 x nbdkit + guestfish to
simulate something like what virt-v2v would do, and could not
reproduce it.  However I was using VDDK 7.0.3 the whole time.
I used VDDK 6.7 in an earlier test but using a single nbdkit.

So I tried again using 4 x nbdkit + VDDK 6.7 + guestfish ...
and I reproduced it!

So it's a bug in VDDK 6.7; the only thing to suggest to someone who
sees this is to upgrade.  There's nothing we can do in virt-v2v,
but I will add a note to the documentation.
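
For the record, a rough sketch of that kind of reproducer (4 x nbdkit + guestfish), not the exact commands used; THUMBPRINT and vm-XXXX stand in for the real thumbprint and the guest's managed object reference, and the parameters follow nbdkit-vddk-plugin(1) and guestfish(1):

# nbdkit -r -p 10809 vddk libdir=/home/vmware-vix-disklib-distrib server=10.73.75.219 user=root password=+/home/passwd thumbprint=THUMBPRINT vm=moref=vm-XXXX file='[esx6.7] esx6.7-rhel7.5-multi-disks/esx6.7-rhel7.5-multi-disks.vmdk'
(repeat for the _1, _2 and _3 disks on ports 10810-10812)
# guestfish --ro --format=raw -a nbd://localhost:10809 -a nbd://localhost:10810 -a nbd://localhost:10811 -a nbd://localhost:10812 run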

Comment 26 Richard W.M. Jones 2022-03-10 18:24:58 UTC
Documentation fix:
https://github.com/libguestfs/virt-v2v/commit/b7c3d8820d65e9d6ff3fd7ad453eb304e766d9dd

Comment 27 Richard W.M. Jones 2022-03-14 21:31:34 UTC
This should be fixed (as in documented) in virt-v2v-2.0.0-1.el9,
but is missing qe ack.

Comment 28 mxie@redhat.com 2022-03-21 14:37:43 UTC
Test the bug with the builds below:
virt-v2v-2.0.0-2.el9.x86_64

Steps:
1. Check the virt-v2v-input-vmware man page for error 1166
# man virt-v2v-input-vmware |grep 1166 -A 4
troff: <standard input>:466: warning [p 5, 6.8i]: can't break line
        nbdkit: vddk[2]: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at 1166.

       then it is caused by a bug in VDDK ≤ 6.7.  The suggested solution it to upgrade to the latest
       VDDK.  See also https://bugzilla.redhat.com/1684075

Result:
    The bug has been fixed

Comment 31 mxie@redhat.com 2022-03-24 02:14:42 UTC
Verify the bug with the builds below:
virt-v2v-2.0.1-1.el9.x86_64

Steps:
1. Check the virt-v2v-input-vmware man page for error 1166
#  man virt-v2v-input-vmware |grep 1166 -B 2 -A 3
       If you see an error similar to:

        nbdkit: vddk[2]: error: VixDiskLibVim: Failed to open disk using NFC. VixError 1 at 1166.

       then it is caused by a bug in VDDK ≤ 6.7.  The suggested solution it to upgrade to the latest
       VDDK.  See also https://bugzilla.redhat.com/1684075


Result:
    The bug has been fixed; moving the bug from ON_QA to VERIFIED

Comment 33 errata-xmlrpc 2022-11-15 09:55:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: virt-v2v security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7968

