Bug 1774386 - input_vmx: cleanly reject guests with snapshots when using "-it ssh"
Summary: input_vmx: cleanly reject guests with snapshots when using "-it ssh"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virt-v2v
Version: unspecified
Hardware: x86_64
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: rc
Assignee: Laszlo Ersek
QA Contact: mxie@redhat.com
URL:
Whiteboard: V2V
Depends On:
Blocks:
 
Reported: 2019-11-20 08:32 UTC by mxie@redhat.com
Modified: 2022-11-15 10:22 UTC (History)
10 users

Fixed In Version: virt-v2v-2.0.7-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-15 09:55:44 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
vmx+ssh-snapshot.log (9.06 KB, text/plain)
2019-11-20 08:32 UTC, mxie@redhat.com


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1854275 1 high CLOSED document that vmx+ssh "-ip" auth doesn't cover ssh / scp shell commands 2024-01-17 14:32:45 UTC
Red Hat Product Errata RHSA-2022:7968 0 None None None 2022-11-15 09:56:01 UTC

Description mxie@redhat.com 2019-11-20 08:32:49 UTC
Created attachment 1638032 [details]
vmx+ssh-snapshot.log

Description:
virt-v2v cannot convert a guest over vmx+ssh if the original guest has a snapshot

Version-Release number of selected component (if applicable):
virt-v2v-1.40.2-14.module+el8.2.0+4673+ff4b3b61.x86_64
libguestfs-1.40.2-14.module+el8.2.0+4673+ff4b3b61.x86_64
libvirt-5.9.0-2.module+el8.2.0+4683+7e10e783.x86_64
qemu-kvm-4.2.0-0.module+el8.2.0+4743+23ad88a2.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Prepare a guest and create a snapshot on VMware 

2. Convert this guest from VMware with vmx+ssh by virt-v2v
# virt-v2v  -i vmx  -o rhev -os 10.73.194.236:/home/nfs_export -of raw -it ssh ssh://root.75.219/vmfs/volumes/esx6.7/esx6.7-rhel6.10-x86_64_1/esx6.7-rhel6.10-x86_64.vmx
[   0.0] Opening the source -i vmx ssh://root.75.219/vmfs/volumes/esx6.7/esx6.7-rhel6.10-x86_64_1/esx6.7-rhel6.10-x86_64.vmx
[   0.8] Creating an overlay to protect the source from being modified
qemu-img: /var/tmp/v2vovld11d72.qcow2: Cannot use relative paths with VMDK descriptor file 'ssh://root.75.219:22/vmfs/volumes/esx6.7/esx6.7-rhel6.10-x86_64_1/esx6.7-rhel6.10-x86_64-000001.vmdk?host_key_check=no': Cannot generate a base directory with host_key_check set
Could not open backing image to determine size.
virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


Actual result:
See the error output quoted in the description above.


Expected result:
virt-v2v can convert a guest over vmx+ssh even if the original guest has a snapshot


Additional info:
1. Can convert the snapshot guest with vmx by virt-v2v
1.1 Mount the nfs storage on local v2v conversion server
1.2 Convert the guest from vmx by v2v
# virt-v2v -i vmx esx6.7-rhel6.10-x86_64.vmx -o null
[   0.0] Opening the source -i vmx esx6.7-rhel6.10-x86_64.vmx
[   0.0] Creating an overlay to protect the source from being modified
[   0.2] Opening the overlay
[   5.1] Inspecting the overlay
[  25.0] Checking for sufficient free disk space in the guest
[  25.0] Estimating space required on target for each disk
[  25.0] Converting Red Hat Enterprise Linux Server release 6.10 (Santiago) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el6’ is missing from 
the virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can 
happen if you are using the version of virtio-win which contains just the 
virtio drivers.  In this case only virtio drivers can be installed in the 
guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 156.6] Mapping filesystem data to avoid copying unused and blank areas
[ 157.6] Closing the overlay
[ 157.7] Assigning disks to buses
[ 157.7] Checking if the guest needs BIOS or UEFI to boot
[ 157.7] Initializing the target -o null
[ 157.7] Copying disk 1/2 to qemu URI json:{ "file.driver": "null-co", "file.size": "1E" } (raw)
^C  (1.01/100%)


2. Can convert the snapshot guest with vpx+vddk by virt-v2v
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -o rhv-upload -of raw -b ovirtmgmt -n ovirtmgmt -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -oo rhv-cafile=/home/ca.pem -oo rhv-direct -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data esx6.7-rhel6.10-x86_64 -ip /home/passwd
[   0.5] Opening the source -i libvirt -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel6.10-x86_64 -it vddk  -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -io vddk-libdir=/home/vmware-vix-disklib-distrib
[   3.4] Creating an overlay to protect the source from being modified
[   4.6] Opening the overlay
[  10.3] Inspecting the overlay
[  25.8] Checking for sufficient free disk space in the guest
[  25.8] Estimating space required on target for each disk
[  25.8] Converting Red Hat Enterprise Linux Server release 6.10 (Santiago) to run on KVM
virt-v2v: warning: guest tools directory ‘linux/el6’ is missing from 
the virtio-win directory or ISO.

Guest tools are only provided in the RHV Guest Tools ISO, so this can 
happen if you are using the version of virtio-win which contains just the 
virtio drivers.  In this case only virtio drivers can be installed in the 
guest, and installation of Guest Tools will be skipped.
virt-v2v: This guest has virtio drivers installed.
[ 140.1] Mapping filesystem data to avoid copying unused and blank areas
[ 140.9] Closing the overlay
[ 141.0] Assigning disks to buses
[ 141.0] Checking if the guest needs BIOS or UEFI to boot
[ 141.0] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data
[ 143.4] Copying disk 1/2 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.030Axo/nbdkit0.sock", "file.export": "/" } (raw)
^C  (1.01/100%)


3. Can reproduce the problem on rhel7 with builds:
virt-v2v-1.40.2-8.el7.x86_64
libguestfs-1.40.2-8.el7.x86_64
libvirt-4.5.0-28.el7.x86_64
qemu-kvm-rhev-2.12.0-38.el7.x86_64

Comment 1 Richard W.M. Jones 2019-11-25 10:53:14 UTC
I'm going to turn this into an RFE because virt-v2v only has partial support
for handling guests with snapshots and we usually advise that customers should
remove snapshots before conversion.  Also it appears to work for VDDK which is
how most IMS users are using virt-v2v.

Comment 5 Eric Hadley 2021-09-08 16:49:00 UTC
Bulk update: Move RHEL-AV bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

Comment 6 RHEL Program Management 2022-01-01 07:26:59 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 7 Richard W.M. Jones 2022-01-01 16:53:33 UTC
The comment above is incorrect but was generated by a process we have
no control over, reopening.

Comment 8 Laszlo Ersek 2022-03-03 10:54:47 UTC
Here's my analysis:

(1) I don't understand *at all* how the bug report relates to guests
that have snapshots on the *input* hypervisor. The error message in
comment 0 is entirely unrelated to snapshots. Consider:

> [   0.8] Creating an overlay to protect the source from being modified
> qemu-img: /var/tmp/v2vovld11d72.qcow2: Cannot use relative paths with
> VMDK descriptor file
> 'ssh://root.75.219:22/vmfs/volumes/esx6.7/esx6.7-rhel6.10-x86_64_1/esx6.7-rhel6.10-x86_64-000001.vmdk?host_key_check=no':
> Cannot generate a base directory with host_key_check set
> Could not open backing image to determine size.

The problem here is not that the input disk image contains snapshots.
Instead, the problem is that qemu-img cannot create the protective
overlay for which the original disk image would be a backing file.

This is entirely orthogonal to whether the original disk image contains
snapshots or not.
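The "protective overlay" here is a local qcow2 file whose backing file is the remote disk image; qemu-img has to resolve the backing location, which is where bdrv_dirname() enters the picture. A rough sketch of the kind of invocation involved (hypothetical helper; the exact flags virt-v2v passes differ):

```python
def overlay_create_cmd(backing_url, overlay_path):
    """Build a qemu-img command that layers a qcow2 overlay on top of a
    remote VMDK. Illustrative sketch only, not virt-v2v's actual code."""
    return [
        "qemu-img", "create",
        "-f", "qcow2",        # format of the new, writable overlay
        "-b", backing_url,    # backing file: the remote descriptor/disk
        "-F", "vmdk",         # declared format of the backing file
        overlay_path,
    ]

cmd = overlay_create_cmd(
    "ssh://esxi/vmfs/volumes/ds1/guest/guest-000001.vmdk",
    "/var/tmp/v2vovl.qcow2")
```

All writes made during conversion land in the overlay, leaving the source untouched.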


(2) The qemu-img error is caused by the following.

QEMU commit 21205c7c3bd0 ("block/ssh: Implement .bdrv_dirname()",
2019-05-07) very correctly implements the "bdrv_dirname" callback for
QEMU's ssh block driver such that the operation fails if the original
location specifier contains a *query string* -- for example,
"?host_key_check=no".

The reason for this is explained in QEMU commit 1e89d0f9bed7 ("block:
Add bdrv_dirname()", 2019-02-25) -- excerpt:

> diff --git a/include/block/block_int.h b/include/block/block_int.h
> index dd7276cde232..e4d4817ea6ff 100644
> --- a/include/block/block_int.h
> +++ b/include/block/block_int.h
> @@ -141,6 +141,13 @@ struct BlockDriver {
>
>      void (*bdrv_refresh_filename)(BlockDriverState *bs, QDict *options);
>
> +    /*
> +     * Returns an allocated string which is the directory name of this BDS: It
> +     * will be used to make relative filenames absolute by prepending this
> +     * function's return value to them.
> +     */
> +    char *(*bdrv_dirname)(BlockDriverState *bs, Error **errp);
> +
>      /* aio */
>      BlockAIOCB *(*bdrv_aio_preadv)(BlockDriverState *bs,
>          uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags,

In other words, bdrv_dirname() intends to strip off the *last pathname
component* and carry over the leading pathname components, so that
*sibling files* of the original can be created.

Obviously this cannot work if the original "pathname" terminates with
some query string. In that case, the "pathname" components to carry
over, to sibling files, would be the leading pathname components, *plus*
the whole query string. The QEMU block layer (in general) simply cannot
do that, and throwing away the query string (such as
"?host_key_check=no") would break access to those sibling files, so the
only thing the SSH block driver can do is reject the attempt.
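Expressed as a sketch (Python for illustration; QEMU's actual implementation is in C), the restriction looks like this:

```python
from urllib.parse import urlparse

def bdrv_dirname(url):
    """Mimic QEMU's bdrv_dirname(): strip the last path component so
    relative backing-file names can be resolved against the result.
    Illustrative sketch only, not QEMU's actual code."""
    parts = urlparse(url)
    if parts.query:
        # A query string such as "?host_key_check=no" qualifies the whole
        # URL; there is no way to keep it while swapping out the last path
        # component, so the only safe choice is to reject the request.
        raise ValueError("cannot generate a base directory: "
                         "URL carries a query string")
    base, _, _ = parts.path.rpartition("/")
    return parts._replace(path=base + "/").geturl()

print(bdrv_dirname("ssh://esxi/vmfs/volumes/ds1/guest/guest-000001.vmdk"))
# -> ssh://esxi/vmfs/volumes/ds1/guest/
```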


(3) Virt-v2v started using the "host_key_check=no" query string right
when the ssh transport option was extended to VMX input, namely in
commit e183bd10edd3 ("v2v: -i vmx: Enhance VMX support with ability to
use ‘-it ssh’ transport.", 2017-12-09).

The commit doesn't seem to explain the use of the query string; in my
opinion (in retrospect) it is a security problem.


(4) Regardless, in commit 7a6f6113a25f ("v2v: -i vmx -it ssh: Replace
qemu block ssh driver with nbdkit-ssh-plugin.", 2019-10-08), virt-v2v
stopped using the QEMU ssh block driver for vmx+ssh, replacing it with
nbdkit-ssh-plugin.

With that, the "host_key_check=no" query string had been eliminated.


(5) "nbdkit-ssh-plugin" is strict about host key checking (correctly
so!), see here:

https://libguestfs.org/nbdkit-ssh-plugin.1.html#Known-hosts

> Known hosts
>
> The SSH server’s host key is checked at connection time, and must be
> present and correct in the local "known hosts" file.
>
> If you have never connected to the SSH server before then the
> connection will usually fail. You can:
>
> * connect to the server first using ssh(1) so you can manually accept
>   the host key, or
>
> * provide the host key in an alternate file which you specify using
>   the known-hosts option, or
>
> * set verify-remote-host=false on the command line. This latter option
>   is dangerous because it allows a MITM attack to be conducted against
>   you.

Virt-v2v does not set "verify-remote-host" (rightly so), and does not
seem to expose "known-hosts" to the user. Therefore virt-v2v users can
rely on the first option -- accept the host key manually, separately, at
first. In my opinion, that's just good enough.


(6) Assuming that the protective overlay can now be created -- that is,
assuming that the *original* symptom (which is totally unrelated to
snapshots) had disappeared --, please retest this use case:

- Is there an actual problem with snapshots?
- If so, do we care? (See comment 1.)

FWIW I don't think snapshots should matter at all. Virt-v2v does not
attempt to carry over snapshots from source to destination, and all the
conversion and copying steps (with nbdcopy) only care about the top --
active -- layer of the source disk image (more precisely, about the
protective overlay we place upon *that*). Snapshots on the source are an
internal representation detail of the source disk image.

It's hard for me to suggest a resolution for this BZ, as the bug title
("snapshots break conversion") and the actual symptom in comment 0
(QEMU's ssh block driver refusing to create the protective overlay)
contradict each other.

Regarding the title, I would say, "I don't think so (suggest
WORKSFORME), but please retest".

Regarding comment 0, I'd say "fixed in the current release; we no longer
use QEMU's ssh block driver for the protective overlay".


So... I guess, please retest whether we can now open source disk images
with snapshots. Thanks!

Comment 9 mxie@redhat.com 2022-03-04 08:47:05 UTC
(In reply to Laszlo Ersek from comment #8)
> So... I guess, please retest whether we can now open source disk images
> with snapshots. Thanks!

Tested the bug with the builds below:
virt-v2v-1.45.99-1.el9.x86_64
libguestfs-1.46.1-2.el9.x86_64
guestfs-tools-1.46.1-6.el9.x86_64
libvirt-libs-8.0.0-6.el9.x86_64
qemu-img-6.2.0-11.el9.x86_64


Steps:
1. Convert a guest which has a snapshot from VMware via vmx+ssh by virt-v2v
# virt-v2v -i vmx -it ssh ssh://root.199.217/vmfs/volumes/esx7.0-function/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vmx
[   0.0] Setting up the source: -i vmx ssh://root.199.217/vmfs/volumes/esx7.0-function/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vmx
[   1.6] Opening the source
[   6.1] Inspecting the source
virt-v2v: error: inspection could not detect the source guest (or physical 
machine).

Assuming that you are running virt-v2v/virt-p2v on a source which is 
supported (and not, for example, a blank disk), then this should not 
happen.

No root device found in this operating system image.

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


2. Convert the guest which is used in step 1 from VMware via 'vmx' by v2v
# virt-v2v -i vmx /media/esx7.0/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vm
esx7.0-rhel9.0-snapshot.vmdk  esx7.0-rhel9.0-snapshot.vmsd  esx7.0-rhel9.0-snapshot.vmx   
[root@dell-per740-53 esx7.0]# virt-v2v -i vmx /media/esx7.0/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vmx 
[   0.0] Setting up the source: -i vmx /media/esx7.0/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vmx
[   1.0] Opening the source
[   6.1] Inspecting the source
[  14.3] Checking for sufficient free disk space in the guest
[  14.3] Converting Red Hat Enterprise Linux 9.0 Beta (Plow) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 143.1] Mapping filesystem data to avoid copying unused and blank areas
[ 144.7] Closing the overlay
[ 145.0] Assigning disks to buses
[ 145.0] Checking if the guest needs BIOS or UEFI to boot
[ 145.0] Setting up the destination: -o libvirt
[ 146.2] Copying disk 1/1
^Cvirt-v2v: Exiting on signal SIGINT------------]

3. Convert the guest which is used in step 1 from VMware via vddk by v2v
# virt-v2v -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 -it vddk -io vddk-libdir=/home/vddk7.0.2 -io  vddk-thumbprint=B5:52:1F:B4:21:09:45:24:51:32:56:F6:63:6A:93:5D:54:08:2D:78  -o rhv-upload -of qcow2 -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api  -op /home/rhvpasswd  -os nfs_data -b ovirtmgmt esx7.0-rhel9.0-snapshot  -ip /home/passwd 
[   0.0] Setting up the source: -i libvirt -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 -it vddk esx7.0-rhel9.0-snapshot
[   1.8] Opening the source
[   6.9] Inspecting the source
[  12.8] Checking for sufficient free disk space in the guest
[  12.8] Converting Red Hat Enterprise Linux 9.0 Beta (Plow) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 121.2] Mapping filesystem data to avoid copying unused and blank areas
[ 122.1] Closing the overlay
[ 122.4] Assigning disks to buses
[ 122.4] Checking if the guest needs BIOS or UEFI to boot
[ 122.4] Setting up the destination: -o rhv-upload -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data
[ 143.8] Copying disk 1/1
█ 100% [****************************************]
[ 283.0] Creating output metadata
[ 289.8] Finishing off

Comment 18 Laszlo Ersek 2022-04-08 10:52:21 UTC
(In reply to Richard W.M. Jones from comment #11)
> IIRC when a guest is snapshotted, the name of the *-flat.vmdk file changes.
> We don't have any way to list the remote files over ssh (or over https
> for that matter - the same problem applies there), so we have to guess
> what filename is used.  We could use libssh (sftp) to query the remote server
> for the list of files.  It should become clearer once you've got ESXi
> installed.

Before snapshot:

[root@esxi:~] ls -l /vmfs/volumes/datastore1/win2019
total 10240064
-rw-r--r--    1 root     root        115737 Apr  8 10:25 vmware-1.log
-rw-r--r--    1 root     root        122986 Apr  8 10:26 vmware-2.log
-rw-r--r--    1 root     root        128871 Apr  8 10:30 vmware-3.log
-rw-r--r--    1 root     root        195786 Apr  8 10:37 vmware-4.log
-rw-r--r--    1 root     root        227331 Apr  8 10:41 vmware-5.log
-rw-r--r--    1 root     root        150797 Apr  8 10:45 vmware-6.log
-rw-r--r--    1 root     root        135475 Apr  8 10:45 vmware.log
-rw-------    1 root     root     96636764160 Apr  8 10:45 win2019-flat.vmdk
-rw-------    1 root     root        270840 Apr  8 10:45 win2019.nvram
-rw-------    1 root     root           529 Apr  8 10:45 win2019.vmdk
-rw-r--r--    1 root     root             0 Apr  8 10:24 win2019.vmsd
-rwxr-xr-x    1 root     root          3563 Apr  8 10:45 win2019.vmx
-rw-------    1 root     root          3690 Apr  8 10:40 win2019.vmxf

After snapshotting the offline guest:

[root@esxi:~] ls -l /vmfs/volumes/datastore1/win2019
total 10242112
-rw-r--r--    1 root     root        115737 Apr  8 10:25 vmware-1.log
-rw-r--r--    1 root     root        122986 Apr  8 10:26 vmware-2.log
-rw-r--r--    1 root     root        128871 Apr  8 10:30 vmware-3.log
-rw-r--r--    1 root     root        195786 Apr  8 10:37 vmware-4.log
-rw-r--r--    1 root     root        227331 Apr  8 10:41 vmware-5.log
-rw-r--r--    1 root     root        150797 Apr  8 10:45 vmware-6.log
-rw-r--r--    1 root     root        135475 Apr  8 10:45 vmware.log
-rw-------    1 root     root     383778816 Apr  8 10:48 win2019-000001-sesparse.vmdk
-rw-------    1 root     root           311 Apr  8 10:48 win2019-000001.vmdk
-rw-------    1 root     root        294668 Apr  8 10:48 win2019-Snapshot1.vmsn
-rw-------    1 root     root     96636764160 Apr  8 10:45 win2019-flat.vmdk
-rw-------    1 root     root        270840 Apr  8 10:45 win2019.nvram
-rw-------    1 root     root           529 Apr  8 10:45 win2019.vmdk
-rw-r--r--    1 root     root           434 Apr  8 10:48 win2019.vmsd
-rwxr-xr-x    1 root     root          3570 Apr  8 10:48 win2019.vmx
-rw-------    1 root     root          3690 Apr  8 10:40 win2019.vmxf

The "win2019-flat.vmdk" file name persists -- I can't reproduce the issue.

Comment 19 Laszlo Ersek 2022-04-08 12:08:24 UTC
I can reproduce the issue (clarified in Scenario 1 in comment 9). Here's
what happens.

With no snapshot, the "scsi0:0.fileName" entry in the VMX file is:

> scsi0:0.fileName = "win2019.vmdk"

In virt-v2v, we extend this to "win2019-flat.vmdk". And indeed, the
contents of that file are a raw disk image.

With the snapshot created, the "scsi0:0.fileName" entry in the VMX file
changes to:

> scsi0:0.fileName = "win2019-000001.vmdk"

and in virt-v2v, we test for the existence of
"win2019-000001-flat.vmdk":

> ssh 'esxi' test -f '/vmfs/volumes/datastore1/win2019/win2019-000001-flat.vmdk'

and when that fails, we stick with the unaltered filename
"win2019-000001.vmdk", and posit that its format is not "raw", but
"vmdk". Here's the contents indeed:

> # Disk DescriptorFile
> version=1
> encoding="UTF-8"
> CID=e9553752
> parentCID=e9553752
> createType="seSparse"
> parentFileNameHint="win2019.vmdk"
> # Extent description
> RW 188743680 SESPARSE "win2019-000001-sesparse.vmdk"
> 
> # The Disk Data Base 
> #DDB
> 
> ddb.grain = "8"
> ddb.longContentID = "d73f9d51264714ca1e60df60e9553752"

So it basically implements a "backing chain" to "win2019.vmdk".

The issue is that we still end up handling this descriptor (text) file
as a raw disk image! See "input/input_vmx.ml":

            (* XXX This is a hack to work around qemu / VMDK limitation
             *   "Cannot use relative extent paths with VMDK descriptor file"
             * We can remove this if the above is fixed.
             *)
            let abs_path, format =
              let flat_vmdk =
                PCRE.replace (PCRE.compile "\\.vmdk$") "-flat.vmdk" abs_path in
              if remote_file_exists uri flat_vmdk then (flat_vmdk, "raw")
              else (abs_path, format) in

            (* XXX In virt-v2v 1.42+ importing from VMX over SSH
             * was broken if the -flat.vmdk file did not exist.
             * It is still broken here.
             *)
            ignore format;

We simply proceed to exposing this descriptor file as a network block
device with nbdkit, but then (as expected) fail to parse that as a block
device containing a partition table, or an LVM PV, or a raw filesystem,
and then we give up.
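To see why exposing the descriptor as raw fails, note that the two kinds of file are easy to tell apart: the descriptor is a tiny text file, while the "-flat" extent holds raw guest data. A hypothetical check (not something virt-v2v currently does):

```python
def looks_like_vmdk_descriptor(first_bytes):
    """Heuristic: a VMDK descriptor is a small text file that starts with
    a '# Disk DescriptorFile' comment, whereas a -flat extent starts with
    raw guest data. Illustrative sketch only."""
    return first_bytes.lstrip().startswith(b"# Disk DescriptorFile")

descriptor = b'# Disk DescriptorFile\nversion=1\nencoding="UTF-8"\n'
mbr_start = b"\x33\xc0\x8e\xd0"   # typical first bytes of a raw boot sector
assert looks_like_vmdk_descriptor(descriptor)
assert not looks_like_vmdk_descriptor(mbr_start)
```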

Unfortunately, with a snapshotted guest like this, all new writes from
the guest seem to go into the "win2019-000001-sesparse.vmdk" file --
that is, the sparse overlay. I guess this is very similar to qcow2.

So, in case we wanted to expose the *active* layer of the guest disk
over NBD, we'd have to use qemu-nbd (rather than nbdkit) with
--format=vmdk. Some notes on that:

- Per my point (4) in comment 8, in commit 7a6f6113a25f ("v2v: -i vmx
  -it ssh: Replace qemu block ssh driver with nbdkit-ssh-plugin.",
  2019-10-08), we implemented the opposite change; so we'd have to
  revert that in some way;

- but we must not restore the "?host_key_check=no" query string, for two
  reasons (see points (3) and (2) in my comment 8): first, because it is a
  security problem IMO, and second, because with a query string attached
  to the URL, qemu-nbd could not go from the descriptor file

    /vmfs/volumes/datastore1/win2019/win2019-000001.vmdk

  to the next desc file

    /vmfs/volumes/datastore1/win2019/win2019.vmdk

  to the raw file

    /vmfs/volumes/datastore1/win2019/win2019-flat.vmdk

Summary: we can only entertain converting vmware guests with snapshots
over vmx+ssh *if* we roll back to qemu-nbd *but* without the
"?host_key_check=no" query string.

I think this is way too much work for a feature that is otherwise
available with two other transports ("-i vmx" over NFS or otherwise
directly accessible media, plus vpx/vddk). Additionally, per commit
7a6f6113a25f, using nbdkit-ssh-plugin has additional benefits, one of
them being "we can use libvirt again"; and we'd lose those benefits by
going back to qemu-nbd for vmx+ssh.

I'll send a documentation patch for describing the limitation (no
snapshots) with vmx+ssh.

Comment 20 Laszlo Ersek 2022-04-08 12:12:05 UTC
I'd like to note that even without a snapshot, we play fast and loose. Even without a snapshot, the "win2019.vmdk" file is a descriptor file; we just ignore it altogether. Rather than parsing it, and finding the raw image file "win2019-flat.vmdk" from the parsed descriptor, we just go ahead and insert "-flat" ourselves. It seems quite brittle and we shouldn't have more of that.
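A sketch of that parsing approach (hypothetical helper, not virt-v2v code): pull the extent file names and the parent hint out of the descriptor, instead of inserting "-flat" by string substitution.

```python
import re

def parse_vmdk_descriptor(text):
    """Extract the extent file names and the parentFileNameHint from a
    VMDK descriptor file. Illustrative sketch only."""
    # Extent lines look like: RW 188743680 SESPARSE "name.vmdk"
    extents = re.findall(
        r'^(?:RW|RDONLY|NOACCESS)\s+\d+\s+\S+\s+"([^"]+)"',
        text, re.MULTILINE)
    m = re.search(r'^parentFileNameHint="([^"]+)"', text, re.MULTILINE)
    return extents, (m.group(1) if m else None)

desc = '''# Disk DescriptorFile
version=1
createType="seSparse"
parentFileNameHint="win2019.vmdk"
# Extent description
RW 188743680 SESPARSE "win2019-000001-sesparse.vmdk"
'''
extents, parent = parse_vmdk_descriptor(desc)
# extents == ['win2019-000001-sesparse.vmdk'], parent == 'win2019.vmdk'
```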

Comment 21 Laszlo Ersek 2022-04-08 12:59:09 UTC
BTW it's *absolutely terrible* that we do not support sshfs in RHEL!

sshfs is a no-brainer to configure & set up, in comparison to NFS. And consider what it would give us:

- Passwordless login immediately. The user would be responsible for launching sshfs, so no need for virt-v2v to deal with any passwords.

- complete coverage with "-i vmx", such as for snapshotted guests, even in case the user can only connect to ESXi over sftp, and not over NFS. "-i vmx -it ssh" might just as well be deprecated and then eventually removed.

But we cannot point users to sshfs in the documentation because sshfs is not in RHEL. Very sad.

Comment 22 Laszlo Ersek 2022-04-08 13:09:56 UTC
Test run of my patch:

> [   0.0] Setting up the source: -i vmx ssh://esxi/vmfs/volumes/datastore1/win2019/win2019.vmx
> virt-v2v: error: failure: this transport does not support guests with 
> snapshots
> 
> If reporting bugs, run virt-v2v with debugging enabled and include the 
> complete output:
> 
>   virt-v2v -v -x [...]

and, after removing / consolidating the snapshots:

[   0.0] Setting up the source: -i vmx ssh://esxi/vmfs/volumes/datastore1/win2019/win2019.vmx
[   1.9] Opening the source
[   6.5] Inspecting the source
[  13.9] Checking for sufficient free disk space in the guest
[  13.9] Converting Windows Server 2019 Datacenter to run on KVM
virt-v2v: This guest has virtio drivers installed.
[  35.7] Mapping filesystem data to avoid copying unused and blank areas
[  38.5] Closing the overlay
[  38.6] Assigning disks to buses
[  38.6] Checking if the guest needs BIOS or UEFI to boot
virt-v2v: This guest requires UEFI on the target to boot.
[  38.6] Setting up the destination: -o libvirt
[  40.0] Copying disk 1/1
█ 100% [****************************************]
[ 141.4] Creating output metadata
[ 141.4] Finishing off

Comment 23 Laszlo Ersek 2022-04-08 13:14:02 UTC
(In reply to Laszlo Ersek from comment #22)
> Test run of my patch:
> 
> > [   0.0] Setting up the source: -i vmx ssh://esxi/vmfs/volumes/datastore1/win2019/win2019.vmx
> > virt-v2v: error: failure: this transport does not support guests with 
> > snapshots
> > 
> > If reporting bugs, run virt-v2v with debugging enabled and include the 
> > complete output:
> > 
> >   virt-v2v -v -x [...]
> 
> and, after removing / consolidating the snapshots:
> 
> [   0.0] Setting up the source: -i vmx
> ssh://esxi/vmfs/volumes/datastore1/win2019/win2019.vmx
> [   1.9] Opening the source
> [   6.5] Inspecting the source
> [  13.9] Checking for sufficient free disk space in the guest
> [  13.9] Converting Windows Server 2019 Datacenter to run on KVM
> virt-v2v: This guest has virtio drivers installed.
> [  35.7] Mapping filesystem data to avoid copying unused and blank areas
> [  38.5] Closing the overlay
> [  38.6] Assigning disks to buses
> [  38.6] Checking if the guest needs BIOS or UEFI to boot
> virt-v2v: This guest requires UEFI on the target to boot.
> [  38.6] Setting up the destination: -o libvirt
> [  40.0] Copying disk 1/1
> █ 100% [****************************************]
> [ 141.4] Creating output metadata
> [ 141.4] Finishing off

... and the converted guest boots fine.

Comment 24 Laszlo Ersek 2022-04-08 13:17:30 UTC
[v2v PATCH] input_vmx: cleanly reject guests with snapshots when using "-it ssh"
Message-Id: <20220408131639.11977-1-lersek>
https://listman.redhat.com/archives/libguestfs/2022-April/028607.html

Comment 25 Laszlo Ersek 2022-04-09 06:13:51 UTC
[v2v PATCH v2] input_vmx: cleanly reject guests with snapshots when using "-it ssh"
Message-Id: <20220409061252.6551-1-lersek>
https://listman.redhat.com/archives/libguestfs/2022-April/028625.html

Comment 26 Laszlo Ersek 2022-04-09 07:14:09 UTC
(In reply to Laszlo Ersek from comment #25)
> [v2v PATCH v2] input_vmx: cleanly reject guests with snapshots when using "-it ssh"
> Message-Id: <20220409061252.6551-1-lersek>
> https://listman.redhat.com/archives/libguestfs/2022-April/028625.html

Commit e0b08ee0213f.

Comment 29 mxie@redhat.com 2022-04-15 07:07:43 UTC
Verified the bug with the builds below:
virt-v2v-2.0.3-1.el9.x86_64
libguestfs-1.48.1-1.el9.x86_64
guestfs-tools-1.48.0-1.el9.x86_64
libvirt-libs-8.2.0-1.el9.x86_64
qemu-img-6.2.0-13.el9.x86_64
nbdkit-server-1.30.2-1.el9.x86_64
libnbd-1.12.2-1.el9.x86_64

Steps:
1. Prepare a guest which has a snapshot on VMware 

2. Convert this guest from VMware with vmx+ssh by virt-v2v
# virt-v2v -i vmx -it ssh ssh://root.199.217/vmfs/volumes/esx7.0-function/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vmx -ip /home/esxpw 
[   0.0] Setting up the source: -i vmx ssh://root.199.217/vmfs/volumes/esx7.0-function/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vmx
(root.199.217) Password: 
(root.199.217) Password: 
virt-v2v: error: This transport does not support guests with snapshots. 
Either collapse the snapshots for this guest and try the conversion again, 
or use one of the alternate conversion methods described in 
virt-v2v-input-vmware(1) section "NOTES".

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


3. Check the virt-v2v-input-vmware man page, section "NOTES"
# man virt-v2v-input-vmware
.....
 -i vmx -it ssh ssh://...
           Full documentation: "INPUT FROM VMWARE VMX"

           This is similar to the method above, except it uses an SSH connection to ESXi to read the GUEST.vmx file
           and associated disks.  This requires that you have enabled SSH access to the VMware ESXi hypervisor - in
           the default ESXi configuration this is turned off.

           This transport is incompatible with guests that have snapshots; refer to "NOTES".
.....
.....
NOTES
       When accessing the guest.vmx file on ESXi over an SSH connection (that is, when using the -i vmx -it ssh
       options), the conversion will not work if the guest has snapshots (files called guest-000001.vmdk and
       similar).  Either collapse the snapshots for the guest and retry the conversion with the same -i vmx -it ssh
       options, or leave the snapshots intact and use a transport different from SSH: just -i vmx, or -ic vpx://...
       -it vddk or -ic esx://... -it vddk.  Refer to https://bugzilla.redhat.com/1774386.



Result:
   The virt-v2v error message is clear when converting a guest that has a snapshot via vmx+ssh, and the virt-v2v-input-vmware man page documents this scenario clearly, so moving the bug from ON_QA to VERIFIED.

Comment 31 errata-xmlrpc 2022-11-15 09:55:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: virt-v2v security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7968

