Bug 1774386
Summary:          input_vmx: cleanly reject guests with snapshots when using "-it ssh"

Product:          Red Hat Enterprise Linux 9
Component:        virt-v2v
Version:          unspecified
Hardware:         x86_64
OS:               Unspecified
Status:           CLOSED ERRATA
Severity:         high
Priority:         medium
Whiteboard:       V2V
Keywords:         Reopened, Triaged
Target Milestone: rc
Target Release:   ---
Fixed In Version: virt-v2v-2.0.7-1.el9
Reporter:         mxie <mxie>
Assignee:         Laszlo Ersek <lersek>
QA Contact:       mxie <mxie>
CC:               ehadley, fdupont, jsuchane, juzhou, lersek, mzhan, ptoscano, rjones, tzheng, xiaodwan
Type:             Bug
Last Closed:      2022-11-15 09:55:44 UTC
Description (mxie@redhat.com, 2019-11-20 08:32:49 UTC)
I'm going to turn this into an RFE because virt-v2v only has partial support for handling guests with snapshots, and we usually advise that customers should remove snapshots before conversion. Also, it appears to work for VDDK, which is how most IMS users are using virt-v2v.

Bulk update: Move RHEL-AV bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

The comment above is incorrect but was generated by a process we have no control over; reopening.

Here's my analysis:

(1) I don't understand *at all* how the bug report relates to guests that have snapshots on the *input* hypervisor. The error message in comment 0 is entirely unrelated to snapshots. Consider:

> [ 0.8] Creating an overlay to protect the source from being modified
> qemu-img: /var/tmp/v2vovld11d72.qcow2: Cannot use relative paths with
> VMDK descriptor file
> 'ssh://root.75.219:22/vmfs/volumes/esx6.7/esx6.7-rhel6.10-x86_64_1/esx6.7-rhel6.10-x86_64-000001.vmdk?host_key_check=no':
> Cannot generate a base directory with host_key_check set
> Could not open backing image to determine size.

The problem here is not that the input disk image contains snapshots. Instead, the problem is that qemu-img cannot create the protective overlay for which the original disk image would be a backing file. This is entirely orthogonal to whether the original disk image contains snapshots or not.

(2) The qemu-img error is caused by the following. QEMU commit 21205c7c3bd0 ("block/ssh: Implement .bdrv_dirname()", 2019-05-07) very correctly implements the "bdrv_dirname" callback for QEMU's ssh block driver such that the operation fails if the original location specifier contains a *query string* -- for example, "?host_key_check=no". The reason for this is explained in QEMU commit 1e89d0f9bed7 ("block: Add bdrv_dirname()", 2019-02-25) -- excerpt:

> diff --git a/include/block/block_int.h b/include/block/block_int.h
> index dd7276cde232..e4d4817ea6ff 100644
> --- a/include/block/block_int.h
> +++ b/include/block/block_int.h
> @@ -141,6 +141,13 @@ struct BlockDriver {
>
>      void (*bdrv_refresh_filename)(BlockDriverState *bs, QDict *options);
>
> +    /*
> +     * Returns an allocated string which is the directory name of this BDS: It
> +     * will be used to make relative filenames absolute by prepending this
> +     * function's return value to them.
> +     */
> +    char *(*bdrv_dirname)(BlockDriverState *bs, Error **errp);
> +
>      /* aio */
>      BlockAIOCB *(*bdrv_aio_preadv)(BlockDriverState *bs,
>          uint64_t offset, uint64_t bytes, QEMUIOVector *qiov, int flags,

In other words, bdrv_dirname() intends to strip off the *last pathname component* and carry over the leading pathname components, so that *sibling files* of the original can be located. Obviously this cannot work if the original "pathname" terminates with some query string. In that case, the "pathname" components to carry over to sibling files would be the leading pathname components *plus* the whole query string. The QEMU block layer (in general) simply cannot do that, and throwing away the query string (such as "?host_key_check=no") would break access to those sibling files, so the only thing the SSH block driver can do is reject the attempt.
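To make point (2) concrete, here is a minimal sketch of the two cases as qemu-img sees them (the paths, hostname and image names below are made up for illustration). With a plain backing path, qemu-img can compute the backing file's directory, and creating the protective overlay works:

# qemu-img create -f qcow2 -F vmdk -b /var/tmp/guest-000001.vmdk /var/tmp/overlay.qcow2

With a query string appended to the backing location, .bdrv_dirname() cannot produce a usable base directory -- it would have to carry "?host_key_check=no" over to the sibling files -- so the equivalent command fails with the error quoted in point (1):

# qemu-img create -f qcow2 -F vmdk \
    -b 'ssh://root@esxi/vmfs/volumes/ds1/guest/guest-000001.vmdk?host_key_check=no' \
    /var/tmp/overlay.qcow2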
(3) Virt-v2v started using the "host_key_check=no" query string right when the ssh transport option was extended to VMX input, namely in commit e183bd10edd3 ("v2v: -i vmx: Enhance VMX support with ability to use ‘-it ssh’ transport.", 2017-12-09). The commit doesn't seem to explain the use of the query string; in my opinion (in retrospect) it is a security problem.

(4) Regardless, in commit 7a6f6113a25f ("v2v: -i vmx -it ssh: Replace qemu block ssh driver with nbdkit-ssh-plugin.", 2019-10-08), virt-v2v stopped using the QEMU ssh block driver for vmx+ssh, replacing it with nbdkit-ssh-plugin. With that, the "host_key_check=no" query string was eliminated.

(5) "nbdkit-ssh-plugin" is strict about host key checking (correctly so!), see here:

https://libguestfs.org/nbdkit-ssh-plugin.1.html#Known-hosts

> Known hosts
>
> The SSH server’s host key is checked at connection time, and must be
> present and correct in the local "known hosts" file.
>
> If you have never connected to the SSH server before then the
> connection will usually fail. You can:
>
> * connect to the server first using ssh(1) so you can manually accept
>   the host key, or
>
> * provide the host key in an alternate file which you specify using
>   the known-hosts option, or
>
> * set verify-remote-host=false on the command line. This latter option
>   is dangerous because it allows a MITM attack to be conducted against
>   you.

Virt-v2v does not set "verify-remote-host" (rightly so), and does not seem to expose "known-hosts" to the user. Therefore virt-v2v users can rely on the first option -- accept the host key manually, separately, at first. In my opinion, that's just good enough.

(6) Assuming that the protective overlay can now be created -- that is, assuming that the *original* symptom (which is totally unrelated to snapshots) has disappeared --, please retest this use case:

- Is there an actual problem with snapshots?
- If so, do we care? (See comment 1.)

FWIW I don't think snapshots should matter at all. Virt-v2v does not attempt to carry over snapshots from source to destination, and all the conversion and copying steps (with nbdcopy) only care about the top -- active -- layer of the source disk image (more precisely, about the protective overlay we place upon *that*). Snapshots on the source are an internal representation detail of the source disk image.

It's hard for me to suggest a resolution for this BZ, as the bug title ("snapshots break conversion") and the actual symptom in comment 0 (QEMU's ssh block driver refusing to create the protective overlay) contradict each other. Regarding the title, I would say, "I don't think so (suggest WORKSFORME), but please retest". Regarding comment 0, I'd say "fixed in the current release; we no longer use QEMU's ssh block driver for the protective overlay".

So... I guess, please retest whether we can now open source disk images with snapshots. Thanks!
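As a concrete illustration of the first option in point (5) -- the hostname below is made up -- a single interactive connection is enough to get the host key into ~/.ssh/known_hosts, where nbdkit-ssh-plugin will find it on subsequent runs:

# ssh root@esxi.example.com true
The authenticity of host 'esxi.example.com' can't be established.
...
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes

After that, vmx+ssh conversions pass host key verification non-interactively.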
(In reply to Laszlo Ersek from comment #8)
> So... I guess, please retest whether we can now open source disk images
> with snapshots. Thanks!

Test the bug with the builds below:

virt-v2v-1.45.99-1.el9.x86_64
libguestfs-1.46.1-2.el9.x86_64
guestfs-tools-1.46.1-6.el9.x86_64
libvirt-libs-8.0.0-6.el9.x86_64
qemu-img-6.2.0-11.el9.x86_64

Steps:

1. Convert a guest which has a snapshot from VMware via vmx+ssh by virt-v2v:

# virt-v2v -i vmx -it ssh ssh://root.199.217/vmfs/volumes/esx7.0-function/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vmx
[ 0.0] Setting up the source: -i vmx ssh://root.199.217/vmfs/volumes/esx7.0-function/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vmx
[ 1.6] Opening the source
[ 6.1] Inspecting the source
virt-v2v: error: inspection could not detect the source guest (or physical machine).

Assuming that you are running virt-v2v/virt-p2v on a source which is supported (and not, for example, a blank disk), then this should not happen.

No root device found in this operating system image.

If reporting bugs, run virt-v2v with debugging enabled and include the complete output:

virt-v2v -v -x [...]

2. Convert the guest which is used in step 1 from VMware via 'vmx' by virt-v2v:

[root@dell-per740-53 esx7.0]# virt-v2v -i vmx /media/esx7.0/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vmx
[ 0.0] Setting up the source: -i vmx /media/esx7.0/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vmx
[ 1.0] Opening the source
[ 6.1] Inspecting the source
[ 14.3] Checking for sufficient free disk space in the guest
[ 14.3] Converting Red Hat Enterprise Linux 9.0 Beta (Plow) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 143.1] Mapping filesystem data to avoid copying unused and blank areas
[ 144.7] Closing the overlay
[ 145.0] Assigning disks to buses
[ 145.0] Checking if the guest needs BIOS or UEFI to boot
[ 145.0] Setting up the destination: -o libvirt
[ 146.2] Copying disk 1/1
^Cvirt-v2v: Exiting on signal SIGINT

3. Convert the guest which is used in step 1 from VMware via vddk by virt-v2v:

# virt-v2v -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 -it vddk -io vddk-libdir=/home/vddk7.0.2 -io vddk-thumbprint=B5:52:1F:B4:21:09:45:24:51:32:56:F6:63:6A:93:5D:54:08:2D:78 -o rhv-upload -of qcow2 -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data -b ovirtmgmt esx7.0-rhel9.0-snapshot -ip /home/passwd
[ 0.0] Setting up the source: -i libvirt -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 -it vddk esx7.0-rhel9.0-snapshot
[ 1.8] Opening the source
[ 6.9] Inspecting the source
[ 12.8] Checking for sufficient free disk space in the guest
[ 12.8] Converting Red Hat Enterprise Linux 9.0 Beta (Plow) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 121.2] Mapping filesystem data to avoid copying unused and blank areas
[ 122.1] Closing the overlay
[ 122.4] Assigning disks to buses
[ 122.4] Checking if the guest needs BIOS or UEFI to boot
[ 122.4] Setting up the destination: -o rhv-upload -oc https://dell-per740-22.lab.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data
[ 143.8] Copying disk 1/1
█ 100% [****************************************]
[ 283.0] Creating output metadata
[ 289.8] Finishing off

(In reply to Richard W.M. Jones from comment #11)
> IIRC when a guest is snapshotted, the name of the *-flat.vmdk file changes.
> We don't have any way to list the remote files over ssh (or over https
> for that matter - the same problem applies there), so we have to guess
> what filename is used. We could use libssh (sftp) to query the remote server
> for the list of files. It should become clearer once you've got ESXi
> installed.
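One possible shape of that sftp query, as a hedged sketch (hostname and datastore path are made up, and this assumes the ESXi SSH service exposes the sftp subsystem): listing the datastore directory in batch mode would let virt-v2v see the actual snapshot file names instead of guessing them.

# echo 'ls -l /vmfs/volumes/datastore1/win2019' | sftp -b - root@esxi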
Before snapshot:

[root@esxi:~] ls -l /vmfs/volumes/datastore1/win2019
total 10240064
-rw-r--r-- 1 root root      115737 Apr  8 10:25 vmware-1.log
-rw-r--r-- 1 root root      122986 Apr  8 10:26 vmware-2.log
-rw-r--r-- 1 root root      128871 Apr  8 10:30 vmware-3.log
-rw-r--r-- 1 root root      195786 Apr  8 10:37 vmware-4.log
-rw-r--r-- 1 root root      227331 Apr  8 10:41 vmware-5.log
-rw-r--r-- 1 root root      150797 Apr  8 10:45 vmware-6.log
-rw-r--r-- 1 root root      135475 Apr  8 10:45 vmware.log
-rw------- 1 root root 96636764160 Apr  8 10:45 win2019-flat.vmdk
-rw------- 1 root root      270840 Apr  8 10:45 win2019.nvram
-rw------- 1 root root         529 Apr  8 10:45 win2019.vmdk
-rw-r--r-- 1 root root           0 Apr  8 10:24 win2019.vmsd
-rwxr-xr-x 1 root root        3563 Apr  8 10:45 win2019.vmx
-rw------- 1 root root        3690 Apr  8 10:40 win2019.vmxf

After snapshotting the offline guest:

[root@esxi:~] ls -l /vmfs/volumes/datastore1/win2019
total 10242112
-rw-r--r-- 1 root root      115737 Apr  8 10:25 vmware-1.log
-rw-r--r-- 1 root root      122986 Apr  8 10:26 vmware-2.log
-rw-r--r-- 1 root root      128871 Apr  8 10:30 vmware-3.log
-rw-r--r-- 1 root root      195786 Apr  8 10:37 vmware-4.log
-rw-r--r-- 1 root root      227331 Apr  8 10:41 vmware-5.log
-rw-r--r-- 1 root root      150797 Apr  8 10:45 vmware-6.log
-rw-r--r-- 1 root root      135475 Apr  8 10:45 vmware.log
-rw------- 1 root root   383778816 Apr  8 10:48 win2019-000001-sesparse.vmdk
-rw------- 1 root root         311 Apr  8 10:48 win2019-000001.vmdk
-rw------- 1 root root      294668 Apr  8 10:48 win2019-Snapshot1.vmsn
-rw------- 1 root root 96636764160 Apr  8 10:45 win2019-flat.vmdk
-rw------- 1 root root      270840 Apr  8 10:45 win2019.nvram
-rw------- 1 root root         529 Apr  8 10:45 win2019.vmdk
-rw-r--r-- 1 root root         434 Apr  8 10:48 win2019.vmsd
-rwxr-xr-x 1 root root        3570 Apr  8 10:48 win2019.vmx
-rw------- 1 root root        3690 Apr  8 10:40 win2019.vmxf

The "win2019-flat.vmdk" file name persists -- I can't reproduce the issue.

I can reproduce the issue (clarified in Scenario 1 in comment 9). Here's what happens.

With no snapshot, the "scsi0:0.fileName" entry in the VMX file is:

> scsi0:0.fileName = "win2019.vmdk"

In virt-v2v, we extend this to "win2019-flat.vmdk". And indeed, the contents of that file are a raw disk image.

With the snapshot created, the "scsi0:0.fileName" entry in the VMX file changes to:

> scsi0:0.fileName = "win2019-000001.vmdk"

and in virt-v2v, we test for the existence of "win2019-000001-flat.vmdk":

> ssh 'esxi' test -f '/vmfs/volumes/datastore1/win2019/win2019-000001-flat.vmdk'

and when that fails, we stick with the unaltered filename "win2019-000001.vmdk", and posit that its format is not "raw" but "vmdk". Here are the contents indeed:

> # Disk DescriptorFile
> version=1
> encoding="UTF-8"
> CID=e9553752
> parentCID=e9553752
> createType="seSparse"
> parentFileNameHint="win2019.vmdk"
> # Extent description
> RW 188743680 SESPARSE "win2019-000001-sesparse.vmdk"
>
> # The Disk Data Base
> #DDB
>
> ddb.grain = "8"
> ddb.longContentID = "d73f9d51264714ca1e60df60e9553752"

So it basically implements a "backing chain" to "win2019.vmdk". The issue is that we still end up handling this descriptor (text) file as a raw disk image! See "input/input_vmx.ml":

(* XXX This is a hack to work around qemu / VMDK limitation
 *   "Cannot use relative extent paths with VMDK descriptor file"
 * We can remove this if the above is fixed.
 *)
let abs_path, format =
  let flat_vmdk =
    PCRE.replace (PCRE.compile "\\.vmdk$") "-flat.vmdk" abs_path in
  if remote_file_exists uri flat_vmdk then (flat_vmdk, "raw")
  else (abs_path, format) in

(* XXX In virt-v2v 1.42+ importing from VMX over SSH
 * was broken if the -flat.vmdk file did not exist.
 * It is still broken here.
 *)
ignore format;
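Running the same probe by hand shows what the hack ends up doing for a snapshotted guest (a hedged sketch; host and datastore path as in the listings above):

# ssh root@esxi test -f /vmfs/volumes/datastore1/win2019/win2019-000001-flat.vmdk; echo $?
1
# ssh root@esxi head -1 /vmfs/volumes/datastore1/win2019/win2019-000001.vmdk
# Disk DescriptorFile

The existence test fails, so the fallback keeps "win2019-000001.vmdk" -- a small text file -- and that is what subsequently gets served as if it were the disk contents.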
We simply proceed to exposing this descriptor file as a network block device with nbdkit, but then (as expected) fail to parse it as a block device containing a partition table, or an LVM PV, or a raw filesystem, and then we give up.

Unfortunately, with a snapshotted guest like this, all new writes from the guest seem to go into the "win2019-000001-sesparse.vmdk" file -- that is, the sparse overlay. I guess this is very similar to qcow2. So, in case we wanted to expose the *active* layer of the guest disk over NBD, we'd have to use qemu-nbd (rather than nbdkit) with --format=vmdk. Some notes on that:

- Per my point (4) in comment 8, in commit 7a6f6113a25f ("v2v: -i vmx -it ssh: Replace qemu block ssh driver with nbdkit-ssh-plugin.", 2019-10-08), we implemented the opposite change; so we'd have to revert that in some way;

- but we must not restore the "?host_key_check=no" query string, for two reasons (see points (3) and (2) in my comment 8): first, because it is a security problem IMO, and second, because with a query string attached to the URL, qemu-nbd could not go from the descriptor file

  /vmfs/volumes/datastore1/win2019/win2019-000001.vmdk

  to the next descriptor file

  /vmfs/volumes/datastore1/win2019/win2019.vmdk

  to the raw file

  /vmfs/volumes/datastore1/win2019/win2019-flat.vmdk

Summary: we can only entertain converting VMware guests with snapshots over vmx+ssh *if* we roll back to qemu-nbd, *but* without the "?host_key_check=no" query string. I think this is way too much work for a feature that is otherwise available with two other transports ("-i vmx" over NFS or otherwise directly accessible media, plus vpx/vddk). Additionally, per commit 7a6f6113a25f, using nbdkit-ssh-plugin has additional benefits, one of them being "we can use libvirt again"; we'd lose those benefits by going back to qemu-nbd for vmx+ssh.

I'll send a documentation patch describing the limitation (no snapshots) with vmx+ssh.

I'd like to note that even without a snapshot, we play fast and loose. Even without a snapshot, the "win2019.vmdk" file is a descriptor file; we just ignore it altogether. Rather than parsing it, and finding the raw image file "win2019-flat.vmdk" from the parsed descriptor, we just go ahead and insert "-flat" ourselves. It seems quite brittle and we shouldn't have more of that.

BTW it's *absolutely terrible* that we do not support sshfs in RHEL! sshfs is a no-brainer to configure and set up, in comparison to NFS. And consider what it would give us:

- Passwordless login immediately. The user would be responsible for launching sshfs, so no need for virt-v2v to deal with any passwords.

- Complete coverage with "-i vmx", such as for snapshotted guests, even in case the user can only connect to ESXi over sftp, and not over NFS.

"-i vmx -it ssh" might just as well be deprecated and then eventually removed. But we cannot point users to sshfs in the documentation because sshfs is not in RHEL. Very sad.
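For completeness, the qemu-nbd alternative discussed (and rejected) above would look roughly like this hedged sketch (hostname made up; this also presupposes host-key-verified, non-interactive ssh access):

# qemu-nbd --read-only --format=vmdk \
    'ssh://root@esxi/vmfs/volumes/datastore1/win2019/win2019-000001.vmdk'

QEMU's block layer would then follow parentFileNameHint from the descriptor through win2019.vmdk down to win2019-flat.vmdk on the same host -- which is exactly the sibling-file resolution that a query string on the URL would break.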
Test run of my patch:

> [ 0.0] Setting up the source: -i vmx ssh://esxi/vmfs/volumes/datastore1/win2019/win2019.vmx
> virt-v2v: error: failure: this transport does not support guests with
> snapshots
>
> If reporting bugs, run virt-v2v with debugging enabled and include the
> complete output:
>
> virt-v2v -v -x [...]
and, after removing / consolidating the snapshots:
[ 0.0] Setting up the source: -i vmx ssh://esxi/vmfs/volumes/datastore1/win2019/win2019.vmx
[ 1.9] Opening the source
[ 6.5] Inspecting the source
[ 13.9] Checking for sufficient free disk space in the guest
[ 13.9] Converting Windows Server 2019 Datacenter to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 35.7] Mapping filesystem data to avoid copying unused and blank areas
[ 38.5] Closing the overlay
[ 38.6] Assigning disks to buses
[ 38.6] Checking if the guest needs BIOS or UEFI to boot
virt-v2v: This guest requires UEFI on the target to boot.
[ 38.6] Setting up the destination: -o libvirt
[ 40.0] Copying disk 1/1
█ 100% [****************************************]
[ 141.4] Creating output metadata
[ 141.4] Finishing off
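For reference, a hedged sketch of the snapshot removal/consolidation step above, done from the ESXi shell (this assumes the vim-cmd utility that ships with ESXi; the VM ID comes from the first command):

# vim-cmd vmsvc/getallvms
# vim-cmd vmsvc/snapshot.removeall <VM-ID>

Removing all snapshots merges their contents back into the base disks, after which the VMX file again points at the plain "win2019.vmdk" descriptor.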
(In reply to Laszlo Ersek from comment #22)
> Test run of my patch: [...]

... and the converted guest boots fine.

[v2v PATCH] input_vmx: cleanly reject guests with snapshots when using "-it ssh"
Message-Id: <20220408131639.11977-1-lersek>
https://listman.redhat.com/archives/libguestfs/2022-April/028607.html

[v2v PATCH v2] input_vmx: cleanly reject guests with snapshots when using "-it ssh"
Message-Id: <20220409061252.6551-1-lersek>
https://listman.redhat.com/archives/libguestfs/2022-April/028625.html

(In reply to Laszlo Ersek from comment #25)
> [v2v PATCH v2] input_vmx: cleanly reject guests with snapshots when using "-it ssh"
> Message-Id: <20220409061252.6551-1-lersek>
> https://listman.redhat.com/archives/libguestfs/2022-April/028625.html

Commit e0b08ee0213f.

Verify the bug with the builds below:

virt-v2v-2.0.3-1.el9.x86_64
libguestfs-1.48.1-1.el9.x86_64
guestfs-tools-1.48.0-1.el9.x86_64
libvirt-libs-8.2.0-1.el9.x86_64
qemu-img-6.2.0-13.el9.x86_64
nbdkit-server-1.30.2-1.el9.x86_64
libnbd-1.12.2-1.el9.x86_64

Steps:

1. Prepare a guest which has a snapshot on VMware.

2. Convert this guest from VMware with vmx+ssh by virt-v2v:

# virt-v2v -i vmx -it ssh ssh://root.199.217/vmfs/volumes/esx7.0-function/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vmx -ip /home/esxpw
[ 0.0] Setting up the source: -i vmx ssh://root.199.217/vmfs/volumes/esx7.0-function/esx7.0-rhel9.0-snapshot/esx7.0-rhel9.0-snapshot.vmx
(root.199.217) Password:
(root.199.217) Password:
virt-v2v: error: This transport does not support guests with snapshots.
Either collapse the snapshots for this guest and try the conversion again,
or use one of the alternate conversion methods described in
virt-v2v-input-vmware(1) section "NOTES".

If reporting bugs, run virt-v2v with debugging enabled and include the
complete output:

virt-v2v -v -x [...]

3. Check the virt-v2v-input-vmware man page, section "NOTES":

# man virt-v2v-input-vmware
.....
   -i vmx -it ssh ssh://...
       Full documentation: "INPUT FROM VMWARE VMX"

       This is similar to the method above, except it uses an SSH
       connection to ESXi to read the GUEST.vmx file and associated
       disks. This requires that you have enabled SSH access to the
       VMware ESXi hypervisor - in the default ESXi configuration this
       is turned off.

       This transport is incompatible with guests that have snapshots;
       refer to "NOTES".
.....
   NOTES
       When accessing the guest.vmx file on ESXi over an SSH connection
       (that is, when using the -i vmx -it ssh options), the conversion
       will not work if the guest has snapshots (files called
       guest-000001.vmdk and similar). Either collapse the snapshots for
       the guest and retry the conversion with the same -i vmx -it ssh
       options, or leave the snapshots intact and use a transport
       different from SSH: just -i vmx, or -ic vpx://... -it vddk or
       -ic esx://... -it vddk. Refer to
       https://bugzilla.redhat.com/1774386.

Result: The virt-v2v error message is clear when converting a guest which has a snapshot via vmx+ssh, and the virt-v2v-input-vmware man page clearly documents this scenario, so move the bug from ON_QA to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Low: virt-v2v security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7968