Bug 1557273
| Summary: | [RFE] Upload images directly to oVirt (virt-v2v -o rhv-upload) | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Jaroslav Suchanek <jsuchane> |
| Component: | libguestfs | Assignee: | Richard W.M. Jones <rjones> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | high | Docs Contact: | Jiri Herrmann <jherrman> |
| Priority: | high | | |
| Version: | 7.6 | CC: | ahadas, bthurber, jherrman, michal.skrivanek, mtessun, mxie, mzhan, omachace, ptoscano, rjones, tzheng, virt-bugs |
| Target Milestone: | rc | Keywords: | FutureFeature |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | V2V | | |
| Fixed In Version: | libguestfs-1.38.2-8.el7 | Doc Type: | Release Note |
Doc Text:

*virt-v2v* can import virtual machines directly to RHV

The *virt-v2v* utility is now able to output a converted virtual machine (VM) directly to a Red Hat Virtualization (RHV) client. As a result, importing VMs converted by *virt-v2v* using the Red Hat Virtualization Manager (RHVM) is now easier, faster, and more reliable. Note that this feature requires RHV version 4.2 or later to work properly.
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-10-30 07:45:24 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1519486 | | |
| Bug Blocks: | 1558125, 1559027, 1613946, 1614750 | | |
| Attachments: | | | |
Description (Jaroslav Suchanek, 2018-03-16 11:07:53 UTC)
Latest upstream version: https://www.redhat.com/archives/libguestfs/2018-March/msg00032.html

*** Bug 1512836 has been marked as a duplicate of this bug. ***

Forgot to backport the RHEL-specific switch to Python 2.

Test the bug with these builds:
virt-v2v-1.38.2-2.el7.x86_64
libguestfs-1.38.2-2.el7.x86_64
libvirt-4.3.0-1.el7.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.3.x86_64
rhv: 4.2.4-0.1.el7

Steps:
1. Check rhv-upload and its related options in the v2v man page:

# man virt-v2v
    -o rhv-upload
        Set the output method to rhv-upload. The converted guest is written
        directly to a RHV Data Domain. This is a faster method than -o rhv,
        but requires oVirt or RHV ≥ 4.2. See "OUTPUT TO RHV" below.
    -oo rhv-cafile=ca.pem
        For -o rhv-upload ("OUTPUT TO RHV") only, the ca.pem file
        (Certificate Authority), copied from /etc/pki/ovirt-engine/ca.pem
        on the oVirt engine.
    .....

2. Convert a guest to RHV's data domain:

# virt-v2v rhel7.5 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os 10.66.144.40:/home/nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem -oo rhv-direct=true -v -x
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
virt-v2v: libguestfs 1.38.2rhel=7,release=2.el7,libvirt (x86_64)
libvirt version: 4.3.0
virt-v2v: error: no python binary called ‘python3’ can be found on the $PATH
rm -rf '/var/tmp/rhvupload.gcxZxs'
rm -rf '/var/tmp/null.DsiH5q'
libguestfs: trace: close
libguestfs: closing guestfs handle 0xec9eb0 (state 0)

Result: virt-v2v can't convert the guest to RHV's data domain with the rhv-upload option. Confirmed with Pino; the problem will be fixed in the next libguestfs release.

Verify the bug with these builds:
virt-v2v-1.38.2-3.el7.x86_64
libguestfs-1.38.2-3.el7.x86_64
libvirt-4.3.0-1.el7.x86_64
qemu-kvm-rhev-2.12.0-2.el7.x86_64

Steps:
1. Check rhv-upload and its related options in the v2v man page:

# man virt-v2v
    -o rhv-upload
        Set the output method to rhv-upload. The converted guest is written
        directly to a RHV Data Domain. This is a faster method than -o rhv,
        but requires oVirt or RHV ≥ 4.2. See "OUTPUT TO RHV" below.
    -oo rhv-cafile=ca.pem
        For -o rhv-upload ("OUTPUT TO RHV") only, the ca.pem file
        (Certificate Authority), copied from /etc/pki/ovirt-engine/ca.pem
        on the oVirt engine.
    .....
    -oo rhv-verifypeer
        For -o rhv-upload ("OUTPUT TO RHV") only, verify the oVirt/RHV
        server’s identity by checking the server‘s certificate against the
        Certificate Authority.

2. Convert a guest to RHV's data domain with rhv-upload and rhv-direct=true:

# virt-v2v rhel7.5 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os 10.66.144.40:/home/nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem -oo rhv-direct=true -v -x
libguestfs: trace: set_verbose true
libguestfs: trace: set_verbose = 0
libguestfs: trace: get_cachedir
libguestfs: trace: get_cachedir = "/var/tmp"
virt-v2v: libguestfs 1.38.2rhel=7,release=3.el7,libvirt (x86_64)
libvirt version: 4.3.0
sh: nbdkit: command not found
virt-v2v: error: nbdkit is not installed or not working. It is required to use ‘-o rhv-upload’. See "OUTPUT TO RHV" in the virt-v2v(1) manual.
rm -rf '/var/tmp/rhvupload.Dp4X3q'
rm -rf '/var/tmp/null.BH3jTY'
libguestfs: trace: close
libguestfs: closing guestfs handle 0xf14eb0 (state 0)

Hi Pino, please help to check the above error, thanks.

nbdkit is required for -o rhv-upload, see also bug 1519486 (which is already a dependency of this bug).

Verify the bug with these builds:
virt-v2v-1.38.2-3.el7.x86_64
libguestfs-1.38.2-3.el7.x86_64
libvirt-4.3.0-1.el7.x86_64
qemu-kvm-rhev-2.12.0-2.el7.x86_64

Steps:
1. According to the error in comment 6, check the nbdkit info in "OUTPUT TO RHV" of the virt-v2v manual page:

# man virt-v2v
....
OUTPUT TO RHV
    This new method to upload guests to oVirt or RHV directly via the REST
    API requires oVirt/RHV ≥ 4.2. You need to specify -o rhv-upload as well
    as the following extra parameters:
.....

Result 1: Can't find any info about nbdkit in the "OUTPUT TO RHV" part.

2. Install the related nbdkit and python-ovirt-engine-sdk4 packages on the v2v conversion server after confirming with Pino:

# rpm -qa | grep python-ovirt
python-ovirt-engine-sdk4-4.2.6-1.el7ev.x86_64
# rpm -qa | grep nbdkit
nbdkit-plugin-python2-1.2.2-1.el7ev.x86_64
nbdkit-1.2.2-1.el7ev.x86_64
nbdkit-plugin-python-common-1.2.2-1.el7ev.x86_64

3. Convert a guest to RHV's data domain without setting the output disk format:

# virt-v2v rhel7.5 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os 10.66.144.40:/home/nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem -oo rhv-direct=true -on rhv-upload
[   0.5] Opening the source -i libvirt rhel7.5
[   0.5] Creating an overlay to protect the source from being modified
[   1.2] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os 10.66.144.40:/home/nfs_data
virt-v2v: error: rhv-upload: currently you must use ‘-of raw’. This restriction will be loosened in a future version.

If reporting bugs, run virt-v2v with debugging enabled and include the complete output:

  virt-v2v -v -x [...]

Result 3: Converting a guest to RHV's data domain with rhv-upload can't use the qcow2 format.

4. Convert a guest to RHV's data domain with '-of raw':

# virt-v2v rhel7.5 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os 10.66.144.40:/home/nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem -oo rhv-direct=true -on rhv-upload -of raw
[   0.4] Opening the source -i libvirt rhel7.5
[   0.4] Creating an overlay to protect the source from being modified
[   1.5] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os 10.66.144.40:/home/nfs_data
[   3.0] Opening the overlay
[  34.6] Inspecting the overlay
[  80.4] Checking for sufficient free disk space in the guest
[  80.4] Estimating space required on target for each disk
[  80.4] Converting Red Hat Enterprise Linux Server 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 226.8] Mapping filesystem data to avoid copying unused and blank areas
[ 229.2] Closing the overlay
[ 231.1] Checking if the guest needs BIOS or UEFI to boot
[ 231.1] Assigning disks to buses
[ 231.1] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.y5EoBm/nbdkit1.sock", "file.export": "/" } (raw)
nbdkit: error: /var/tmp/rhvupload.y5EoBm/rhv-upload-plugin.py: open: error: Fault reason is "Operation Failed". Fault detail is "Entity not found: 10.66.144.40:/home/nfs_data". HTTP response code is 404.
qemu-img: Could not open 'json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.y5EoBm/nbdkit1.sock", "file.export": "/" }': Failed to read data: Unexpected end-of-file before all bytes were read
virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the complete output:

  virt-v2v -v -x [...]

Result 4: The conversion failed with a qemu error; for details please refer to the log 'v2v-rhv-upload'.

Hi Pino, please help to check test results 1, 3 and 4, thanks very much!

Created attachment 1445356 [details]
v2v-rhv-upload.log
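For background on the failures above: with -o rhv-upload, virt-v2v exposes the destination disk through an nbdkit Python plugin (the rhv-upload-plugin.py visible in the qemu-img error), which forwards writes to ovirt-imageio. A minimal sketch of the general shape of such a plugin is shown below. This toy serves a 1 MiB in-memory disk instead of an imageio transfer; only the module-level callback names (open, get_size, pread, pwrite) follow the nbdkit Python plugin interface, everything else is illustrative.

```python
# Toy nbdkit Python plugin sketch: an in-memory disk, NOT the real
# rhv-upload-plugin.py. nbdkit loads the module and calls these
# module-level functions for each NBD client connection.

import errno

DISK_SIZE = 1024 * 1024  # 1 MiB toy disk


def open(readonly):
    # Called once per connection; the returned object is the handle
    # passed back to the other callbacks.
    return {"readonly": readonly, "disk": bytearray(DISK_SIZE)}


def get_size(h):
    # Size of the exported virtual disk in bytes.
    return DISK_SIZE


def pread(h, count, offset):
    # Return exactly `count` bytes starting at `offset`.
    return bytes(h["disk"][offset:offset + count])


def pwrite(h, buf, offset):
    # Write `buf` at `offset`; the real plugin turns this into an
    # HTTP PUT against the imageio transfer URL.
    if h["readonly"]:
        raise IOError(errno.EROFS, "read-only export")
    h["disk"][offset:offset + len(buf)] = buf
```

A plugin like this would be run as `nbdkit python script=plugin.py`; virt-v2v does the equivalent internally and points qemu-img at the resulting NBD socket.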
We need to get these upstream fixes to the rhv upload plugin in 7.6:

1. 0ae61ce99c351f9cda598016fb55ccc50313df67 v2v: rhv-upload-plugin: Fix name error

   This makes virt-v2v ready for the next imageio version (hopefully in 4.2.5) supporting keep-alive connections. In my tests this speeds up HTTPS sparse file uploads by 212% and unix socket uploads by 14%. I expect a similar speedup for virt-v2v. See more info in https://gerrit.ovirt.org/#/c/92296/.

2. 0ae61ce99c351f9cda598016fb55ccc50313df67 v2v: rhv-upload-plugin: Fix name error

   This fixes a possible crash when working with old imageio versions that do not support the "zero" feature. If there is a chance that virt-v2v in 7.6 will run with such old versions, we should backport this trivial patch.

(In reply to Nir Soffer from comment #13)
> 1. 0ae61ce99c351f9cda598016fb55ccc50313df67 v2v: rhv-upload-plugin: Fix
> name error
> [...]

This commit hash is the same as (2)... what is the actual commit you wanted to mention?

(In reply to Pino Toscano from comment #14)
> This commit hash is the same as (2)... what is the actual commit you
> wanted to mention?

Sorry, here is the correct info:

1. commit f4e0a8342dbeb2c779c76e1807a37b24a0c96feb v2v: rvh-upload-plugin: Always read the response

   This makes virt-v2v ready for the next imageio version (hopefully in 4.2.5) supporting keep-alive connections. In my tests this speeds up HTTPS sparse file uploads by 212% and unix socket uploads by 14%. I expect a similar speedup for virt-v2v. See more info in https://gerrit.ovirt.org/#/c/92296/.
@Ming Xie: would it be possible to test again with libguestfs-1.38.2-6.el7?

Created attachment 1455480 [details]
rhv-upload-1.38.2-6.log
Created attachment 1456242 [details]
virt-v2v-rhv-upload-graphic.log
Created attachment 1456243 [details]
rhv-upload-video-registered-host.xml
Created attachment 1456244 [details]
rhv-upload-video-rhv4.2.png
> (2) Pls help to check scenario1 -> 1.4; could virt-v2v report the MAC
> address problem before copying, because copying will cost a long time
> and waste the customer's time?

This is a tricky one unless we get help from RHV. The error only happens when we create the VM (at the end). AFAIK there's no way to reserve a MAC before then.

> (3) I have a question about the rhv-direct and rhv-verifypeer options:
> could a value be assigned to them?

rhv-direct is a somewhat mysterious setting. What's supposed to happen is that enabled means we talk directly to the target imageio instead of going through a proxy on the engine. It sounds to me like we should make this the default, but I need to ask Nir about it.

rhv-verifypeer sets some obscure SSL settings:

https://github.com/oVirt/ovirt-engine-sdk/blob/19aa7070b80e60a4cfd910448287aecf9083acbe/sdk/lib/ovirtsdk4/__init__.py#L395

> If a value of true or false is set for them, do they behave the same as
> when not assigned?

These are only true/false at the moment. They both default to false if they are not set.

(In reply to Richard W.M. Jones from comment #24)
> This is a tricky one unless we get help from RHV. The error only
> happens when we create the VM (at the end). AFAIK there's no way
> to reserve a MAC before then.

We can create a vm first, without a disk, ensuring that we have a MAC address and other resources. Then we can upload the disk, and attach it to the vm.

Arik, what do you think?

> rhv-direct is a somewhat mysterious setting. What's supposed to
> happen is that enabled means we talk directly to the target imageio
> instead of going through a proxy on the engine. It sounds to me
> like we should make this the default, but I need to ask Nir about it.

Yes, direct should be the default. The only reason not to use direct is that you run virt-v2v on a host which cannot access the oVirt hypervisor.

Can we do this?

1. Find the current host via engine.
2. If we can use the current host, we are done.
3. If we cannot use the current host, let engine pick the host.
4. Send OPTIONS to transfer_url.
5. If we can communicate with the host, we are done.
6. If we cannot communicate, use proxy_url.

Maybe there is a smarter way to confirm connectivity with the host.

> rhv-verifypeer sets some obscure SSL settings:
>
> https://github.com/oVirt/ovirt-engine-sdk/blob/19aa7070b80e60a4cfd910448287aecf9083acbe/sdk/lib/ovirtsdk4/__init__.py#L395

I don't have any idea about rhv-verifypeer; maybe Ondra can help?

Ondra, do you have a clue about the rhv-verifypeer issue? See comment 25.

> The only reason not to use direct is that you run virt-v2v on a host
> which cannot access the oVirt hypervisor.

Is this ever realistically a thing that could happen? I'd be more than happy to get rid of the direct option completely and use direct always.

> > rhv-verifypeer sets some obscure SSL settings:

One thing I forgot to mention is that we are defaulting to insecure == 1 at the moment.

(In reply to Nir Soffer from comment #25)
> We can create a vm first, without a disk, ensuring that we have a MAC
> address and other resources. Then we can upload the disk, and attach it
> to the vm.
>
> Arik, what do you think?

Right, that's closer to how I think it should have been implemented:

1. The client (e.g., virt-v2v) would trigger a call to ovirt-engine saying "I would like to upload a VM X with disks Y1...Yn".
2. ovirt-engine would add the VM in a locked state, possibly with some progress bar.
3. ovirt-engine would start monitoring the uploads of the disks.
4. Disks are uploaded and attached to the VM.
5. If the uploads fail, ovirt-engine does the roll-back; if they succeed, the VM is unlocked.

The process you proposed above lacks the locking (the VM would not be locked during the uploads of the disks) and the client still has to perform cleanup in case of a failure. So I don't know, it sounds better than what we currently have but we can do better; it depends on how much time we can devote to such changes at this point.

(In reply to Arik from comment #28)
> 1. The client (e.g., virt-v2v) would trigger a call to ovirt-engine
> saying "I would like to upload a VM X with disks Y1...Yn".
> 2. ovirt-engine would add the VM in a locked state, possibly with some
> progress bar.
> 3. ovirt-engine would start monitoring the uploads of the disks.
> 4. Disks are uploaded and attached to the VM.

4.5. If the configuration of the VM needs to be changed because of something that was discovered while converting the disks, the VM configuration is updated.

> 5. If the uploads fail, ovirt-engine does the roll-back; if they
> succeed, the VM is unlocked.

> 4.5. if the configuration of the VM needs to be changed because
> of something that was discovered while converting the disks -
> the VM configuration is updated.
Right, this is essentially the reason we're not doing it like this now.
We don't have a full picture of the metadata until after conversion
has finished.
However - It's possible we could add another output method step between
conversion and copying which is actually I think late enough that we
know the final metadata, but early enough that the long copy has not
started, and that could solve this. It requires a fairly easy and
backwards compatible change to virt-v2v.
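The flow being debated above (create a locked VM first so the MAC address and other resources are reserved, upload and attach the disks, apply any late metadata updates discovered during conversion, then unlock on success or roll back on failure) can be sketched as follows. This is a sketch only: the `engine` object and every method on it are placeholders for engine-side operations, not real oVirt SDK calls.

```python
# Sketch of the proposed create-VM-first import flow. The `engine` object
# is a stand-in for ovirt-engine; none of these method names are real
# SDK or REST API calls.

def import_vm(engine, vm_config, disk_paths):
    # Create the VM up front, locked: this is where the MAC address and
    # other resources would be reserved, before the long disk copy.
    vm = engine.create_vm(vm_config, locked=True)
    attached = []
    try:
        for path in disk_paths:
            disk = engine.upload_disk(path)
            engine.attach_disk(vm, disk)
            attached.append(disk)
        # Step 4.5: apply metadata changes discovered during conversion.
        engine.update_vm(vm, vm_config)
        engine.unlock_vm(vm)
        return vm
    except Exception:
        # Roll back: remove any uploaded disks and the locked VM, so a
        # failed import leaves nothing behind.
        for disk in attached:
            engine.remove_disk(disk)
        engine.remove_vm(vm)
        raise
```

The point of the try/except shape is that the client-side cleanup discussed above becomes a single rollback path; in Arik's preferred design that rollback would live inside ovirt-engine instead.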
(In reply to Richard W.M. Jones from comment #30)
> Right, this is essentially the reason we're not doing it like this now.
> We don't have a full picture of the metadata until after conversion
> has finished.
>
> However - It's possible we could add another output method step between
> conversion and copying [...] It requires a fairly easy and
> backwards compatible change to virt-v2v.

As this was just a bit of refactoring I added it upstream in preparation (note these commits are not required to be backported at the moment):

https://github.com/libguestfs/libguestfs/commit/186ac2bfb21bc4536de93d682eb61a0b8928aeb8
https://github.com/libguestfs/libguestfs/commit/8fdc1d4852bfe5067b35cb8dca387d56b3836645
https://github.com/libguestfs/libguestfs/commit/6ca5986e45bb891f9b06324270cee7c0818e4d8f
https://github.com/libguestfs/libguestfs/commit/7850e0bfb832e44a22f8e6ec35175cde1282c8e0
https://github.com/libguestfs/libguestfs/commit/2894b8f3924835b6d41c0990a9df5033c2be5249

Well, we do not really plan any changes in that flow right now.

Note that regarding MAC conflicts there is more logic on the RHV side. The client decides if the MAC should be reallocated in case of a conflict or when it's outside the MAC pool range of the DC. Currently we try to keep it the same even when it is outside of the pool, and it would fail to import in case of a conflict. OTOH with conversions there is no practical chance that such a conflict can happen, as the MAC prefix is different for KVM VMs and VMware VMs; you could only hit it when you try to import the same VM once again. That doesn't sound too important.

Plus, as a fail-safe, there is an option to just produce the OVF and let end users call the REST API themselves, possibly after adjusting it.

Pino, is there anything to track here other than clarification of the man page?

I have filed bug 1598715 to track the problem of comment 20 -> scenario1 -> 1.3.3.

(In reply to Nir Soffer from comment #26)
> Ondra, do you have a clue about the rhv-verifypeer issue? see comment 25.

As per the Python SDK documentation:

`insecure`:: A boolean flag that indicates if the server TLS certificate and host name should be checked.

`ca_file`:: A PEM file containing the trusted CA certificates. The certificate presented by the server will be verified using these CA certificates. If the `ca_file` parameter is not set, the system-wide CA certificate store is used.

If insecure=True, we don't check the certificate and host name. If insecure=False and ca_file is passed, we verify the certificate. For testing purposes it's OK to use insecure=True, but for production use you should pass the CA file.

Verify the bug with these builds:
virt-v2v-1.38.2-8.el7.x86_64
libguestfs-1.38.2-8.el7.x86_64
libvirt-4.5.0-3.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64
nbdkit-1.2.4-4.el7.x86_64
nbdkit-plugin-python2-1.2.4-4.el7.x86_64
nbdkit-plugin-vddk-1.2.4-1.el7.x86_64
OVMF-20180508-2.gitee3198e672e2.el7.noarch
virtio-win-1.9.4-2.el7.noarch
ovirt-imageio-daemon-1.4.1-0.el7ev.noarch
RHV: 4.2.5-0.1.el7ev

Steps:
1. Check related rhv-upload info in the v2v man page:

# man virt-v2v
....
    -o rhv-upload is used to write to a RHV / oVirt target. -o rhv is a
    legacy method to write to RHV / oVirt < 4.2. -o vdsm is only used when
    virt-v2v runs under VDSM control.
....
    -os storage
        For -o rhv-upload, this is the name of the destination Storage
        Domain.
....
OUTPUT TO RHV
....
    -of raw
        Currently you must use -of raw and you cannot use -oa preallocated.
....
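As an aside, the `insecure` / `ca_file` semantics quoted from the Python SDK documentation above map onto standard TLS settings, and can be illustrated with nothing but Python's stdlib ssl module. `make_context` is a hypothetical helper written for this illustration, not SDK code:

```python
# Illustration of the documented insecure / ca_file behaviour using the
# stdlib ssl module. make_context() is our own illustrative helper, not
# part of ovirtsdk4: insecure=True skips certificate and hostname checks;
# otherwise the given CA bundle (or the system store when ca_file is None)
# is used to verify the server certificate.

import ssl


def make_context(insecure=False, ca_file=None):
    if insecure:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False       # must be disabled first
        ctx.verify_mode = ssl.CERT_NONE  # no certificate verification
        return ctx
    # cafile=None falls back to the system-wide CA certificate store,
    # matching the documented ca_file behaviour.
    return ssl.create_default_context(cafile=ca_file)
```

This mirrors the guidance above: the insecure branch is acceptable for testing, while production use should verify against the engine's ca.pem.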
Result: Didn't find any mistake except bug 1607129.

Scenario 1:
1.1 Convert a Windows guest from VMware to RHV using -o rhv-upload; the conversion finished without error:

# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 esx6.7-win2016-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem -oo rhv-direct=true -of raw --password-file /tmp/passwd
[   0.4] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 esx6.7-win2016-x86_64
[   2.6] Creating an overlay to protect the source from being modified
[   3.6] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[   5.1] Opening the overlay
[  20.7] Inspecting the overlay
[ 176.7] Checking for sufficient free disk space in the guest
[ 176.7] Estimating space required on target for each disk
[ 176.7] Converting Windows Server 2016 Standard to run on KVM
virt-v2v: warning: /usr/share/virt-tools/pnp_wait.exe is missing. Firstboot scripts may conflict with PnP.
virt-v2v: warning: there is no QXL driver for this version of Windows (10.0 x86_64). virt-v2v looks for this driver in /usr/share/virtio-win/virtio-win.iso The guest will be configured to use a basic VGA display driver.
virt-v2v: This guest has virtio drivers installed.
[ 223.7] Mapping filesystem data to avoid copying unused and blank areas
[ 225.7] Closing the overlay
[ 226.8] Checking if the guest needs BIOS or UEFI to boot
[ 226.8] Assigning disks to buses
[ 226.8] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.aRTXuO/nbdkit1.sock", "file.export": "/" } (raw)
(100.00/100%)
[2256.4] Creating output metadata
[2279.6] Finishing off

1.2 Power on the guest; all checkpoints of the guest passed.

Scenario 2:
2.1 Convert a Linux guest from VMware via VDDK to RHV using -o rhv-upload; the conversion finished without error:

# virt-v2v -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.196.89/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA esx6.5-rhel6.9-x86_64 --password-file /tmp/passwd -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem -oo rhv-direct -of raw -oa preallocated -b ovirtmgmt
[   1.3] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.196.89/?no_verify=1 esx6.5-rhel6.9-x86_64 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
[   3.1] Creating an overlay to protect the source from being modified
[   6.8] Initializing the target -o rhv-upload -oa preallocated -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[   8.4] Opening the overlay
[  15.7] Inspecting the overlay
[  40.7] Checking for sufficient free disk space in the guest
[  40.7] Estimating space required on target for each disk
[  40.7] Converting Red Hat Enterprise Linux Server release 6.9 (Santiago) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 165.6] Mapping filesystem data to avoid copying unused and blank areas
[ 166.7] Closing the overlay
[ 168.2] Checking if the guest needs BIOS or UEFI to boot
[ 168.2] Assigning disks to buses
[ 168.2] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.lAmIeo/nbdkit1.sock", "file.export": "/" } (raw)
(100.00/100%)
[1471.8] Creating output metadata
[1490.9] Finishing off

2.2 Power on the guest; all checkpoints of the guest passed.

Scenario 3:
3.1 Convert a guest from VMX to RHV 4.2's iSCSI data domain using -o rhv-upload:

# virt-v2v -i vmx esx5.5-win2012R2-x86_64.vmx -o rhv-upload -oc https://hp-dl360eg8-03.lab.eng.pek2.redhat.com/ovirt-engine/api -os iscsi_data -op /tmp/rhvpasswd -oo rhv-cafile=/root/ca.pem -of raw --password-file /tmp/passwd -b ovirtmgmt -oa preallocated -oo rhv-cluster=ISCSI -oo rhv-verifypeer
[   2.2] Opening the source -i vmx esx5.5-win2012R2-x86_64.vmx
[   2.3] Creating an overlay to protect the source from being modified
[   3.4] Initializing the target -o rhv-upload -oa preallocated -oc https://hp-dl360eg8-03.lab.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os iscsi_data
[   6.2] Opening the overlay
[  20.0] Inspecting the overlay
[  29.4] Checking for sufficient free disk space in the guest
[  29.4] Estimating space required on target for each disk
[  29.4] Converting Windows Server 2012 R2 Standard to run on KVM
virt-v2v: warning: /usr/share/virt-tools/pnp_wait.exe is missing. Firstboot scripts may conflict with PnP.
virt-v2v: warning: there is no QXL driver for this version of Windows (6.3 x86_64). virt-v2v looks for this driver in /usr/share/virtio-win/virtio-win.iso The guest will be configured to use a basic VGA display driver.
virt-v2v: This guest has virtio drivers installed.
[  55.9] Mapping filesystem data to avoid copying unused and blank areas
[  57.0] Closing the overlay
[  57.5] Checking if the guest needs BIOS or UEFI to boot
[  57.5] Assigning disks to buses
[  57.5] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.kim7sf/nbdkit0.sock", "file.export": "/" } (raw)
(100.00/100%)
[1387.4] Creating output metadata
[1408.0] Finishing off

3.2 Power on the guest; all checkpoints of the guest passed except bug 1584678.

Scenario 4:
4.1 Convert a guest from OVA to RHV's FC data domain using -o rhv-upload:

# virt-v2v -i ova esx6.7-rhel7.5-x86_64 -o rhv-upload -oc https://hp-dl360eg8-03.lab.eng.pek2.redhat.com/ovirt-engine/api -os fc_data -op /tmp/rhvpasswd -oo rhv-cafile=/root/ca.pem -of raw --password-file /tmp/passwd -b ovirtmgmt -oa preallocated -oo rhv-cluster=FC
[   0.4] Opening the source -i ova esx6.7-rhel7.5-x86_64
virt-v2v: warning: making OVA directory public readable to work around libvirt bug https://bugzilla.redhat.com/1045069
[  27.1] Creating an overlay to protect the source from being modified
[  27.6] Initializing the target -o rhv-upload -oa preallocated -oc https://hp-dl360eg8-03.lab.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os fc_data
[  29.7] Opening the overlay
[  36.8] Inspecting the overlay
[ 119.6] Checking for sufficient free disk space in the guest
[ 119.6] Estimating space required on target for each disk
[ 119.6] Converting Red Hat Enterprise Linux Server 7.5 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 366.6] Mapping filesystem data to avoid copying unused and blank areas
[ 368.1] Closing the overlay
[ 371.4] Checking if the guest needs BIOS or UEFI to boot
[ 371.4] Assigning disks to buses
[ 371.4] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.Iddu5X/nbdkit0.sock", "file.export": "/" } (raw)
(100.00/100%)
[2282.2] Creating output metadata
[2306.7] Finishing off

4.2 Power on the guest; all checkpoints of the guest passed except bug 1318922.

Scenario 5:
5.1 Convert a guest from a Xen server to RHV using -o rhv-upload:

# virt-v2v -ic xen+ssh://root.3.21 xen-pv-rhel6.9-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem -oo rhv-direct -of raw -oa preallocated -b ovirtmgmt
[   0.3] Opening the source -i libvirt -ic xen+ssh://root.3.21 xen-pv-rhel6.9-x86_64
[   0.9] Creating an overlay to protect the source from being modified
[   1.9] Initializing the target -o rhv-upload -oa preallocated -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[   4.1] Opening the overlay
[  11.5] Inspecting the overlay
[  55.4] Checking for sufficient free disk space in the guest
[  55.4] Estimating space required on target for each disk
[  55.4] Converting Red Hat Enterprise Linux Server release 6.9 (Santiago) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 255.5] Mapping filesystem data to avoid copying unused and blank areas
[ 257.0] Closing the overlay
[ 258.4] Checking if the guest needs BIOS or UEFI to boot
[ 258.4] Assigning disks to buses
[ 258.4] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.zYaVeJ/nbdkit1.sock", "file.export": "/" } (raw)
(100.00/100%)
[1041.8] Creating output metadata
[1058.8] Finishing off

5.2 Power on the guest; all checkpoints of the guest passed.

Result: According to the above test results, move the bug from ON_QA to VERIFIED.

FWIW Daniel Erez added another fix for -o rhv-upload:
https://github.com/libguestfs/libguestfs/commit/2547df8a0de46bb1447396e07ee0989bc3f8f31e
We may need this in RHEL 7.6 if we can squeeze it in.

And another one:
https://github.com/libguestfs/libguestfs/commit/23b62f391b098b74e2de6c2d2a911b8ef91543a2

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3021