Bug 1466563
| Summary: | Libguestfs should pass copyonread flag through to the libvirt XML | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | kuwei <kuwei> |
| Component: | libguestfs | Assignee: | Richard W.M. Jones <rjones> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | high | | |
| Version: | 7.4 | CC: | dornelas, jherrman, jsuchane, juzhou, kuwei, mtessun, mxie, mzhan, ptoscano, rjones, salmy, tzheng, xiaodwan |
| Target Milestone: | rc | Keywords: | Regression, ZStream |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | V2V | | |
| Fixed In Version: | libguestfs-1.36.5-1.el7 | Doc Type: | Bug Fix |
| Doc Text: | Previously, when the libvirt back end to libguestfs was used, the "copyonread" flag was not passed through, which caused performance problems for libguestfs utilities. Notably, the virt-v2v utility was slowed significantly when converting guests from the VMware hypervisor. With this update, "copyonread" is correctly respected when running libguestfs utilities using the libvirt back end. As a result, the described problem no longer occurs. | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| : | 1469902 (view as bug list) | Environment: | |
| Last Closed: | 2018-04-10 09:15:08 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1472272 | | |
| Bug Blocks: | 1469902, 1473046 | | |
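As a quick way to exercise the fix described in the Doc Text outside of virt-v2v, the copyonread pass-through can be checked directly with guestfish against the libvirt backend. A minimal sketch, assuming a scratch qcow2 image (the image path is hypothetical) and that the debug output includes the generated libvirt domain XML:

```sh
# Select the libvirt backend and enable debugging so libguestfs prints
# the domain XML it generates for the appliance.
export LIBGUESTFS_BACKEND=libvirt
export LIBGUESTFS_DEBUG=1

# Hypothetical scratch disk for the test.
qemu-img create -f qcow2 /var/tmp/test.qcow2 1G

# Request copy-on-read on the drive; on fixed builds the debug dump of
# the generated XML contains copy_on_read="on" on the <driver> element,
# while on broken builds the grep prints nothing.
guestfish <<'EOF' 2>&1 | grep copy_on_read
add-drive /var/tmp/test.qcow2 format:qcow2 copyonread:true
run
EOF
```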
Upstream commit: https://github.com/libguestfs/libguestfs/commit/d96209ef07fef3db0e411648348f1b1dcf384e70

Tried to reproduce this bug with the following builds:

- virt-v2v-1.36.3-6.el7.x86_64
- libguestfs-1.36.3-6.el7.x86_64
- libvirt-client-3.7.0-2.el7.x86_64
- qemu-kvm-rhev-2.9.0-16.el7_4.8.x86_64
- libguestfs-winsupport-7.2-2.el7.x86_64
- virtio-win-1.9.3-1.el7.noarch

Steps:

1. `# virt-v2v -ic vpx://vsphere.local%5cAdministrator.199.71/data/10.73.196.89/?no_verify=1 esx6.5-rhel6.9-i386 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export`
   Use time: [1425.7]
2. `# virt-v2v -ic vpx://vsphere.local%5cAdministrator.199.71/data/10.73.196.89/?no_verify=1 esx6.5-rhel6.9-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export`
   Use time: [3871.0]
3. `# virt-v2v -ic vpx://vsphere.local%5cAdministrator.199.71/data/10.73.196.89/?no_verify=1 esx6.5-win2016-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export`
   Use time: [1984.6]
4. `# virt-v2v -ic vpx://root.75.182/data/10.73.72.61/?no_verify=1 esx6.0-rhel7.4-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export`
   Use time: [3120.2]
5. `# virt-v2v -ic vpx://root.75.182/data/10.73.72.61/?no_verify=1 esx6.0-win2008r2-x86_64 --password-file /tmp/passwd -o local -os /var/tmp`
   Use time: [ 939.6]
6. `# virt-v2v -ic vpx://root.75.182/data/10.73.3.19/?no_verify=1 esx5.5-rhel7.3-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export`
   Use time: [2641.2]
7. `# virt-v2v -ic vpx://root.75.182/data/10.73.72.75/?no_verify=1 esx5.1-win7-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export`
   Use time: [1196.5]

Verified the bug with the builds below, on the same v2v conversion server used for the reproduction:

- virt-v2v-1.36.6-1.el7.x86_64
- libguestfs-1.36.6-1.el7.x86_64
- libvirt-client-3.7.0-2.el7.x86_64
- qemu-kvm-rhev-2.9.0-16.el7_4.8.x86_64
- libguestfs-winsupport-7.2-2.el7.x86_64
- virtio-win-1.9.3-1.el7.noarch

Steps: convert the same guests that were converted in the reproduction steps:

1. `# virt-v2v -ic vpx://vsphere.local%5cAdministrator.199.71/data/10.73.196.89/?no_verify=1 esx6.5-rhel6.9-i386 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export`
   Use time: [1616.5]
2. `# virt-v2v -ic vpx://vsphere.local%5cAdministrator.199.71/data/10.73.196.89/?no_verify=1 esx6.5-rhel6.9-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export`
   Use time: [1936.6]
3. `# virt-v2v -ic vpx://vsphere.local%5cAdministrator.199.71/data/10.73.196.89/?no_verify=1 esx6.5-win2016-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export`
   Use time: [1740.9]
4. `# virt-v2v -ic vpx://root.75.182/data/10.73.72.61/?no_verify=1 esx6.0-rhel7.4-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export`
   Use time: [2364.1]
5. `# virt-v2v -ic vpx://root.75.182/data/10.73.72.61/?no_verify=1 esx6.0-win2008r2-x86_64 --password-file /tmp/passwd -o local -os /var/tmp`
   Use time: [ 656.4]
6. `# virt-v2v -ic vpx://root.75.182/data/10.73.3.19/?no_verify=1 esx5.5-rhel7.3-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export`
   Use time: [1843.0]
7. `# virt-v2v -ic vpx://root.75.182/data/10.73.72.75/?no_verify=1 esx5.1-win7-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export`
   Use time: [1837.3]

Result: Converting the guests with virt-v2v-1.36.6-1.el7.x86_64 sometimes takes less time than with virt-v2v-1.36.3-6.el7.x86_64. In addition, `<driver name="qemu" .... copy_on_read="on"/>` can be found in the detailed v2v log with the virt-v2v-1.36.6-1.el7.x86_64 build.

Hi rjones, according to the above test results, converting the guests with virt-v2v-1.36.6-1.el7.x86_64 does not save time over virt-v2v-1.36.3-6.el7.x86_64 in 100% of cases. Do you think the result can be accepted?

(In reply to mxie from comment #10)
> Using virt-v2v-1.36.6-1.el7.x86_64 to convert the guest could save more
> time than virt-v2v-1.36.3-6.el7.x86_64 sometimes, besides,could find
> <driver name="qemu".... copy_on_read="on"/> in details v2v log with
> virt-v2v-1.36.6-1.el7.x86_64 build

The right attribute in the libvirt XML is the important detail here, so it is good that it is found as expected. Regarding the conversion times: there is a general reduction of the times, which is expected. Note also that the actual times depend on various factors, such as:

- I/O load on the source ESX, on vCenter, and on the destination
- network latency ESX<->vCenter, vCenter<->v2v, and v2v<->NFS

So if the time increases only happen a few times, I would say it is a local glitch.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0677
Description of problem:
There is a significant performance degradation (2x-3x slower) when running virt-v2v to convert a guest from a VMware server to local KVM. The cause is that libguestfs drops the add_drive copyonread flag when using the libvirt backend.

Version-Release number of selected component (if applicable):

- qemu-kvm-rhev-2.9.0-14.el7.x86_64
- kernel-3.10.0-681.el7.x86_64
- libvirt-3.2.0-14.el7.x86_64
- virt-v2v-1.36.3-6.el7.x86_64
- libvirt-2.0.0-10.el7.x86_64

(Both libvirt versions are listed because each of the scenarios below uses one of them.)

How reproducible:

Steps to Reproduce:

Scenario 1, with the following packages:

- qemu-kvm-rhev-2.9.0-14.el7.x86_64
- kernel-3.10.0-681.el7.x86_64
- virt-v2v-1.36.3-6.el7.x86_64
- libvirt-2.0.0-10.el7.x86_64

1. Convert a RHEL 7.3 guest from VMware to local KVM:

```
# virt-v2v -ic vpx://root.75.182/data/10.73.3.19/?no_verify=1 esx5.5-rhel7.3-x86_64 --password-file /tmp/passwd
[   0.0] Opening the source -i libvirt -ic vpx://root.75.182/data/10.73.3.19/?no_verify=1 esx5.5-rhel7.3-x86_64
[   1.3] Creating an overlay to protect the source from being modified
[   2.7] Initializing the target -o libvirt -os default
[   2.7] Opening the overlay
[  41.7] Inspecting the overlay
[ 128.1] Checking for sufficient free disk space in the guest
[ 128.1] Estimating space required on target for each disk
[ 128.1] Converting Red Hat Enterprise Linux Server 7.3 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 992.7] Mapping filesystem data to avoid copying unused and blank areas
[ 995.2] Closing the overlay
[ 995.5] Checking if the guest needs BIOS or UEFI to boot
[ 995.5] Assigning disks to buses
[ 995.5] Copying disk 1/1 to /var/lib/libvirt/images/esx5.5-rhel7.3-x86_64-sda (raw)
    (100.00/100%)
[1648.2] Creating output metadata
Pool default refreshed
Domain esx5.5-rhel7.3-x86_64 defined from /tmp/v2vlibvirt8d734f.xml
[1648.6] Finishing off
```

Scenario 2, with the following packages:

- qemu-kvm-rhev-2.9.0-14.el7.x86_64
- kernel-3.10.0-681.el7.x86_64
- libvirt-3.2.0-14.el7.x86_64
- virt-v2v-1.36.3-6.el7.x86_64

1. Convert the same RHEL 7.3 guest from VMware to local KVM:

```
# virt-v2v -ic vpx://root.75.182/data/10.73.3.19/?no_verify=1 esx5.5-rhel7.3-x86_64 --password-file /tmp/passwd
[   0.0] Opening the source -i libvirt -ic vpx://root.75.182/data/10.73.3.19/?no_verify=1 esx5.5-rhel7.3-x86_64
[   1.2] Creating an overlay to protect the source from being modified
[   1.7] Initializing the target -o libvirt -os default
[   1.7] Opening the overlay
[  56.1] Inspecting the overlay
[ 211.6] Checking for sufficient free disk space in the guest
[ 211.6] Estimating space required on target for each disk
[ 211.6] Converting Red Hat Enterprise Linux Server 7.3 (Maipo) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[1792.8] Mapping filesystem data to avoid copying unused and blank areas
[1820.8] Closing the overlay
[1821.3] Checking if the guest needs BIOS or UEFI to boot
[1821.3] Assigning disks to buses
[1821.3] Copying disk 1/1 to /var/lib/libvirt/images/esx5.5-rhel7.3-x86_64-sda (raw)
    (100.00/100%)
[2461.3] Creating output metadata
Pool default refreshed
Domain esx5.5-rhel7.3-x86_64 defined from /tmp/v2vlibvirt16025c.xml
[2461.9] Finishing off
```

2. Comparing scenario 1 and scenario 2 shows a significant performance degradation (1648.6 s vs. 2461.9 s total) when running virt-v2v to convert a guest from a VMware server to local KVM with the libvirt-3.2.0-14.el7.x86_64 package.

Actual results:
As above.

Expected results:
Converting a guest from a VMware server with virt-v2v should not regress in performance.

Additional info:
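To isolate the backend effect suggested by the two scenarios, the same conversion can be timed once per libguestfs backend, since virt-v2v honours the LIBGUESTFS_BACKEND environment variable via libguestfs. A rough sketch with placeholder connection details, not a definitive benchmark:

```sh
# Convert the same guest twice, changing only the libguestfs backend;
# URI and guest name are placeholders. On affected builds only the
# libvirt backend loses copy-on-read, so its run is markedly slower.
for backend in direct libvirt; do
    echo "=== backend: $backend ==="
    time LIBGUESTFS_BACKEND=$backend \
        virt-v2v -ic 'vpx://user@vcenter/datacenter/host?no_verify=1' \
        guest-name --password-file /tmp/passwd -o local -os /var/tmp
done
```

If the libvirt-backend run is consistently slower while the direct-backend run is not, that matches the copyonread regression described above rather than source-side or network variance.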