Bug 1466563
Summary: | Libguestfs should pass copyonread flag through to the libvirt XML
---|---
Product: | Red Hat Enterprise Linux 7
Component: | libguestfs
Version: | 7.4
Hardware: | x86_64
OS: | Linux
Status: | CLOSED ERRATA
Severity: | medium
Priority: | high
Reporter: | kuwei <kuwei>
Assignee: | Richard W.M. Jones <rjones>
QA Contact: | Virtualization Bugs <virt-bugs>
CC: | dornelas, jherrman, jsuchane, juzhou, kuwei, mtessun, mxie, mzhan, ptoscano, rjones, salmy, tzheng, xiaodwan
Target Milestone: | rc
Target Release: | ---
Keywords: | Regression, ZStream
Whiteboard: | V2V
Fixed In Version: | libguestfs-1.36.5-1.el7
Doc Type: | Bug Fix
Clones: | 1469902 (view as bug list)
Last Closed: | 2018-04-10 09:15:08 UTC
Type: | Bug
Bug Depends On: | 1472272
Bug Blocks: | 1469902, 1473046

Doc Text:

Previously, when the libvirt back end to libguestfs was used, the "copyonread" flag was not passed through, which caused performance problems for libguestfs utilities. Notably, the virt-v2v utility was slowed significantly when converting guests from the VMware hypervisor. With this update, "copyonread" is correctly respected when running libguestfs utilities using the libvirt back end. As a result, the described problem no longer occurs.
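The fix makes the libguestfs libvirt back end emit the copy_on_read="on" attribute on the disk's <driver> element in the generated libvirt XML. A minimal sketch of how one might confirm this from a virt-v2v debug log follows; only the copy_on_read attribute is confirmed by this report, and the sample log path and the other driver attributes are illustrative assumptions:

```shell
# Illustrative excerpt of a libvirt <driver> line as it might appear in
# a virt-v2v debug log; only copy_on_read is confirmed by this report.
cat > /tmp/v2v-sample.log <<'EOF'
<driver name="qemu" type="raw" copy_on_read="on"/>
EOF

# With the fixed builds the attribute should be present in the log;
# grep prints the match, and exits non-zero if it is missing.
grep -o 'copy_on_read="on"' /tmp/v2v-sample.log
```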
Description (kuwei@redhat.com, 2017-06-30 02:33:22 UTC)
Upstream commit: https://github.com/libguestfs/libguestfs/commit/d96209ef07fef3db0e411648348f1b1dcf384e70

Tried to reproduce this bug with the following builds:

virt-v2v-1.36.3-6.el7.x86_64
libguestfs-1.36.3-6.el7.x86_64
libvirt-client-3.7.0-2.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.8.x86_64
libguestfs-winsupport-7.2-2.el7.x86_64
virtio-win-1.9.3-1.el7.noarch

Steps:

1. # virt-v2v -ic vpx://vsphere.local%5cAdministrator.199.71/data/10.73.196.89/?no_verify=1 esx6.5-rhel6.9-i386 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export
   Use time: [1425.7]
2. # virt-v2v -ic vpx://vsphere.local%5cAdministrator.199.71/data/10.73.196.89/?no_verify=1 esx6.5-rhel6.9-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export
   Use time: [3871.0]
3. # virt-v2v -ic vpx://vsphere.local%5cAdministrator.199.71/data/10.73.196.89/?no_verify=1 esx6.5-win2016-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export
   Use time: [1984.6]
4. # virt-v2v -ic vpx://root.75.182/data/10.73.72.61/?no_verify=1 esx6.0-rhel7.4-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export
   Use time: [3120.2]
5. # virt-v2v -ic vpx://root.75.182/data/10.73.72.61/?no_verify=1 esx6.0-win2008r2-x86_64 --password-file /tmp/passwd -o local -os /var/tmp
   Use time: [939.6]
6. # virt-v2v -ic vpx://root.75.182/data/10.73.3.19/?no_verify=1 esx5.5-rhel7.3-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export
   Use time: [2641.2]
7. # virt-v2v -ic vpx://root.75.182/data/10.73.72.75/?no_verify=1 esx5.1-win7-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export
   Use time: [1196.5]

Verified the bug with the builds below, on the same v2v conversion server used for the reproduction:

virt-v2v-1.36.6-1.el7.x86_64
libguestfs-1.36.6-1.el7.x86_64
libvirt-client-3.7.0-2.el7.x86_64
qemu-kvm-rhev-2.9.0-16.el7_4.8.x86_64
libguestfs-winsupport-7.2-2.el7.x86_64
virtio-win-1.9.3-1.el7.noarch

Steps: converted the same guests that were converted in the reproduction steps:

1. # virt-v2v -ic vpx://vsphere.local%5cAdministrator.199.71/data/10.73.196.89/?no_verify=1 esx6.5-rhel6.9-i386 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export
   Use time: [1616.5]
2. # virt-v2v -ic vpx://vsphere.local%5cAdministrator.199.71/data/10.73.196.89/?no_verify=1 esx6.5-rhel6.9-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export
   Use time: [1936.6]
3. # virt-v2v -ic vpx://vsphere.local%5cAdministrator.199.71/data/10.73.196.89/?no_verify=1 esx6.5-win2016-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export
   Use time: [1740.9]
4. # virt-v2v -ic vpx://root.75.182/data/10.73.72.61/?no_verify=1 esx6.0-rhel7.4-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export
   Use time: [2364.1]
5. # virt-v2v -ic vpx://root.75.182/data/10.73.72.61/?no_verify=1 esx6.0-win2008r2-x86_64 --password-file /tmp/passwd -o local -os /var/tmp
   Use time: [656.4]
6. # virt-v2v -ic vpx://root.75.182/data/10.73.3.19/?no_verify=1 esx5.5-rhel7.3-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export
   Use time: [1843.0]
7. # virt-v2v -ic vpx://root.75.182/data/10.73.72.75/?no_verify=1 esx5.1-win7-x86_64 --password-file /tmp/passwd -o rhv -os 10.73.131.93:/home/nfs_export
   Use time: [1837.3]

Result: converting with virt-v2v-1.36.6-1.el7.x86_64 is faster than with virt-v2v-1.36.3-6.el7.x86_64 in most (though not all) cases. In addition, <driver name="qemu" ... copy_on_read="on"/> appears in the detailed v2v log with the virt-v2v-1.36.6-1.el7.x86_64 build.

Hi rjones, according to the test results above, converting with virt-v2v-1.36.6-1.el7.x86_64 is not faster than virt-v2v-1.36.3-6.el7.x86_64 in 100% of cases. Do you think this result is acceptable?

(In reply to mxie from comment #10)
> Using virt-v2v-1.36.6-1.el7.x86_64 to convert the guest could save more
> time than virt-v2v-1.36.3-6.el7.x86_64 sometimes, besides,could find
> <driver name="qemu".... copy_on_read="on"/> in details v2v log with
> virt-v2v-1.36.6-1.el7.x86_64 build

The correct attribute in the libvirt XML is the important detail here, so it is good that it is found as expected. Regarding the conversion times: there is a general reduction in the times, which is expected. Note also that the actual times depend on various factors, such as:

- I/O load on the source ESX host, on vCenter, and on the destination
- network latency between ESX and vCenter, between vCenter and v2v, and between v2v and NFS

So if the time increases only happen a few times, I would say it is a local glitch.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0677
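As a rough check of the "general reduction" claim, the recorded "Use time" values can be compared directly. For example, the esx6.5-rhel6.9-x86_64 conversion (step 2) went from 3871.0 s before the fix to 1936.6 s after it, and the relative improvement can be computed as:

```shell
# Relative improvement for the esx6.5-rhel6.9-x86_64 guest (step 2),
# using the "Use time" values recorded in this bug report.
awk 'BEGIN { before = 3871.0; after = 1936.6;
             printf "%.1f%% faster\n", (before - after) / before * 100 }'
# prints: 50.0% faster
```

The same arithmetic applied to steps 1 and 7 gives a negative improvement, which is consistent with the comment above attributing those occasional increases to local I/O and network variation rather than to the fix itself.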