Created attachment 1693010 [details]
bandwidth-option.log

Description of problem:
Options --bandwidth and --bandwidth-file are not working as expected during v2v conversion.

Version-Release number of selected component (if applicable):
virt-v2v-1.42.0-3.module+el8.3.0+6497+b190d2a5.x86_64
libguestfs-1.42.0-1.module+el8.3.0+6496+d39ac712.x86_64
package libvirt is not installed
qemu-kvm-5.0.0-0.module+el8.3.0+6620+5d5e1420.x86_64
nbdkit-1.20.2-1.module+el8.3.0+6764+cc503f20.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Convert a guest from VMware with option --bandwidth by v2v; the conversion fails due to an nbdkit error:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -b ovirtmgmt --password-file /home/passwd -of raw -oo rhv-cluster=Default -os nfs_data Auto-esx6.7-rhel7.6-uefi-raid -oo rhv-verifypeer=true -oo rhv-cafile=/home/ca.pem --bandwidth 200M
[   1.2] Opening the source -i libvirt -ic vpx://root.73.141:443/data/10.73.75.219/?no_verify=1 Auto-esx6.7-rhel7.6-uefi-raid -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
nbdkit: error: could not parse size string (=200M)
virt-v2v: error: nbdkit did not start up.  There may be errors printed by nbdkit above.

If the messages above are not sufficient to diagnose the problem then add the ‘virt-v2v -v -x’ options and examine the debugging output carefully.

If reporting bugs, run virt-v2v with debugging enabled and include the complete output:

  virt-v2v -v -x [...]
2. Convert a guest from VMware with option --bandwidth-file by v2v:
2.1
# cat bandwidth
200M
2.2
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -b ovirtmgmt --password-file /home/passwd -of raw -oo rhv-cluster=Default -os nfs_data Auto-esx6.7-win2019-x86_64-efi -oo rhv-verifypeer=true -oo rhv-cafile=/home/ca.pem --bandwidth-file bandwidth -v -x |& tee > bandwidth-file.log
2.3 Check the v2v debug log for the bandwidth setting:
# cat bandwidth-file.log |grep bandwidth
LANG=C 'nbdkit' '--exit-with-parent' '--foreground' '--newstyle' '--pidfile' '/tmp/v2vnbdkit.dGtTnD/nbdkit2.pid' '--unix' '/tmp/v2vnbdkit.dGtTnD/nbdkit2.sock' '-D' 'nbdkit.backend.datapath=0' '-D' 'vddk.datapath=0' '--exportname' '/' '--readonly' '--selinux-label' 'system_u:object_r:svirt_socket_t:s0' '--verbose' '--filter' 'rate' '--filter' 'cacheextents' '--filter' 'readahead' '--filter' 'retry' 'vddk' 'server=10.73.73.141' 'user=root' 'password=+/home/passwd' 'vm=moref=vm-640' 'file=[esx6.7-function] Auto-esx6.7-win2019-x86_64-efi/Auto-esx6.7-win2019-x86_64-efi.vmdk' 'libdir=/home/vmware-vix-disklib-distrib' 'thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA' 'rate-file==bandwidth'
nbdkit: debug: rate: config key=rate-file, value==bandwidth
nbdkit: debug: rate: config key=rate-file, value==bandwidth

3. Convert a guest from VMware with both options --bandwidth-file and --bandwidth by v2v; the conversion fails as in step 1:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -b ovirtmgmt --password-file /home/passwd -of raw -oo rhv-cluster=Default -os nfs_data Auto-esx6.7-win2019-x86_64-efi -oo rhv-verifypeer=true -oo rhv-cafile=/home/ca.pem --bandwidth-file bandwidth --bandwidth 180M
[   1.2] Opening the source -i libvirt -ic vpx://root.73.141:443/data/10.73.75.219/?no_verify=1 Auto-esx6.7-win2019-x86_64-efi -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA
nbdkit: error: could not parse size string (=180M)
virt-v2v: error: nbdkit did not start up.  There may be errors printed by nbdkit above.

If the messages above are not sufficient to diagnose the problem then add the ‘virt-v2v -v -x’ options and examine the debugging output carefully.

If reporting bugs, run virt-v2v with debugging enabled and include the complete output:

  virt-v2v -v -x [...]

Actual results:
As in the description above.

Expected results:
Options --bandwidth and --bandwidth-file work as expected during v2v conversion.

Additional info:
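The debug output in step 2.3 ('rate-file==bandwidth') points at the likely cause: virt-v2v emitted the filter parameters with a doubled '=', and since nbdkit splits each key=value argument at the first '=', the rate filter received the unparseable value '=200M'. A minimal shell sketch of that split (variable names are illustrative only, not from virt-v2v):

```shell
# Illustrative only: mimic how a "key=value" argument is split on the
# FIRST '=' -- a doubled '=' leaves a stray '=' at the front of the value.
arg='rate==200M'
key=${arg%%=*}        # everything before the first '='  -> "rate"
value=${arg#*=}       # everything after the first '='   -> "=200M"
echo "key=$key value=$value"
```

This reproduces exactly the string that nbdkit then rejects with "could not parse size string (=200M)".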
Patch posted: https://www.redhat.com/archives/libguestfs/2020-May/thread.html#00105
Fixed upstream with https://github.com/libguestfs/virt-v2v/commit/a89a084b2d0f6d40716c1d34969f6c49ea28e9b3
Verified the bug with the builds below:
virt-v2v-1.42.0-4.module+el8.3.0+6798+ad6e66be.x86_64
libguestfs-1.42.0-2.module+el8.3.0+6798+ad6e66be.x86_64
libvirt-6.4.0-1.module+el8.3.0+6881+88468c00.x86_64
qemu-kvm-5.0.0-0.module+el8.3.0+6620+5d5e1420.x86_64
nbdkit-1.20.2-1.module+el8.3.0+6764+cc503f20.x86_64

Steps:

Scenario 1: Convert a guest from VMware to RHV via VDDK by virt-v2v.

1.1 Check the normal conversion speed when no bandwidth option is set:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -b ovirtmgmt --password-file /home/passwd -of raw -oo rhv-cluster=Default -os nfs_data Auto-esx6.7-rhel7.6-uefi-raid -oo rhv-verifypeer=true -oo rhv-cafile=/home/ca.pem -v -x |& tee |ts > vmware-vddk-no-bandwidth.log
# cat vmware-vddk-no-bandwidth.log |grep 'virtual copying rate'
Jun 08 11:43:23 virtual copying rate: 170.9 M bits/sec
Jun 08 11:45:52 virtual copying rate: 287.1 M bits/sec
Jun 08 11:49:33 virtual copying rate: 234.5 M bits/sec

1.2 Check the conversion speed when --bandwidth is set to 190M:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -b ovirtmgmt --password-file /home/passwd -of raw -oo rhv-cluster=Default -os nfs_data Auto-esx6.7-rhel7.6-uefi-raid -oo rhv-verifypeer=true -oo rhv-cafile=/home/ca.pem --bandwidth 190M -v -x |& tee |ts > vmware-vddk-bandwidth-190M.log
# cat vmware-vddk-bandwidth-190M.log |grep 'virtual copying rate'
Jun 08 10:59:15 virtual copying rate: 155.2 M bits/sec
Jun 08 11:02:38 virtual copying rate: 209.6 M bits/sec
Jun 08 11:07:08 virtual copying rate: 195.6 M bits/sec

1.3 Check the conversion speed when --bandwidth-file is set to 180M:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -b ovirtmgmt --password-file /home/passwd -of raw -oo rhv-cluster=Default -os nfs_data Auto-esx6.7-rhel7.6-uefi-raid -oo rhv-verifypeer=true -oo rhv-cafile=/home/ca.pem --bandwidth-file bandwidth -v -x |& tee |ts > vmware-vddk-bandwidth-file-180M.log
# cat vmware-vddk-bandwidth-file-180M.log |grep 'virtual copying rate'
Jun 08 12:54:19 virtual copying rate: 153.6 M bits/sec
Jun 08 12:57:42 virtual copying rate: 214.1 M bits/sec
Jun 08 13:02:16 virtual copying rate: 189.2 M bits/sec

1.4 Check the conversion speed when the --bandwidth-file value is changed from 180M to 100M during the v2v conversion:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -b ovirtmgmt --password-file /home/passwd -of raw -oo rhv-cluster=Default -os nfs_data Auto-esx6.7-rhel7.6-uefi-raid -oo rhv-verifypeer=true -oo rhv-cafile=/home/ca.pem --bandwidth-file bandwidth -v -x |& tee |ts > vmware-vddk-bandwidth-file-change-180M-to-100M.log
# cat vmware-vddk-bandwidth-file-change-180M-to-100M.log |grep 'virtual copying rate'
Jun 08 13:44:28 virtual copying rate: 117.2 M bits/sec
Jun 08 13:50:19 virtual copying rate: 119.8 M bits/sec
Jun 08 13:57:18 virtual copying rate: 122.6 M bits/sec

1.5 Check the conversion speed when --bandwidth-file is set to 120M and --bandwidth to 170M:
# cat bandwidth
120M
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -b ovirtmgmt --password-file /home/passwd -of raw -oo rhv-cluster=Default -os nfs_data Auto-esx6.7-rhel7.6-uefi-raid -oo rhv-verifypeer=true -oo rhv-cafile=/home/ca.pem --bandwidth-file bandwidth --bandwidth 170M -v -x |& tee |ts > vmware-vddk-bandwidth-file_120M-bandwidth-170M.log
# cat vmware-vddk-bandwidth-file_120M-bandwidth-170M.log |grep 'virtual copying rate'
Jun 08 15:48:56 virtual copying rate: 129.3 M bits/sec
Jun 08 15:53:45 virtual copying rate: 142.9 M bits/sec
Jun 08 15:59:50 virtual copying rate: 143.1 M bits/sec

Scenario 2: Convert a guest from VMX via ssh by virt-v2v.

2.1 Check the normal conversion speed when no bandwidth option is set:
# virt-v2v -i vmx -it ssh ssh://root.75.219/vmfs/volumes/esx6.7-function/Auto-esx6.7-win2019-efi-secure-boot/Auto-esx6.7-win2019-efi-secure-boot.vmx -of qcow2 -v -x |& tee |ts > vmx+ssh-no-bandwidth.log
# cat vmx+ssh-no-bandwidth.log |grep 'virtual copying rate'
Jun 08 21:01:16 virtual copying rate: 256.3 M bits/sec

2.2 Check the conversion speed when --bandwidth-file is set to 100M:
# cat bandwidth
100M
# virt-v2v -i vmx -it ssh ssh://root.75.219/vmfs/volumes/esx6.7-function/Auto-esx6.7-win2019-efi-secure-boot/Auto-esx6.7-win2019-efi-secure-boot.vmx -of qcow2 --bandwidth-file bandwidth -v -x |& tee |ts > vmx+ssh-bandwidth-file-100M.log
# cat vmx+ssh-bandwidth-file-100M.log |grep 'virtual copying rate'
Jun 08 19:26:13 virtual copying rate: 117.1 M bits/sec

2.3 Check the conversion speed when --bandwidth is set to 130M:
# virt-v2v -i vmx -it ssh ssh://root.75.219/vmfs/volumes/esx6.7-function/Auto-esx6.7-win2019-efi-secure-boot/Auto-esx6.7-win2019-efi-secure-boot.vmx -of qcow2 --bandwidth 130M -v -x |& tee |ts > vmx+ssh-bandwidth-130M.log
# cat vmx+ssh-bandwidth-130M.log |grep 'virtual copying rate'
Jun 08 20:10:08 virtual copying rate: 148.7 M bits/sec

Scenario 3: Convert a guest from Xen to RHV by virt-v2v.

3.1 Check the normal conversion speed when no bandwidth option is set:
# virt-v2v -ic xen+ssh://root.224.33 xen-hvm-rhel7.8-x86_64 -of raw -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -b ovirtmgmt -oo rhv-cluster=Default -os nfs_data -oo rhv-verifypeer=true -oo rhv-cafile=/home/ca.pem -v -x |& tee |ts > xen-no-bandwidth.log
# cat xen-no-bandwidth.log |grep 'virtual copying rate'
Jun 08 17:36:17 virtual copying rate: 120.8 M bits/sec

3.2 Check the conversion speed when --bandwidth is set to 80M:
# virt-v2v -ic xen+ssh://root.224.33 xen-hvm-rhel7.8-x86_64 -of raw -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -b ovirtmgmt -oo rhv-cluster=Default -os nfs_data -oo rhv-verifypeer=true -oo rhv-cafile=/home/ca.pem --bandwidth 80M -v -x |& tee |ts > xen-bandwidth-80M.log
# cat xen-bandwidth-80M.log |grep 'virtual copying rate'
Jun 08 18:04:21 virtual copying rate: 115.2 M bits/sec

3.3 Check the conversion speed when --bandwidth-file is set to 50M:
# cat bandwidth
50M
# virt-v2v -ic xen+ssh://root.224.33 xen-hvm-rhel7.8-x86_64 -of raw -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -b ovirtmgmt -oo rhv-cluster=Default -os nfs_data -oo rhv-verifypeer=true -oo rhv-cafile=/home/ca.pem --bandwidth-file bandwidth -v -x |& tee |ts > xen-bandwidth-file-50M.log
# cat xen-bandwidth-file-50M.log |grep 'virtual copying rate'
Jun 08 21:35:23 virtual copying rate: 94.1 M bits/sec

Scenario 4: Convert a guest from VMware to libvirt without VDDK by virt-v2v.

4.1 Check the normal conversion speed when no bandwidth option is set:
# virt-v2v -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.2-x86_64 -ip /home/passwd -of qcow2 -v -x |& tee |ts > vmware-no-vddk-no-bandwidth.log
# cat vmware-no-vddk-no-bandwidth.log |grep 'virtual copying rate'
Jun 08 22:40:46 virtual copying rate: 297.9 M bits/sec

4.2 Check the conversion speed when --bandwidth is set to 70M:
# virt-v2v -ic vpx://root.198.169/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.2-x86_64 -ip /home/passwd -of qcow2 --bandwidth 70M -v -x |& tee |ts > vmware-no-vddk-bandwidth-70M.log
# cat vmware-no-vddk-bandwidth-70M.log |grep 'virtual copying rate'
Jun 09 00:48:55 virtual copying rate: 82.3 M bits/sec

Hi Richard,

According to the results above, options --bandwidth-file and --bandwidth do limit the network bandwidth during v2v conversion, but the actual conversion speed is not fully consistent with the configured value, and the inconsistency is most obvious in the Xen conversion. Below is a testing summary (the final speed in the v2v debug log is used as the actual conversion speed). Please help confirm whether these results are expected.

Conversion mode   Normal conversion speed   --bandwidth/--bandwidth-file   Actual conversion speed
vpx+vddk          234.5 M bits/sec          190M                           195.6 M bits/sec
vpx+vddk          234.5 M bits/sec          180M                           189.2 M bits/sec
vpx+vddk          234.5 M bits/sec          100M                           122.6 M bits/sec
vpx+vddk          234.5 M bits/sec          120M~170M                      143.1 M bits/sec
vmx+ssh           256.3 M bits/sec          100M                           117.1 M bits/sec
vmx+ssh           256.3 M bits/sec          130M                           148.7 M bits/sec
xen+ssh           120.8 M bits/sec          80M                            115.2 M bits/sec
xen+ssh           120.8 M bits/sec          50M                            94.1 M bits/sec
vpx-no-vddk       297.9 M bits/sec          70M                            82.3 M bits/sec
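As a side note on step 1.4: the on-the-fly adjustment only requires rewriting the bandwidth file while the conversion is running; the nbdkit rate filter re-reads the file, so no restart is needed. A minimal sketch, using the same file name as in the steps above:

```shell
# Sketch of the dynamic adjustment used in step 1.4: the limit file is
# simply rewritten while the conversion is running in another terminal.
echo 180M > bandwidth    # initial limit, set before starting virt-v2v
# ... virt-v2v --bandwidth-file bandwidth now copying ...
echo 100M > bandwidth    # lower the limit on the fly
cat bandwidth            # -> 100M
```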
A few things here:

(1) "virtual copying rate" is the virtual size of the disk divided by the time taken to do the copy.  If the disk is sparse and mostly empty then this is not very interesting.  For example, if a disk has a virtual size of 100G and contains 100M of data, it may copy very quickly (since virt-v2v will skip over almost all of it), and the virtual copying rate will be amazingly fast because 100G / small number is very big.

(2) In the case where you are writing to local libvirt, there should be a "real copying rate" debug message.  This is more like the actual copying rate because it only measures the data that was actually copied (eg. 100M in the example above, so 100M / time).

(3) "real copying rate" is NOT present for remote outputs like RHV because we cannot measure the size of the output disk.

(4) When using --bandwidth-file, look out for the message from nbdkit

  rate adjusted from <XXX> to <YYY>

whenever you write to the file in order to adjust the bandwidth.  I wasn't clear if you were dynamically writing to the file during the test, because that's the purpose of this option.

Having said all that, I think the rate filtering results are surprisingly good.  The filter is certainly having an effect, and we expect the "virtual copying rate" to always be higher than the real bandwidth because of (1).

You could also try much smaller values, eg. --bandwidth 20M, which should slow everything down about 10 times.
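Point (1) above can be made concrete with a little arithmetic. The 100G virtual disk holding 100M of data is the example given above; the 10-second copy time is an assumption added for illustration:

```shell
# Assumed example: a 100G virtual disk containing only 100M of real data,
# copied in 10 seconds.  Rates are computed in bits/sec, as virt-v2v
# reports them.
virtual_bytes=$((100 * 1024 * 1024 * 1024))   # 100G virtual size
real_bytes=$((100 * 1024 * 1024))             # 100M actually copied
seconds=10
echo "virtual copying rate: $((virtual_bytes * 8 / seconds)) bits/sec"
echo "real copying rate:    $((real_bytes * 8 / seconds)) bits/sec"
```

The virtual rate comes out roughly a thousand times higher than the real rate, which is why it can exceed any bandwidth cap without the cap being broken.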
Verified more scenarios for the bug according to comment 6:

Packages:
virt-v2v-1.42.0-4.module+el8.3.0+6798+ad6e66be.x86_64
libguestfs-1.42.0-2.module+el8.3.0+6798+ad6e66be.x86_64
libvirt-6.4.0-1.module+el8.3.0+6881+88468c00.x86_64
qemu-kvm-5.0.0-0.module+el8.3.0+6620+5d5e1420.x86_64
nbdkit-1.20.2-1.module+el8.3.0+6764+cc503f20.x86_64

Steps:

1. Check the v2v debug logs from scenario 1 of comment 5; the message 'rate adjusted from <XXX> to <YYY>' from nbdkit can be found when option --bandwidth-file is used in the v2v conversion:
# cat vmware-vddk-bandwidth-file_120M-bandwidth-170M.log |grep 'rate adjusted from'
Jun 08 15:34:30 nbdkit: vddk[2]: debug: rate adjusted from 178257920 to 125829120
Jun 08 15:34:30 nbdkit: vddk[2]: debug: rate adjusted from 178257920 to 125829120
Jun 08 15:34:30 nbdkit: vddk[2]: debug: rate adjusted from 178257920 to 125829120
# cat vmware-vddk-bandwidth-file-change-180M-to-100M.log |grep 'rate adjusted from'
Jun 08 13:29:03 nbdkit: vddk[2]: debug: rate adjusted from 0 to 188743680
Jun 08 13:29:03 nbdkit: vddk[2]: debug: rate adjusted from 0 to 188743680
Jun 08 13:29:03 nbdkit: vddk[2]: debug: rate adjusted from 0 to 188743680
Jun 08 13:33:12 nbdkit: vddk[3]: debug: rate adjusted from 188743680 to 104857600
Jun 08 13:44:56 nbdkit: vddk[3]: debug: rate adjusted from 188743680 to 104857600
Jun 08 13:50:31 nbdkit: vddk[3]: debug: rate adjusted from 188743680 to 104857600

2. Check the v2v debug logs from scenario 2 of comment 5 to find the real copying rate:
# cat vmx+ssh-no-bandwidth.log |grep 'copying rate'
Jun 08 21:01:16 virtual copying rate: 256.3 M bits/sec
Jun 08 21:01:16 real copying rate: 122.4 M bits/sec
# cat vmx+ssh-bandwidth-130M.log |grep 'copying rate'
Jun 08 20:10:08 virtual copying rate: 148.7 M bits/sec
Jun 08 20:10:08 real copying rate: 71.0 M bits/sec
# cat vmx+ssh-bandwidth-file-100M.log |grep 'copying rate'
Jun 08 19:26:13 virtual copying rate: 117.1 M bits/sec
Jun 08 19:26:13 real copying rate: 55.9 M bits/sec

3. Check the v2v debug logs from scenario 4 of comment 5 to find the real copying rate:
# cat vmware-no-vddk-no-bandwidth.log |grep 'copying rate'
Jun 08 22:40:46 virtual copying rate: 297.9 M bits/sec
Jun 08 22:40:46 real copying rate: 113.1 M bits/sec
# cat vmware-no-vddk-bandwidth-70M.log |grep 'copying rate'
Jun 09 00:48:55 virtual copying rate: 82.3 M bits/sec
Jun 09 00:48:55 real copying rate: 33.8 M bits/sec

4. Convert a guest from VMware via VDDK to libvirt by virt-v2v and check the copying rate when --bandwidth is set to 20M:
# virt-v2v -ic vpx://root.73.141/data/10.73.75.219/?no_verify=1 -it vddk -io vddk-libdir=/home/vmware-vix-disklib-distrib -io vddk-thumbprint=1F:97:34:5F:B6:C2:BA:66:46:CB:1A:71:76:7D:6B:50:1E:03:00:EA --password-file /home/passwd -of raw Auto-esx6.7-rhel7.6-uefi-raid --bandwidth 20M -v -x |& tee |ts > vpx-vddk-bandwidth-20M-to-libvirt.log
# cat vpx-vddk-bandwidth-20M-to-libvirt.log |grep 'copying rate'
Jun 09 14:21:41 virtual copying rate: 26.6 M bits/sec
Jun 09 14:21:41 real copying rate: 11.7 M bits/sec
Jun 09 14:48:43 virtual copying rate: 25.3 M bits/sec
Jun 09 14:48:43 real copying rate: 1.1 M bits/sec
Jun 09 15:22:08 virtual copying rate: 25.5 M bits/sec
Jun 09 15:22:08 real copying rate: 4.0 M bits/sec

5. Convert a guest from Xen to libvirt by virt-v2v and check the copying rate when --bandwidth-file is set to 40M:
# cat bandwidth
40M
# virt-v2v -ic xen+ssh://root.224.33 xen-hvm-rhel7.8-x86_64 -of raw --bandwidth-file bandwidth -v -x |& tee |ts > xen-bandwidth-file-40M-to-libvirt.log
# cat xen-bandwidth-file-40M-to-libvirt.log |grep 'copying rate'
Jun 09 16:20:37 virtual copying rate: 85.8 M bits/sec
Jun 09 16:20:37 real copying rate: 38.7 M bits/sec

Hi rjones,

(1) Please check step 1: the message 'rate adjusted from <XXX> to <YYY>' from nbdkit can be found when option --bandwidth-file is used in the v2v conversion. Do you think these scenarios are enough to test option --bandwidth-file?
(2) Please check the results below: the "virtual copying rate" is always higher than the value of the bandwidth option, but the "real copying rate" is always much lower than it, and the inconsistency is most obvious in the vpx and vmx+ssh v2v conversions. Is this result expected?

Conversion mode   --bandwidth/--bandwidth-file   Virtual copying rate   Real copying rate
vpx+vddk          20M                            25.5 M bits/sec        4.0 M bits/sec
vmx+ssh           100M                           117.1 M bits/sec       55.9 M bits/sec
vmx+ssh           130M                           148.7 M bits/sec       71.0 M bits/sec
xen               40M                            85.8 M bits/sec        38.7 M bits/sec
vpx-no-vddk       70M                            82.3 M bits/sec        33.8 M bits/sec
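(For reference, the large integers in the 'rate adjusted' debug messages of step 1 are the rate limits in bits per second; assuming the usual nbdkit size parsing where the M suffix means 2^20, dividing by 1048576 recovers the values that were set on the command line or in the bandwidth file:)

```shell
# Convert the nbdkit rate-filter debug values (bits/sec) back to the "M"
# figures used with --bandwidth/--bandwidth-file, where 1M = 1048576.
for rate in 178257920 125829120 188743680 104857600; do
    echo "$rate bits/sec = $((rate / 1048576))M"
done
# 178257920 -> 170M, 125829120 -> 120M, 188743680 -> 180M, 104857600 -> 100M
```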
> (1)Pls check step1, can find message 'rate adjusted from <XXX> to <YYY>
> from nbdkit when using option --bandwidth-file in v2v conversion, do
> you think these scenarios are enough to test option --bandwidth-file?

Yes, these messages are what I would expect to see.

(2) is quite difficult to test.  The problem is that what the --bandwidth* feature controls does not really match what either "virtual copying rate" or "real copying rate" measure, or in fact anything it is possible to measure from virt-v2v.  Consider:

  VMware server ------------> nbdkit ----> virt-v2v ---> output
                     A                 B              C

B is a loopback Unix domain socket between two local processes, so bandwidth is unimportant.

C is either writing to a local file, or writing to the target.  Virt-v2v should normally run close to the target, so bandwidth used here doesn't really matter.

The --bandwidth* feature controls A, which is the more distant connection to the remote VMware server.

The virtual and real copying rates measure C.  Also virt-v2v has sparsified the guest, so C < A.  However even A is hard to measure from virt-v2v because we should skip over sparse bits of the source disk and cache reads, and virt-v2v does not even see this.

So I don't know of a good way to test this except to somehow measure the actual network bandwidth used between the virt-v2v server and the VMware server.  But the results you got could be consistent with correct results.
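One rough way to approximate that measurement, offered only as a sketch and not as an established test procedure, is to sample the receive counters in /proc/net/dev on the conversion host while the copy is running. The interface name and sampling interval below are placeholder assumptions:

```shell
# Rough sketch: estimate the inbound rate on one interface over a short
# interval by diffing its receive-byte counter from /proc/net/dev.
# "lo" is a placeholder; use the interface that faces the VMware server.
iface=${IFACE:-lo}
interval=2
get_rx() {
    awk -F: -v dev="$iface" \
        '{gsub(/ /, "", $1)} $1 == dev {split($2, a, " "); print a[1]}' \
        /proc/net/dev
}
rx1=$(get_rx)
sleep "$interval"
rx2=$(get_rx)
echo "inbound on $iface: $(( (rx2 - rx1) * 8 / interval )) bits/sec"
```

Because this counts everything on the interface, it is only meaningful if the link to the VMware server is otherwise quiet, but it does measure the A connection rather than C.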
Thanks rjones. Moving the bug from ON_QA to VERIFIED according to comments 5 through 8.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (virt:8.3 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:5137