Bug 1927111

Summary:          Slow conversion speed while converting VMware VMs to RHV
Product:          Red Hat Enterprise Linux 8
Component:        virt-v2v
Version:          8.3
Reporter:         nijin ashok <nashok>
Assignee:         Richard W.M. Jones <rjones>
QA Contact:       tingting zheng <tzheng>
CC:               branpise, fdelorey, juzhou, kkiwi, mxie, tyan, tzheng, xiaodwan
Status:           CLOSED CANTFIX
Severity:         high
Priority:         low
Keywords:         Triaged
Target Milestone: rc
Target Release:   8.0
Hardware:         All
OS:               Linux
Type:             Bug
Last Closed:      2021-11-23 11:03:01 UTC

Description nijin ashok 2021-02-10 05:05:49 UTC
Description of problem:

While converting VMware VMs to RHV, the user is observing slow conversion speed. Some of the operations below take a very long time, which is causing the delay in the conversion.

The generation of the new initramfs image took 15+ minutes.

===
[ 1449.362481] dracut[1457] Executing: /sbin/dracut --verbose --add-drivers "virtio virtio_ring virtio_blk virtio_scsi virtio_net virtio_pci" /boot/initramfs-3.10.0-1062.18.1.el7.x86_64.img 3.10.0-1062.18.1.el7.x86_64^M
*** Creating initramfs image file '/boot/initramfs-3.10.0-1062.18.1.el7.x86_64.img' done ***^M
[ 2406.482080] dracut[1457] *** Creating initramfs image file '/boot/initramfs-3.10.0-1062.18.1.el7.x86_64.img' done ***^M
commandrvf: stdout=n stderr=n flags=0x0^M
commandrvf: umount /sysroot/sys^M
commandrvf: stdout=n stderr=n flags=0x0^M
commandrvf: umount /sysroot/proc^M
commandrvf: stdout=n stderr=n flags=0x0^M
commandrvf: umount /sysroot/dev/pts^M
commandrvf: stdout=n stderr=n flags=0x0^M
commandrvf: umount /sysroot/dev^M
renaming /sysroot/etc/6k7w5pmk to /sysroot/etc/resolv.conf^M
guestfsd: => command (0x32) took 1046.94 secs^M
===

The "rpm -e open-vm-tools" command took 10+ minutes to complete.

===
commandrvf: rpm -e open-vm-tools^M
libguestfs: trace: v2v: command = ""
libguestfs: trace: v2v: aug_load
guestfsd: => command (0x32) took 608.77 secs^M
===

And the SELinux relabel took around 5 minutes.

===
commandrvf: setfiles -F -e /sysroot/dev -e /sysroot/proc -e /sysroot/selinux -e /sysroot/sys -m -r /sysroot -v /sysroot/etc/selinux/targeted/contexts/files/file_contexts /sysroot/^M
libguestfs: trace: v2v: rm_f "/.autorelabel"
guestfsd: => selinux_relabel (0x1d3) took 273.91 secs^M
===

I can see the error below being logged at random points between the jobs above.

===
nbdkit: curl[2]: error: pread: curl_easy_perform: HTTP response code said error: The requested URL returned error: 401
nbdkit: curl[2]: debug: pread failed: original errno = 5
nbdkit: curl[2]: debug: retry 1: waiting 2 seconds before retrying
nbdkit: curl[2]: debug: curl: reopen readonly=1 exportname="/"
nbdkit: curl[2]: debug: curl: finalize
nbdkit: curl[2]: debug: curl: close
===
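For reference, a minimal sketch of the kind of nbdkit instance that sits behind these messages, assuming the default HTTPS (vCenter) transport: the curl plugin reads the guest's -flat.vmdk over HTTPS, and the retry filter is what produces the "waiting 2 seconds before retrying" lines by reopening the connection after errors such as the 401 above. The URL, credentials and datastore path here are illustrative assumptions, not values from this bug.

===
# Illustrative only: serve the guest disk read-only over HTTPS,
# retrying transient errors (such as the 401 above) before failing.
nbdkit --readonly --filter=retry curl \
    url='https://vcenter.example.com/folder/GuestVM/GuestVM-flat.vmdk?dcPath=Datacenter&dsName=datastore1' \
    user='administrator@vsphere.local' password=+/tmp/vcenter-passwd \
    sslverify=false \
    retries=5 retry-delay=2
===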

To test, the customer exported the same VM as an OVA, copied it to the hypervisor, and ran the conversion from there; the conversion speed was as expected. The issue only occurs when importing directly from VMware. (A roughly equivalent OVA conversion command is sketched below.)
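For comparison, a hedged sketch of what the OVA-based conversion might look like when run by hand; the OVA path, engine URL, storage domain and credential paths are illustrative assumptions:

===
# Convert from a locally copied OVA instead of reading directly from VMware.
virt-v2v -i ova /var/tmp/guestvm.ova \
    -o rhv-upload -oc https://ovirt.example.com/ovirt-engine/api \
    -op /tmp/ovirt-admin-passwd -os mydata \
    -oo rhv-cafile=/tmp/ca.pem -oo rhv-cluster=Default
===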

The customer also checked for possible errors on the VMware side and was not able to find any.

Version-Release number of selected component (if applicable):

nbdkit-1.22.0-2.module
libguestfs-1.42.0-2.module+el8.3.0+6798+ad6e66be.x86_64
virt-v2v-1.42.0-6.module+el8.3.0+7898+13f907d5.x86_64


How reproducible:

100% for the customer.


Steps to Reproduce:

1. Convert a VMware VM to RHV by importing it directly from VMware; a rough manual equivalent is sketched below.
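For reference, a hedged manual equivalent of the direct import over the default HTTPS (vpx) transport; all hostnames, the guest name and credential paths below are illustrative assumptions, not values from this bug:

===
# Illustrative only: read the guest directly from vCenter over HTTPS
# and upload the converted disks to an RHV data domain.
virt-v2v \
    -ic 'vpx://administrator%40vsphere.local@vcenter.example.com/Datacenter/esxi.example.com?no_verify=1' \
    -ip /tmp/vcenter-passwd \
    "GuestVM" \
    -o rhv-upload -oc https://ovirt.example.com/ovirt-engine/api \
    -op /tmp/ovirt-admin-passwd -os mydata \
    -oo rhv-cafile=/tmp/ca.pem -oo rhv-cluster=Default
===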

Actual results:

Slow conversion speed while converting VMware VMs to RHV

Expected results:


Additional info:

Comment 2 Richard W.M. Jones 2021-02-10 09:36:26 UTC
The biggest single change the customer could make to dramatically improve
performance would be to use VDDK instead of the HTTPS method.  Are they
running these conversions by hand?
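A hedged sketch of what such a manual VDDK-based conversion might look like, using the same illustrative source and destination as the reproduction sketch above but with the VDDK transport options added; the VDDK library path and server thumbprint are placeholders, not values from this bug:

===
# Illustrative only: same source and destination as above, but reading
# the disks through VDDK (-it vddk) instead of the HTTPS transport.
virt-v2v \
    -ic 'vpx://administrator%40vsphere.local@vcenter.example.com/Datacenter/esxi.example.com?no_verify=1' \
    -ip /tmp/vcenter-passwd \
    -it vddk \
    -io vddk-libdir=/opt/vmware-vix-disklib-distrib \
    -io vddk-thumbprint=xx:xx:xx:...:xx \
    "GuestVM" \
    -o rhv-upload -oc https://ovirt.example.com/ovirt-engine/api \
    -op /tmp/ovirt-admin-passwd -os mydata \
    -oo rhv-cafile=/tmp/ca.pem -oo rhv-cluster=Default
===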

Comment 3 nijin ashok 2021-02-11 02:55:31 UTC
They are importing directly through the RHV web portal, which doesn't have the option to use VDDK. I think that to use VDDK, they have to either convert by hand or use IMS.

Comment 4 nijin ashok 2021-03-03 10:03:30 UTC
The customer tried virt-v2v manually using the VDDK transport and was able to achieve good performance (10 minutes vs. 1 hour and 10 minutes).

The request to use VDDK in RHV was rejected in bug 1933656. If there is nothing else to do, we can close this bug.

Comment 9 Richard W.M. Jones 2021-11-23 11:03:01 UTC
Closing CANTFIX since this is an inherent problem with VMware, and
alternatives are available even if RHV doesn't want to use them.