Bug 2040778 - nbdkit curl errors when converting a guest from VMware without VDDK using the administrator account [rhel-av-8.5.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: virt-v2v
Version: 8.6
Hardware: x86_64
OS: Unspecified
Priority: urgent
Severity: medium
Target Milestone: rc
Target Release: 8.5
Assignee: Richard W.M. Jones
QA Contact: mxie@redhat.com
URL:
Whiteboard:
Depends On: 2018173 2040772
Blocks:
 
Reported: 2022-01-14 16:44 UTC by RHEL Program Management Team
Modified: 2023-04-03 00:10 UTC (History)

Fixed In Version: virt-v2v-1.42.0-16.el8_5
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2040772
Environment:
Last Closed: 2022-02-02 08:47:58 UTC
Type: ---
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-108210 0 None None None 2022-01-14 16:45:56 UTC
Red Hat Knowledge Base (Solution) 7005599 0 None None None 2023-04-03 00:10:50 UTC
Red Hat Product Errata RHSA-2022:0397 0 None None None 2022-02-02 08:48:13 UTC

Comment 2 Richard W.M. Jones 2022-01-15 09:59:10 UTC
For verification, please make sure you also have
nbdkit-1.24.0-3.el8_5 and the rest of the virt module
up to date in RHEL 8.5-z.

To tell whether the patch is in use, check that the
"cookie-script" option has been passed to nbdkit
while virt-v2v is running.  If nbdkit is running
but "cookie-script" is not among its arguments, then
the patch may not have been applied for some
reason, or the wrong thing is being tested.
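One way to sketch this check, assuming a Linux /proc filesystem and run while virt-v2v is mid-conversion (this helper is illustrative, not part of virt-v2v):

```shell
# Scan /proc for running nbdkit processes and report whether any was
# started with the "cookie-script" option (argv entries in /proc/PID/cmdline
# are NUL-separated, so convert them to spaces first).
found=0
for f in /proc/[0-9]*/cmdline; do
    args=$(tr '\0' ' ' < "$f" 2>/dev/null)
    case "$args" in
        *nbdkit*cookie-script*)
            found=1
            echo "cookie-script in use: pid $(basename "$(dirname "$f")")"
            ;;
    esac
done
[ "$found" -eq 1 ] || echo "no nbdkit with cookie-script found"
```

If the scan prints nothing but "no nbdkit with cookie-script found" while a conversion is copying disks, the patched code path is not in effect.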

Comment 3 mxie@redhat.com 2022-01-17 13:57:31 UTC
Reproduced the bug with the builds below:
virt-v2v-1.42.0-15.module+el8.5.0+12264+1ee0d523.x86_64
libguestfs-1.44.0-3.module+el8.5.0+10681+17a9b157.x86_64
nbdkit-1.24.0-1.module+el8.4.0+9341+96cf2672.x86_64
libvirt-libs-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64
qemu-img-6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_6


Steps to reproduce:
1. Convert a guest from ESXi 7.0 without VDDK, using the administrator account and a regular account that has the suffix "@vsphere.client"
1.1 # virt-v2v  -ic vpx://vsphere.local%5cAdministrator.198.169/data/10.73.199.217/?no_verify=1 -ip /home/passwd esx7.0-win11-x86_64 -o rhv-upload -of qcow2 -oc https://dell-per740-48.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -oo rhv-cluster=NFS -os nfs_data -b ovirtmgmt
[   1.0] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.198.169/data/10.73.199.217/?no_verify=1 esx7.0-win11-x86_64
[   3.8] Creating an overlay to protect the source from being modified
[   4.5] Opening the overlay
[  55.3] Inspecting the overlay
[ 425.8] Checking for sufficient free disk space in the guest
[ 425.8] Estimating space required on target for each disk
[ 425.8] Converting Windows 10 Enterprise to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 622.9] Mapping filesystem data to avoid copying unused and blank areas
[ 625.8] Closing the overlay
[ 626.1] Assigning disks to buses
[ 626.1] Checking if the guest needs BIOS or UEFI to boot
[ 626.1] Initializing the target -o rhv-upload -oc https://dell-per740-48.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data
[ 627.6] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/tmp/v2vnbdkit.VEqzQv/nbdkit4.sock", "file.export": "/" } (qcow2)
nbdkit: curl[3]: error: pread: curl_easy_perform: HTTP response code said error: The requested URL returned error: 503 Service Unavailable
nbdkit: curl[3]: error: problem doing HEAD request to fetch size of URL [https://10.73.198.169/folder/esx7.0-win11-x86%5f64/esx7.0-win11-x86%5f64-flat.vmdk?dcPath=data&dsName=esx7.0-matrix]: HTTP response code said error: The requested URL returned error: 503 Service Unavailable
.....
.....
nbdkit: curl[3]: error: problem doing HEAD request to fetch size of URL [https://10.73.198.169/folder/esx7.0-win11-x86%5f64/esx7.0-win11-x86%5f64-flat.vmdk?dcPath=data&dsName=esx7.0-matrix]: HTTP response code said error: The requested URL returned error: 503 Service Unavailable
qemu-img: error while reading at byte 1543307264: Cannot send after transport endpoint shutdown

nbdkit: python[1]: error: /tmp/v2v.jrC4Gx/rhv-upload-plugin.py: flush: error: Traceback (most recent call last):
   File "/tmp/v2v.jrC4Gx/rhv-upload-plugin.py", line 94, in wrapper
    return func(h, *args)
   File "/tmp/v2v.jrC4Gx/rhv-upload-plugin.py", line 350, in flush
    r = http.getresponse()
   File "/usr/lib64/python3.6/http/client.py", line 1361, in getresponse
    response.begin()
   File "/usr/lib64/python3.6/http/client.py", line 311, in begin
    version, status, reason = self._read_status()
   File "/usr/lib64/python3.6/http/client.py", line 280, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
 http.client.RemoteDisconnected: Remote end closed connection without response

virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]


1.2 # virt-v2v  -ic vpx://vsphere.local%5cmxie.198.169/data/10.73.199.217/?no_verify=1 -ip /home/passwd esx7.0-win11-x86_64 -o rhv-upload -of qcow2 -oc https://dell-per740-48.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -oo rhv-cluster=NFS -os nfs_data -b ovirtmgmt
[   0.9] Opening the source -i libvirt -ic vpx://vsphere.local%5cmxie.198.169/data/10.73.199.217/?no_verify=1 esx7.0-win11-x86_64
[   3.7] Creating an overlay to protect the source from being modified
[   4.4] Opening the overlay
[  38.4] Inspecting the overlay
[ 401.2] Checking for sufficient free disk space in the guest
[ 401.2] Estimating space required on target for each disk
[ 401.2] Converting Windows 10 Enterprise to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 592.2] Mapping filesystem data to avoid copying unused and blank areas
[ 595.0] Closing the overlay
[ 595.3] Assigning disks to buses
[ 595.3] Checking if the guest needs BIOS or UEFI to boot
[ 595.3] Initializing the target -o rhv-upload -oc https://dell-per740-48.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data
[ 596.8] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/tmp/v2vnbdkit.TBciJb/nbdkit4.sock", "file.export": "/" } (qcow2)
nbdkit: curl[3]: error: pread: curl_easy_perform: HTTP response code said error: The requested URL returned error: 503 Service Unavailable
nbdkit: curl[3]: error: problem doing HEAD request to fetch size of URL [https://10.73.198.169/folder/esx7.0-win11-x86%5f64/esx7.0-win11-x86%5f64-flat.vmdk?dcPath=data&dsName=esx7.0-matrix]: HTTP response code said error: The requested URL returned error: 503 Service Unavailable
.....
.....
nbdkit: curl[3]: error: problem doing HEAD request to fetch size of URL [https://10.73.198.169/folder/esx7.0-win11-x86%5f64/esx7.0-win11-x86%5f64-flat.vmdk?dcPath=data&dsName=esx7.0-matrix]: HTTP response code said error: The requested URL returned error: 503 Service Unavailable
qemu-img: error while reading at byte 1562181632: Cannot send after transport endpoint shutdown

nbdkit: python[1]: error: /tmp/v2v.3QDIxc/rhv-upload-plugin.py: flush: error: Traceback (most recent call last):
   File "/tmp/v2v.3QDIxc/rhv-upload-plugin.py", line 94, in wrapper
    return func(h, *args)
   File "/tmp/v2v.3QDIxc/rhv-upload-plugin.py", line 350, in flush
    r = http.getresponse()
   File "/usr/lib64/python3.6/http/client.py", line 1361, in getresponse
    response.begin()
   File "/usr/lib64/python3.6/http/client.py", line 311, in begin
    version, status, reason = self._read_status()
   File "/usr/lib64/python3.6/http/client.py", line 280, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
 http.client.RemoteDisconnected: Remote end closed connection without response

virt-v2v: error: qemu-img command failed, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]
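The 503 loop in the transcripts above occurs when the vSphere session cookie obtained at startup expires partway through the long copy, after which every HEAD/pread request replays the stale cookie and fails. The fix has virt-v2v hand nbdkit's curl plugin a cookie-script so the session can be renewed. A hypothetical sketch of what such a renewal script could look like (host name and credentials are placeholders, not from this report, and the actual script virt-v2v generates may differ):

```shell
# Hypothetical cookie-script: log in to the ESXi/vCenter host and print a
# fresh session cookie on stdout for nbdkit to use on subsequent requests.
# VCENTER and CREDS are illustrative placeholders.
VCENTER="vcenter.example.com"
CREDS="user:password"

renew_cookie() {
    # A HEAD request to the datastore browser endpoint returns a new
    # session cookie in the Set-Cookie response header.
    curl -s -k -I -u "$CREDS" "https://$VCENTER/folder" |
        sed -n 's/^[Ss]et-[Cc]ookie: *\([^;]*\).*/\1/p'
}

# In real use nbdkit would consume the output of renew_cookie; the call is
# left commented out here so the sketch has no network side effects.
# renew_cookie
```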



Verified the fix with the builds below:
virt-v2v-1.42.0-16.module+el8.5.0+13900+a08c0464.x86_64
libguestfs-1.44.0-3.module+el8.5.0+10681+17a9b157.x86_64
nbdkit-1.24.0-3.module+el8.5.0+13900+a08c0464.x86_64
libvirt-libs-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64
qemu-img-6.0.0-33.module+el8.5.0+13740+349232b6.2.x86_64


Steps:
1. Convert a guest from ESXi 7.0 without VDDK, using the administrator account and a regular account that has the suffix "@vsphere.client"

1.1 # virt-v2v  -ic vpx://vsphere.local%5cAdministrator.198.169/data/10.73.199.217/?no_verify=1 -ip /home/passwd esx7.0-rhel8.5-x86_64 -o rhv-upload -of qcow2 -oc https://dell-per740-48.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -oo rhv-cluster=NFS -os nfs_data -b ovirtmgmt
[   0.7] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.198.169/data/10.73.199.217/?no_verify=1 esx7.0-rhel8.5-x86_64
[   3.5] Creating an overlay to protect the source from being modified
[   4.5] Opening the overlay
[  38.3] Inspecting the overlay
[ 449.1] Checking for sufficient free disk space in the guest
[ 449.1] Estimating space required on target for each disk
[ 449.1] Converting Red Hat Enterprise Linux 8.5 (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[2364.7] Mapping filesystem data to avoid copying unused and blank areas
[2368.0] Closing the overlay
[2368.3] Assigning disks to buses
[2368.3] Checking if the guest needs BIOS or UEFI to boot
[2368.3] Initializing the target -o rhv-upload -oc https://dell-per740-48.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data
[2369.8] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/tmp/v2vnbdkit.VrJ2C3/nbdkit4.sock", "file.export": "/" } (qcow2)
    (100.00/100%)
[4105.9] Creating output metadata
[4107.8] Finishing off

1.2 # virt-v2v  -ic vpx://vsphere.local%5cmxie.198.169/data/10.73.199.217/?no_verify=1 -ip /home/passwd esx7.0-win11-x86_64 -o rhv-upload -of qcow2 -oc https://dell-per740-48.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -oo rhv-cluster=NFS -os nfs_data -b ovirtmgmt
[   0.7] Opening the source -i libvirt -ic vpx://vsphere.local%5cmxie.198.169/data/10.73.199.217/?no_verify=1 esx7.0-win11-x86_64
[   3.5] Creating an overlay to protect the source from being modified
[   4.5] Opening the overlay
[  33.8] Inspecting the overlay
[ 340.2] Checking for sufficient free disk space in the guest
[ 340.2] Estimating space required on target for each disk
[ 340.2] Converting Windows 10 Enterprise to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 503.2] Mapping filesystem data to avoid copying unused and blank areas
[ 505.1] Closing the overlay
[ 505.4] Assigning disks to buses
[ 505.4] Checking if the guest needs BIOS or UEFI to boot
[ 505.4] Initializing the target -o rhv-upload -oc https://dell-per740-48.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data
[ 506.9] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/tmp/v2vnbdkit.mmioKK/nbdkit4.sock", "file.export": "/" } (qcow2)
    (100.00/100%)
[4521.9] Creating output metadata
[4523.7] Finishing off

1.3 Checked the guests after v2v conversion; all guest checkpoints passed

2. Convert a guest from ESXi 6.7 without VDDK, using the administrator account and a regular account that has the suffix "@vsphere.client"

2.1 # virt-v2v  -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 -ip /home/passwd esx6.7-rhel8.4-x86_64 -o rhv-upload -of qcow2 -oc https://dell-per740-48.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -oo rhv-cluster=NFS -os nfs_data -b ovirtmgmt
[   0.7] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.73.141/data/10.73.75.219/?no_verify=1 esx6.7-rhel8.4-x86_64
[   3.7] Creating an overlay to protect the source from being modified
[   4.7] Opening the overlay
[  44.1] Inspecting the overlay
[ 359.0] Checking for sufficient free disk space in the guest
[ 359.0] Estimating space required on target for each disk
[ 359.0] Converting Red Hat Enterprise Linux 8.4 (Ootpa) to run on KVM
virt-v2v: This guest has virtio drivers installed.
[1315.2] Mapping filesystem data to avoid copying unused and blank areas
[1330.9] Closing the overlay
[1331.2] Assigning disks to buses
[1331.2] Checking if the guest needs BIOS or UEFI to boot
[1331.2] Initializing the target -o rhv-upload -oc https://dell-per740-48.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data
[1332.6] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/tmp/v2vnbdkit.PjEAu1/nbdkit4.sock", "file.export": "/" } (qcow2)
    (100.00/100%)
[2365.9] Creating output metadata
[2367.7] Finishing off

2.2 # virt-v2v  -ic vpx://vsphere.local%5cmxie.73.141/data/10.73.75.219/?no_verify=1 -ip /home/passwd esx6.7-win2022-x86_64  -o rhv-upload -oc https://dell-per740-48.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd  -oo rhv-cluster=NFS -os nfs_data -b ovirtmgmt
[   0.7] Opening the source -i libvirt -ic vpx://vsphere.local%5cmxie.73.141/data/10.73.75.219/?no_verify=1 esx6.7-win2022-x86_64
[   3.7] Creating an overlay to protect the source from being modified
[   4.8] Opening the overlay
[  40.4] Inspecting the overlay
[ 442.6] Checking for sufficient free disk space in the guest
[ 442.6] Estimating space required on target for each disk
[ 442.6] Converting Windows Server 2022 Standard to run on KVM
virt-v2v: This guest has virtio drivers installed.
[ 666.4] Mapping filesystem data to avoid copying unused and blank areas
[ 668.4] Closing the overlay
[ 668.7] Assigning disks to buses
[ 668.7] Checking if the guest needs BIOS or UEFI to boot
[ 668.7] Initializing the target -o rhv-upload -oc https://dell-per740-48.lab.eng.pek2.redhat.com/ovirt-engine/api -op /home/rhvpasswd -os nfs_data
[ 670.2] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/tmp/v2vnbdkit.u5Lijt/nbdkit4.sock", "file.export": "/" } (raw)
    (100.00/100%)
[3990.7] Creating output metadata
[3992.6] Finishing off

2.3 Checked the guests after v2v conversion; all guest checkpoints passed


Result:
    The bug has been fixed with virt-v2v-1.42.0-16.module+el8.5.0+13900

Comment 6 mxie@redhat.com 2022-01-18 01:52:12 UTC
The bug has been fixed according to comment 3; moving the bug from ON_QA to VERIFIED.

Comment 8 errata-xmlrpc 2022-02-02 08:47:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: virt:av and virt-devel:av security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0397

