Bug 1591960 - v2v: Disable Nagle in -o rhv-upload mode
Summary: v2v: Disable Nagle in -o rhv-upload mode
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libguestfs
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Richard W.M. Jones
QA Contact: Virtualization Bugs
URL:
Whiteboard: V2V
Depends On:
Blocks:
 
Reported: 2018-06-15 21:07 UTC by Richard W.M. Jones
Modified: 2022-01-20 10:37 UTC
CC List: 10 users

Fixed In Version: libguestfs-1.38.2-6.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-30 07:45:56 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHEA-2018:3021 (Last Updated: 2018-10-30 07:46:59 UTC)

Description Richard W.M. Jones 2018-06-15 21:07:27 UTC
Description of problem:

Please apply the following patch to RHEL 7.6+.  It only
needs to be applied downstream because we're using Python 3
everywhere else.

https://www.redhat.com/archives/libguestfs/2018-June/msg00070.html

Version-Release number of selected component (if applicable):

libguestfs 1.38.2
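
Disabling Nagle is conventionally done with the TCP_NODELAY socket option. A minimal sketch of that general technique (the linked post has the actual change; the class name here is illustrative, and the Python 3 module name is used):

import socket
from http.client import HTTPSConnection  # httplib.HTTPSConnection on Python 2

class NoNagleHTTPSConnection(HTTPSConnection):
    def connect(self):
        HTTPSConnection.connect(self)
        # Disable Nagle's algorithm: push small segments (such as the
        # tiny JSON body of a zero request) onto the wire immediately
        # instead of waiting for the peer's delayed ACK.
        self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)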

Comment 2 Richard W.M. Jones 2018-06-18 16:36:37 UTC
There was a syntactic problem in the patch as posted.  This patch
should work and is also rebased against RHEL 7:

https://github.com/libguestfs/libguestfs/commit/5bee82d72e9bfe3b358e7780e63fbce0c55d09da

Comment 3 Nir Soffer 2018-06-18 18:10:50 UTC
Richard, do you know when we will have a build for testing?

Comment 4 Richard W.M. Jones 2018-06-18 19:02:39 UTC
This patch is included in the -6.12 package here:

https://people.redhat.com/~rjones/virt-v2v-RHEL-7.5-rhv-preview/

Comment 7 mlehrer 2018-06-26 08:22:44 UTC
@Nir, thanks for the heads-up. Testing was done on this and is following Richard's drops per the mailing list (last tested on 1.36.10-6.14); results will be shared soon.

Removing needinfo.

Comment 9 mxie@redhat.com 2018-07-09 09:42:25 UTC
Hi rjones,

  (1) Could you please tell me where the log info below can be found? Is it engine.log? I can't find such info in our engine.log.

.....
2018-06-12 17:04:01,750 INFO    (Thread-2) [images] Writing 52736 bytes
at offset 0 flush False to /path/to/image for ticket
374bec27-930d-4097-8e41-e4bc23324eb0

2018-06-12 17:04:01,790 INFO    (Thread-2) [directio] Operation stats:
<Clock(total=0.04, read=0.04, write=0.00)>
.....


 (2) I found the info below in engine.log with virt-v2v-1.36.10-6.10.rhvpreview.el7ev.x86_64; does it have the same meaning as the info above?
.....
2018-07-09 16:20:54,679+08 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (default task-4) [ccb7dc73-1362-4d29-acb7-070c24a814d7] -- executeIrsBrokerCommand: calling 'createVolume' with two new parameters: description and UUID

2018-07-09 16:20:55,024+08 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateImageVDSCommand] (default task-4) [ccb7dc73-1362-4d29-acb7-070c24a814d7] FINISH, CreateImageVDSCommand, return: 72836cc3-10d3-4f78-b8a6-55ab77391d83, log id: 597f3f1f
.....


  (3) I can find many entries that are delayed over 40 milliseconds in engine.log during the v2v conversion. Does this bug aim to fix all entries delayed over 40 milliseconds, or just the delay during image creation? If my understanding is wrong, could you please give me some suggestions?

Comment 10 Richard W.M. Jones 2018-07-09 09:53:45 UTC
(1) imageio logs are in /var/log/ovirt-imageio-daemon/ on the
ovirt node.

I don't know about (2) & (3).  If they are still a problem after
looking in the imageio log, then it's best to ask Nir Soffer.

Comment 11 mxie@redhat.com 2018-07-10 06:57:21 UTC
Trying to reproduce the bug with these builds:
virt-v2v-1.36.10-6.10.rhvpreview.el7ev.x86_64
libguestfs-1.36.10-6.10.rhvpreview.el7ev.x86_64
libvirt-3.9.0-14.el7_5.4.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.4.x86_64

Reproduce steps:
1. Convert a guest to a RHV 4.2 data domain with virt-v2v -o rhv-upload; copying took 2308.6 s:
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.75.182/data/10.73.72.61/?no_verify=1  esx6.0-win2016-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw --password-file /tmp/passwd -b ovirtmgmt
[   0.3] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.75.182/data/10.73.72.61/?no_verify=1 esx6.0-win2016-x86_64
[   2.1] Creating an overlay to protect the source from being modified
[   3.0] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[   4.6] Opening the overlay
[  80.3] Inspecting the overlay
[ 198.6] Checking for sufficient free disk space in the guest
[ 198.6] Estimating space required on target for each disk
[ 198.6] Converting Windows Server 2016 Standard to run on KVM
virt-v2v: warning: /usr/share/virt-tools/pnp_wait.exe is missing.  
Firstboot scripts may conflict with PnP.
virt-v2v: warning: there is no QXL driver for this version of Windows (10.0 
x86_64).  virt-v2v looks for this driver in /usr/share/virtio-win

The guest will be configured to use a basic VGA display driver.
virt-v2v: This guest has virtio drivers installed.
[ 231.0] Mapping filesystem data to avoid copying unused and blank areas
[ 233.8] Closing the overlay
[ 234.4] Checking if the guest needs BIOS or UEFI to boot
[ 234.4] Assigning disks to buses
[ 234.4] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.g6ABRe/nbdkit1.sock", "file.export": "/" } (raw)
    (100.00/100%)
[2308.6] Creating output metadata
Traceback (most recent call last):
  File "/var/tmp/rhvupload.g6ABRe/rhv-upload-createvm.py", line 95, in <module>
    data = ovf,
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/services.py", line 33829, in add
    return self._internal_add(vm, headers, query, wait)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 232, in _internal_add
    return future.wait() if wait else future
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 55, in wait
    return self._code(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 229, in callback
    self._check_fault(response)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 132, in _check_fault
    self._raise_error(response, body)
  File "/usr/lib64/python2.7/site-packages/ovirtsdk4/service.py", line 118, in _raise_error
    raise error
ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "failed to parse a given ovf configuration ovf error: [empty name]: cannot read '//*/disksection' with value: null". HTTP response code is 400.
virt-v2v: error: failed to create virtual machine, see earlier errors

If reporting bugs, run virt-v2v with debugging enabled and include the 
complete output:

  virt-v2v -v -x [...]

2. Check daemon.log in /var/log/ovirt-imageio-daemon on the registered host of RHV 4.2; lots of PUT requests are delayed over 40 milliseconds during the virt-v2v upload to oVirt:

....
2018-07-10 10:49:31,502 INFO    (Thread-32) [images] Writing 1671168 bytes at offset 10780672 flush False to /rhev/data-center/mnt/10.66.144.40:_home_nfs__data/484255cb-cf10-4d22-ba81-32fdb29f0d21/images/5315df64-3322-423a-b0c8-9119e905688e/faeb50b2-6ec9-420f-970a-4463c2328f47 for ticket 0f097855-3081-48de-8ace-98701b69f461

2018-07-10 10:49:31,592 INFO    (Thread-32) [directio] Operation stats: <Clock(total=0.09, read=0.03, write=0.06)>
....




Verifying the bug with these builds:
virt-v2v-1.38.2-6.el7.x86_64
libguestfs-1.38.2-6.el7.x86_64
libvirt-4.5.0-2.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64

Steps:
1. Convert a guest to a RHV 4.2 data domain with virt-v2v -o rhv-upload; copying took 2139.4 s:
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.75.182/data/10.73.72.61/?no_verify=1  esx6.0-win2016-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw --password-file /tmp/passwd -b ovirtmgmt
[   0.3] Opening the source -i libvirt -ic vpx://vsphere.local%5cAdministrator.75.182/data/10.73.72.61/?no_verify=1 esx6.0-win2016-x86_64
[   2.0] Creating an overlay to protect the source from being modified
[   3.0] Initializing the target -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -op /tmp/rhvpasswd -os nfs_data
[   4.6] Opening the overlay
[  25.6] Inspecting the overlay
[ 136.2] Checking for sufficient free disk space in the guest
[ 136.2] Estimating space required on target for each disk
[ 136.2] Converting Windows Server 2016 Standard to run on KVM
virt-v2v: warning: /usr/share/virt-tools/pnp_wait.exe is missing.  
Firstboot scripts may conflict with PnP.
virt-v2v: warning: there is no QXL driver for this version of Windows (10.0 
x86_64).  virt-v2v looks for this driver in 
/usr/share/virtio-win/virtio-win.iso

The guest will be configured to use a basic VGA display driver.
virt-v2v: This guest has virtio drivers installed.
[ 166.6] Mapping filesystem data to avoid copying unused and blank areas
[ 168.7] Closing the overlay
[ 169.5] Checking if the guest needs BIOS or UEFI to boot
[ 169.5] Assigning disks to buses
[ 169.5] Copying disk 1/1 to qemu URI json:{ "file.driver": "nbd", "file.path": "/var/tmp/rhvupload.dW7QbF/nbdkit1.sock", "file.export": "/" } (raw)
    (100.00/100%)
[2118.2] Creating output metadata
[2139.4] Finishing off



2. Check daemon.log in /var/log/ovirt-imageio-daemon on the registered host of RHV 4.2; lots of PUT requests are still delayed over 40 milliseconds during the virt-v2v upload to oVirt. Please check the logs "daemon-v2v-.log and daemon-new-v2v.log.1":
....
2018-07-10 13:47:08,549 INFO    (Thread-35100) [images] Writing 1288192 bytes at offset 8214528 flush False to /rhev/data-center/mnt/10.66.144.40:_home_nfs__data/484255cb-cf10-4d22-ba81-32fdb29f0d21/images/22fa42ac-4621-4032-9a4d-3c549e717cea/c2215e84-6a61-483d-a7bb-d7c08b4d5e29 for ticket da1b2c43-cd7c-4e59-8d33-133ab1fd47db

2018-07-10 13:47:08,652 INFO    (Thread-35100) [directio] Operation stats: <Clock(total=0.10, read=0.02, write=0.08)>
....



Hi Nir,

   Could you please help check the above results? It seems the fix didn't save much time during v2v conversion to RHV 4.2 with rhv-upload, and there are still lots of PUT requests delayed over 40 milliseconds in the daemon log.

Thanks

Comment 12 Nir Soffer 2018-07-10 08:09:33 UTC
(In reply to mxie from comment #11)
> 2.Check daemon.log in /var/log/ovirt-imageio-daemon on registered host of
> rhv4.2, still can find lots of PUT requests delayed over 40 milliseconds

A request taking more than 40 milliseconds is not the issue fixed by disabling
the Nagle algorithm.

> 2018-07-10 13:47:08,549 INFO    (Thread-35100) [images] Writing 1288192
> bytes at offset 8214528 flush False to
> /rhev/data-center/mnt/10.66.144.40:_home_nfs__data/484255cb-cf10-4d22-ba81-
> 32fdb29f0d21/images/22fa42ac-4621-4032-9a4d-3c549e717cea/c2215e84-6a61-483d-
> a7bb-d7c08b4d5e29 for ticket da1b2c43-cd7c-4e59-8d33-133ab1fd47db
> 
> 2018-07-10 13:47:08,652 INFO    (Thread-35100) [directio] Operation stats:
> <Clock(total=0.10, read=0.02, write=0.08)>

What we see here is a write request of 1288192 bytes. imageio spent about 20
milliseconds reading the payload (61 MiB/s read rate), and then spent about 80
milliseconds writing to storage (15.3 MiB/s). The only issue seen here is
extremely slow storage.

An example of the delay in older versions is:
- Zero request taking about 40 milliseconds to read the payload
- Small write request taking about 40 milliseconds to read the payload

With imageio 1.3 (what you tested) it is quite hard to see this issue. You need to
look at multiple log lines for the same request:

2018-06-12 17:16:22,374 INFO    (Thread-17332) [web] START [10.35.68.26] PATCH /images/374bec27-930d-4097-8e41-e4bc23324eb0
2018-06-12 17:16:22,413 INFO    (Thread-17332) [images] Zeroing 24576 bytes at offset 4294418432 flush False to /rhev/data-center/mnt/blockSD/e30bfac2-8e13-479d-8cd6-c6da5e306c4e/images/80d9019c-df47-4f22-8b3e-13528c65eb2b/dc57f66b-c769-4f79-9bd4-c8e8b32756bc for ticket 374bec27-930d-4097-8e41-e4bc23324eb0
2018-06-12 17:16:22,414 INFO    (Thread-17332) [directio] Operation stats: <Clock(total=0.00, write=0.00)>

We can see that a PATCH request started at 17:16:22,374, but the next log line
was at 17:16:22,413, about 40 milliseconds later. During this time the server
was reading the tiny JSON payload sent for a PATCH/zero request.
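
A crude way to scan a 1.3-style daemon.log for such gaps, sketched under the assumption that the timestamp and thread format match the samples above (note it will also flag threads that were merely idle between requests):

import re
from datetime import datetime

TS_RE = re.compile(r"^(\S+ \S+) \w+\s+\((Thread-\d+)\)")

last_seen = {}  # thread name -> timestamp of its previous log line
with open("daemon.log") as f:
    for line in f:
        m = TS_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S,%f")
        thread = m.group(2)
        prev = last_seen.get(thread)
        if prev is not None and (ts - prev).total_seconds() > 0.035:
            print("~40 ms gap before:", line.rstrip())
        last_seen[thread] = ts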

Since imageio 1.4.0 we have improved logs, showing how time was spent during
the entire request.

Here are example PATCH request logs in imageio 1.4.2 (should be available in
RHV 4.2.5):

2018-07-08 19:23:23,854 INFO    (Thread-30) [web] START: [^@] PATCH /images/test
2018-07-08 19:23:23,855 INFO    (Thread-30) [images] Zeroing 65536 bytes at offset 1882193920 flush False to /rhev/data-center/a0011271-88a4-491f-a566-aec38b2000e9/08272182-9fb1-4609-bd3b-02
46b66eafa3/images/30cadbb6-4537-4edf-9b7b-12fc9da818ab/6fe1f29e-6533-432d-8e70-0b31a95fd999 for ticket test
2018-07-08 19:23:23,867 INFO    (Thread-30) [web] FINISH [^@] PATCH /images/test: [200] 0 [request=0.012983, operation=0.011807, zero=0.008980]

And an example PUT request:

2018-07-08 18:53:49,139 INFO    (Thread-3) [web] START: [^@] PUT /images/test
2018-07-08 18:53:49,140 INFO    (Thread-3) [images] Writing 196608 bytes at offset 168296448 flush False to /rhev/data-center/a0011271-88a4-491f-a566-aec38b2000e9/08272182-9fb1-4609-bd3b-0246b66eafa3/images/30cadbb6-4537-4edf-9b7b-12fc9da818ab/6fe1f29e-6533-432d-8e70-0b31a95fd999 for ticket test
2018-07-08 18:53:49,182 INFO    (Thread-3) [web] FINISH [^@] PUT /images/test: [200] 0 [request=0.042371, operation=0.039534, read=0.000173, write=0.036076]

As you can see, the entire request times are reported in the FINISH log.
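
Given that format, slow payload reads can be flagged mechanically. A sketch under the assumption that the FINISH line format matches the samples above (parse_finish is a hypothetical helper, not part of imageio):

import re

FINISH_RE = re.compile(r"FINISH .*\[\d+\] \d+ \[(?P<stats>[^\]]+)\]")

def parse_finish(line):
    """Return e.g. {'request': 0.042371, 'read': 0.000173, ...} or None."""
    m = FINISH_RE.search(line)
    if m is None:
        return None
    return {k: float(v)
            for k, v in (field.split("=")
                         for field in m.group("stats").split(", "))}

with open("daemon.log") as f:
    for line in f:
        stats = parse_finish(line)
        # A read time near a delayed-ACK interval (~40 ms) on a small
        # payload is the symptom this bug is about.
        if stats is not None and stats.get("read", 0.0) > 0.035:
            print("slow payload read:", line.rstrip())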

Comment 13 mxie@redhat.com 2018-07-10 09:38:17 UTC
Thanks for Nir's detailed explanation.

Trying to reproduce the bug with these builds:
virt-v2v-1.36.10-6.10.rhvpreview.el7ev.x86_64
libguestfs-1.36.10-6.10.rhvpreview.el7ev.x86_64
libvirt-3.9.0-14.el7_5.4.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.4.x86_64
ovirt-imageio-daemon-1.3.0-0.el7ev.noarch

Reproduce steps:
1. Convert a guest to a RHV 4.2 data domain with virt-v2v -o rhv-upload:
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.75.182/data/10.73.72.61/?no_verify=1  esx6.0-win2016-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw --password-file /tmp/passwd -b ovirtmgmt

2.Check daemon.log in /var/log/ovirt-imageio-daemon on registered host of rhv4.2

2.1 Almost every zero request takes about 40 milliseconds to read the payload, for example:
....
2018-07-10 10:49:28,951 INFO    (Thread-3) [web] START [10.73.72.77] PATCH /images/0f097855-3081-48de-8ace-98701b69f461
2018-07-10 10:49:28,991 INFO    (Thread-3) [images] Zeroing 65024 bytes at offset 512 flush False to /rhev/data-center/mnt/10.66.144.40:_home_nfs__data/484255cb-cf10-4d22-ba81-32fdb29f0d21/images/5315df64-3322-423a-b0c8-9119e905688e/faeb50b2-6ec9-420f-970a-4463c2328f47 for ticket 0f097855-3081-48de-8ace-98701b69f461
2018-07-10 10:49:29,016 INFO    (Thread-3) [directio] Operation stats: <Clock(total=0.03, write=0.02)>
2018-07-10 10:49:29,017 INFO    (Thread-3) [web] FINISH [10.73.72.77] PATCH /images/0f097855-3081-48de-8ace-98701b69f461: [200] 0 (0.07s)
....

2.2 But I didn't find any write request delayed over 2 milliseconds reading the payload, for example:
....
2018-07-10 10:49:35,136 INFO    (Thread-64) [web] START [10.73.72.77] PUT /images/0f097855-3081-48de-8ace-98701b69f461
2018-07-10 10:49:35,136 INFO    (Thread-64) [images] Writing 3584 bytes at offset 64208896 flush False to /rhev/data-center/mnt/10.66.144.40:_home_nfs__data/484255cb-cf10-4d22-ba81-32fdb29f0d21/images/5315df64-3322-423a-b0c8-9119e905688e/faeb50b2-6ec9-420f-970a-4463c2328f47 for ticket 0f097855-3081-48de-8ace-98701b69f461
2018-07-10 10:49:35,151 INFO    (Thread-64) [directio] Operation stats: <Clock(total=0.01, read=0.00, write=0.01)>
2018-07-10 10:49:35,152 INFO    (Thread-64) [web] FINISH [10.73.72.77] PUT /images/0f097855-3081-48de-8ace-98701b69f461: [200] 0 (0.01s)
.....



Verifying the bug with these builds:
virt-v2v-1.38.2-6.el7.x86_64
libguestfs-1.38.2-6.el7.x86_64
libvirt-4.5.0-2.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64
ovirt-imageio-daemon-1.3.0-0.el7ev.noarch

Steps:
1. Convert a guest to a RHV 4.2 data domain with virt-v2v -o rhv-upload:
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.75.182/data/10.73.72.61/?no_verify=1  esx6.0-win2016-x86_64 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw --password-file /tmp/passwd -b ovirtmgmt

2.Check daemon.log in /var/log/ovirt-imageio-daemon on registered host of rhv4.2

2.1 All zero requests take less than 2 milliseconds to read the payload, for example:
....
2018-07-10 13:36:55,225 INFO    (Thread-34617) [web] START [10.73.72.77] PATCH /images/da1b2c43-cd7c-4e59-8d33-133ab1fd47db
2018-07-10 13:36:55,225 INFO    (Thread-34617) [images] Zeroing 33554432 bytes at offset 134217728 flush False to /rhev/data-cente/mnt/10.66.144.40:_home_nfs__data/484255cb-cf10-4d22-ba81-32fdb29f0d21/images/22fa42ac-4621-4032-9a4d-3c549e717cea/c2215e84-6a61-483d-a7bb-d7c08b4d5e29 for ticket da1b2c43-cd7c-4e59-8d33-133ab1fd47db
2018-07-10 13:36:56,588 INFO    (Thread-34617) [directio] Operation stats: <Clock(total=1.36, write=1.36)>
2018-07-10 13:36:56,588 INFO    (Thread-34617) [web] FINISH [10.73.72.77] PATCH /images/da1b2c43-cd7c-4e59-8d33-133ab1fd47db: [200] 0 (1.36s)
....

2.2 All write requests take less than 2 milliseconds to read the payload, for example:
....
2018-07-10 14:02:30,862 INFO    (Thread-43891) [web] START [10.73.72.77] PUT /images/da1b2c43-cd7c-4e59-8d33-133ab1fd47db
2018-07-10 14:02:30,862 INFO    (Thread-43891) [images] Writing 41984 bytes at offset 9752967680 flush False to /rhev/data-center/mnt/10.66.144.40:_home_nfs__data/484255cb-cf10-4d22-ba81-32fdb29f0d21/images/22fa42ac-4621-4032-9a4d-3c549e717cea/c2215e84-6a61-483d-a7bb-d7c08b4d5e29 for ticket da1b2c43-cd7c-4e59-8d33-133ab1fd47db
2018-07-10 14:02:30,895 INFO    (Thread-43891) [directio] Operation stats: <Clock(total=0.03, read=0.00, write=0.03)>
2018-07-10 14:02:30,895 INFO    (Thread-43891) [web] FINISH [10.73.72.77] PUT /images/da1b2c43-cd7c-4e59-8d33-133ab1fd47db: [200] 0 (0.04s)
....


Hi Nir,
  
   After comparing the reproduction and verification results, I think this bug aims to fix the problem of zero requests taking about 40 milliseconds to read the payload by disabling Nagle; write requests never seem to show the problem. Am I right?

   Could you please help check whether the above steps are enough to verify the bug?

Comment 14 Nir Soffer 2018-07-10 10:19:39 UTC
(In reply to mxie from comment #13)
> ovirt-imageio-daemon-1.3.0-0.el7ev.noarch

I don't know why you are testing this version instead of the latest version
available in the RHV 4.2.5 build (1.4.1).

>    Could you please help to check whether above steps are enough to verify
> the bug?

Looks good.

Comment 15 mxie@redhat.com 2018-07-10 14:40:31 UTC
(In reply to Nir Soffer from comment #14)
> (In reply to mxie from comment #13)
> > ovirt-imageio-daemon-1.3.0-0.el7ev.noarch
> 
> I don't know why you are testing this version, instead of latest version 
> available in RHV 4.2.5 build (1.4.1).

I wanted to confirm the bug is fixed on the virt-v2v side. Anyway, thanks for the reminder; I will verify the bug again after updating ovirt-imageio-daemon.



Verifying the bug with these builds:
virt-v2v-1.38.2-6.el7.x86_64
libguestfs-1.38.2-6.el7.x86_64
libvirt-4.5.0-2.el7.x86_64
qemu-kvm-rhev-2.12.0-7.el7.x86_64
ovirt-imageio-daemon-1.4.1-0.el7ev.noarch


Steps:
1. Convert a guest to a RHV 4.2 data domain with virt-v2v -o rhv-upload:
# virt-v2v -ic vpx://vsphere.local%5cAdministrator.75.182/data/10.73.72.61/?no_verify=1  esx6.0-win10-i386 -o rhv-upload -oc https://ibm-x3250m5-03.rhts.eng.pek2.redhat.com/ovirt-engine/api -os nfs_data -op /tmp/rhvpasswd -oo rhv-cafile=/home/ca.pem  -oo rhv-direct -of raw --password-file /tmp/passwd -b ovirtmgmt

2.Check daemon.log in /var/log/ovirt-imageio-daemon on registered host of rhv4.2

2.1 All zero requests take less than 2 milliseconds to read the payload, for example:
....
2018-07-10 22:15:46,024 INFO    (Thread-13) [web] START: [10.73.72.77] PATCH /images/3683cc01-6773-4c1a-a103-7a8ec34d8e79
2018-07-10 22:15:46,025 INFO    (Thread-13) [images] Zeroing 33554432 bytes at offset 234881024 flush False to /rhev/data-center/mnt/10.66.144.40:_home_nfs__data/484255cb-cf10-4d22-ba81-32fdb29f0d21/images/281a3f15-590f-47f5-805f-5ec7ddcfdc39/b208a1d7-3b15-472a-a2cc-4c80b788e770 for ticket 3683cc01-6773-4c1a-a103-7a8ec34d8e79
2018-07-10 22:15:46,517 INFO    (Thread-13) [web] FINISH [10.73.72.77] PATCH /images/3683cc01-6773-4c1a-a103-7a8ec34d8e79: [200] 0 [request=0.492542, operation=0.491749, write=0.490305]
....

2.2 All write requests take less than 2 milliseconds to read the payload, for example:
....
2018-07-10 22:20:30,296 INFO    (Thread-627) [web] START: [10.73.72.77] PUT /images/3683cc01-6773-4c1a-a103-7a8ec34d8e79
2018-07-10 22:20:30,296 INFO    (Thread-627) [images] Writing 2097152 bytes at offset 350879744 flush False to /rhev/data-center/mnt/10.66.144.40:_home_nfs__data/484255cb-cf10-4d22-ba81-32fdb29f0d21/images/281a3f15-590f-47f5-805f-5ec7ddcfdc39/b208a1d7-3b15-472a-a2cc-4c80b788e770 for ticket 3683cc01-6773-4c1a-a103-7a8ec34d8e79
2018-07-10 22:20:30,441 INFO    (Thread-627) [web] FINISH [10.73.72.77] PUT /images/3683cc01-6773-4c1a-a103-7a8ec34d8e79: [200] 0 [request=0.144560, operation=0.143920, read=0.101612, write=0.040294]
....

Result:
   According to comment 13 through comment 15, moving the bug from ON_QA to VERIFIED.

Comment 17 errata-xmlrpc 2018-10-30 07:45:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:3021

Comment 18 Laszlo Ersek 2022-01-20 08:45:35 UTC
Nir, can you explain *precisely* what fields to look at, in what log messages, to prove the request time improvement? (IOW, to demonstrate that latency due to Nagle's algorithm was eliminated?) Thank you.

Comment 19 Nir Soffer 2022-01-20 10:07:10 UTC
(In reply to Laszlo Ersek from comment #18)
> Nir, can you explain *precisely* what fields to look at, in what log
> messages, to prove the request time improvement? (IOW, to demonstrate that
> latency due to Nagle's algorithm was eliminated?) Thank you.

This was fixed in 2018, so I cannot give much detail. Comment 12
should have the info.

You can find more info in this imageio patch:
https://github.com/oVirt/ovirt-imageio/commit/2e1359a8ff73641ca8b745887119a012ed317053

The virt-v2v fix was based on this fix.

Comment 20 Nir Soffer 2022-01-20 10:18:26 UTC
Note the fix was needed only for Python 2.7; on Python 3, imageio uses
http.client.HTTPSConnection.connect():
https://github.com/oVirt/ovirt-imageio/blob/eebcfabce65e8089987364559cce0d652087fe46/ovirt_imageio/_internal/backends/http.py#L593

So rhv-upload-plugin can drop this change now.
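
A small diagnostic sketch for checking what a given Python/imageio combination actually does on the wire; nagle_disabled is a hypothetical helper, not an imageio API:

import socket

def nagle_disabled(conn):
    """conn: an http.client connection whose connect() has already run."""
    return bool(conn.sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))

# Usage sketch (the host is a placeholder):
#   conn = http.client.HTTPSConnection("imageio.example.com")
#   conn.connect()
#   print(nagle_disabled(conn))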

Comment 21 Richard W.M. Jones 2022-01-20 10:37:36 UTC
> So rhv-upload-plugin can drop this change now.

FWIW I don't believe the patch was ever added upstream.  I think
it was added in RHEL 7 downstream only, and dropped in RHEL 8.  (This
would be the right thing to do since RHEL 8 has Python >= 3.6).

