Bug 1676713 - ovirt-iso-uploader is very slow
Summary: ovirt-iso-uploader is very slow
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-iso-uploader
Classification: oVirt
Component: Core
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ovirt-4.3.3
Target Release: 4.3.1
Assignee: Sandro Bonazzola
QA Contact: Petr Matyáš
URL:
Whiteboard:
Depends On:
Blocks: 1627661
 
Reported: 2019-02-12 23:22 UTC by Greg Sheremeta
Modified: 2019-04-16 13:58 UTC
CC: 6 users

Fixed In Version: ovirt-iso-uploader-4.3.1-1
Clone Of:
Environment:
Last Closed: 2019-04-16 13:58:10 UTC
oVirt Team: Integration
Embargoed:
sbonazzo: ovirt-4.3?
lleistne: testing_ack+


Links
System        ID     Branch                  Status  Summary                Last Updated
oVirt gerrit  98653  master                  MERGED  nfs: use async option  2019-03-19 14:48:08 UTC
oVirt gerrit  98654  ovirt-iso-uploader-4.3  MERGED  nfs: use async option  2019-03-19 14:48:17 UTC

Description Greg Sheremeta 2019-02-12 23:22:57 UTC
Description of problem:
ovirt-iso-uploader is very slow

Originally reported by multiple people on ovirt users list
[ovirt-users] engine-iso-uploader very slow
[ovirt-users] engine-iso-uploader taking a REALLY long time

Reproduced by Sandro:
"""
I reproduced it as well on my system; it looks like something is making the copy of the file over NFS very slow. The iso uploader copies 16k at a time from source to destination using an NFS mount, but the transfer rate is ridiculously slow, and on the iso uploader side I don't see any obvious issue. It feels like something is limiting the transfer at the network layer, but I can't see what it is.
"""

Version-Release number of selected component (if applicable):
oVirt 4.2.7 (running on CentOS 7.6.1810)

How reproducible:
very

Steps to Reproduce:
1. upload a single ISO using ovirt-iso-uploader

Actual results:
slow

Expected results:
fast

Additional info:
Yes, ovirt-iso-uploader is deprecated along with ISO domains.
https://bugzilla.redhat.com/show_bug.cgi?id=1673059 tracks adding deprecation messages, but the slowness is still a valid bug.

Comment 1 Greg Sheremeta 2019-02-12 23:25:35 UTC
@Dominik, any ideas? Could this be a network problem?

Comment 2 Dominik Holler 2019-02-13 09:10:42 UTC
(In reply to Greg Sheremeta from comment #1)
> @Dominik, any ideas? Could this be a network problem?

> I reproduced it as well on my system; it looks like something is making the copy of the file over NFS very slow.

I cannot reproduce locally.
Sandro, would you please share the relevant line of your /etc/exports,
the verbose output of engine-iso-uploader,
the relevant output of mount during the upload,
and the output of nfsstat before and after the upload?

Is the performance better if you change from sync to async in
engine-iso-uploader's NFS_MOUNT_OPTS in
/usr/lib/python2.7/site-packages/ovirt_iso_uploader/__main__.py ?
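For clarity, the suggested change amounts to something like the following (a sketch; the surrounding code is not shown in this bug, and only the constant name NFS_MOUNT_OPTS plus the option string from the verbose output below come from the report):

# before: every NFS WRITE must reach stable storage before it is acknowledged
NFS_MOUNT_OPTS = 'rw,sync,soft'
# after: the server may buffer writes and acknowledge immediately
NFS_MOUNT_OPTS = 'rw,async,soft'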

Comment 3 Dominik Holler 2019-03-06 09:01:36 UTC
restoring needinfo

Comment 4 Sandro Bonazzola 2019-03-18 15:58:38 UTC
(In reply to Dominik Holler from comment #2)
> (In reply to Greg Sheremeta from comment #1)
> > @Dominik, any ideas? Could this be a network problem?
> 
> > I reproduced it as well on my system; it looks like something is making the copy of the file over NFS very slow.
> 
> I cannot reproduce locally.
> Sandro, would you please share the relevant line of your /etc/exports,

[root@mididell ~]# cat /etc/exports
/export/data  *(rw,no_root_squash)
/export/iso  *(rw,no_root_squash)


> the verbose output of engine-iso-uploader,
> the relevant output of mount during the upload,

[root@ovirt4 ~]# ovirt-iso-uploader -i iso --verbose upload virtio-win-0.1.160.iso 
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort): 
DEBUG: API Vendor(None)	API Version(4.3.0)
DEBUG: id=422c62c5-5a33-4bee-8952-da3cb2a67938 address=mididell.home path=/export/iso
Uploading, please wait...
DEBUG: local NFS mount point is /tmp/tmpPnGCmz
DEBUG: NFS mount command (/bin/mount -t nfs -o rw,sync,soft mididell.home:/export/iso /tmp/tmpPnGCmz)
DEBUG: /bin/mount -t nfs -o rw,sync,soft mididell.home:/export/iso /tmp/tmpPnGCmz
DEBUG: _cmds(['/bin/mount', '-t', 'nfs', '-o', 'rw,sync,soft', 'mididell.home:/export/iso', '/tmp/tmpPnGCmz'])
DEBUG: returncode(0)
DEBUG: STDOUT()
DEBUG: STDERR()
INFO: Start uploading virtio-win-0.1.160.iso 
DEBUG: Size of virtio-win-0.1.160.iso:	322842624 bytes	315276.01K-blocks	307.0 MB
DEBUG: Available space in /tmp/tmpPnGCmz/422c62c5-5a33-4bee-8952-da3cb2a67938/images/11111111-1111-1111-1111-111111111111:	93589602304 bytes	91396096.01K-blocks	89254.0 MB
DEBUG: euid(0) egid(0)


Mount shows:
mididell.home:/export/iso on /tmp/tmpPnGCmz type nfs4 (rw,relatime,sync,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.112,local_lock=none,addr=192.168.1.107)



> and the output of nfsstat before and after the upload?

Before upload:
[root@mididell ~]# nfsstat
Server rpc stats:
calls      badcalls   badclnt    badauth    xdrcall
76190      0          0          0          0       

Server nfs v4:
null         compound     
3         0% 76187    99% 

Server nfs v4 operations:
op0-unused   op1-unused   op2-future   access       close        commit       
0         0% 0         0% 0         0% 18167     6% 13779     4% 36        0% 
create       delegpurge   delegreturn  getattr      getfh        link         
76        0% 0         0% 17        0% 57040    20% 659       0% 0         0% 
lock         lockt        locku        lookup       lookup_root  nverify      
0         0% 0         0% 0         0% 505       0% 0         0% 0         0% 
open         openattr     open_conf    open_dgrd    putfh        putpubfh     
13782     4% 0         0% 0         0% 0         0% 76289    27% 0         0% 
putrootfh    read         readdir      readlink     remove       rename       
10        0% 17465     6% 103       0% 0         0% 227       0% 128       0% 
renew        restorefh    savefh       secinfo      setattr      setcltid     
0         0% 0         0% 128       0% 0         0% 155       0% 0         0% 
setcltidconf verify       write        rellockowner bc_ctl       bind_conn    
0         0% 0         0% 4264      1% 0         0% 0         0% 0         0% 
exchange_id  create_ses   destroy_ses  free_stateid getdirdeleg  getdevinfo   
3         0% 3         0% 2         0% 0         0% 0         0% 0         0% 
getdevlist   layoutcommit layoutget    layoutreturn secinfononam sequence     
0         0% 0         0% 0         0% 0         0% 5         0% 76175    27% 
set_ssv      test_stateid want_deleg   destroy_clid reclaim_comp 
0         0% 0         0% 0         0% 2         0% 3         0% 

After upload: I'll send it later; it will take a while :-)


> 
> Is the performance better if you change from sync to async in
> engine-iso-uploader's NFS_MOUNT_OPTS in
> /usr/lib/python2.7/site-packages/ovirt_iso_uploader/__main__.py ?

I'll try this once the upload finishes.

Comment 5 Sandro Bonazzola 2019-03-18 16:28:50 UTC
After upload:
[root@mididell ~]# nfsstat
Server rpc stats:
calls      badcalls   badclnt    badauth    xdrcall
124681     0          0          0          0       

Server nfs v4:
null         compound     
4         0% 124677   99% 

Server nfs v4 operations:
op0-unused   op1-unused   op2-future   access       close        commit       
0         0% 0         0% 0         0% 20509     4% 15573     3% 36        0% 
create       delegpurge   delegreturn  getattr      getfh        link         
76        0% 0         0% 17        0% 103149   21% 667       0% 0         0% 
lock         lockt        locku        lookup       lookup_root  nverify      
0         0% 0         0% 0         0% 512       0% 0         0% 0         0% 
open         openattr     open_conf    open_dgrd    putfh        putpubfh     
15576     3% 0         0% 0         0% 0         0% 124773   26% 0         0% 
putrootfh    read         readdir      readlink     remove       rename       
12        0% 19740     4% 104       0% 0         0% 227       0% 129       0% 
renew        restorefh    savefh       secinfo      setattr      setcltid     
0         0% 0         0% 129       0% 0         0% 169       0% 0         0% 
setcltidconf verify       write        rellockowner bc_ctl       bind_conn    
0         0% 0         0% 43599     9% 0         0% 0         0% 0         0% 
exchange_id  create_ses   destroy_ses  free_stateid getdirdeleg  getdevinfo   
4         0% 4         0% 3         0% 0         0% 0         0% 0         0% 
getdevlist   layoutcommit layoutget    layoutreturn secinfononam sequence     
0         0% 0         0% 0         0% 0         0% 6         0% 124661   26% 
set_ssv      test_stateid want_deleg   destroy_clid reclaim_comp 
0         0% 0         0% 0         0% 3         0% 4         0% 

Using async instead of sync is much faster; it seems to use the full bandwidth.
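A back-of-the-envelope reading of the two nfsstat dumps supports that (the interpretation is an assumption; the counters and the ISO size are taken from the output above):

iso_bytes = 322842624             # virtio-win-0.1.160.iso, per the DEBUG output
writes = 43599 - 4264             # WRITE ops after minus before = 39335
print(iso_bytes / float(writes))  # ~8.2 kB per WRITE on the wire

With sync, each of those ~39k WRITEs is a serialized, committed round trip; with async the server can acknowledge from memory and the client keeps the link full.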

Comment 6 Petr Matyáš 2019-04-04 09:06:16 UTC
Verified on ovirt-iso-uploader-4.3.1-1.el7ev.noarch

[root@engine ~]# ll test.iso -h
-rw-r--r--. 1 root root 379M Apr  4 10:56 test.iso

With sync:
[root@engine ~]# time ovirt-iso-uploader -i iso upload test.iso
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
Uploading, please wait...
INFO: Start uploading test.iso
Uploading: [########################################] 100%
INFO: test.iso uploaded successfully

real    0m54.642s
user    0m0.884s
sys     0m2.116s

With async:
[root@engine ~]# time ovirt-iso-uploader -i iso upload test.iso
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
Uploading, please wait...
INFO: Start uploading test.iso
Uploading: [########################################] 100%
INFO: test.iso uploaded successfully

real    0m8.114s
user    0m0.415s
sys     0m0.591s
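For a rough sanity check of those two timings, assuming the 16k application-level copy loop from the original description (wire-level chunking may differ):

iso_bytes = 379 * 1024 * 1024      # test.iso, ~379M per the ll output above
chunks = iso_bytes // (16 * 1024)  # 24,256 application-level writes
print(54.642 / chunks * 1000)      # sync: ~2.25 ms per committed write
print(iso_bytes / 8.114 / 1e6)     # async: ~49 MB/s, i.e. bandwidth-bound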

Comment 7 Sandro Bonazzola 2019-04-16 13:58:10 UTC
This bug is included in the oVirt 4.3.3 release, published on April 16th 2019.

Since the problem described in this bug report should be
resolved in the oVirt 4.3.3 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

