Description of problem:

imageio daemon and proxy do not use keep-alive connections. Clients
communicating with the daemon or proxy need to open a new connection for
every request. Opening a new connection is transparent when using Python's
HTTPSConnection, but it adds unnecessary delay to every request and
increases CPU time on both the client and the server.

Version-Release number of selected component (if applicable):
Any

How reproducible:
100%

Steps to Reproduce:
- Profile imageio; there is a call to 'do_handshake' of '_ssl._SSLSocket'
  for every request.
- This can probably also be verified with tcpdump/wireshark.

Actual results:
- one connection per request
- slower uploads
- high CPU usage

Expected results:
- the same connection is reused for all client requests
- faster uploads
- lower CPU usage

More info:

I uploaded this image:

[root@leopard04 upload]# qemu-img info RHEL7_1-001.raw
image: RHEL7_1-001.raw
file format: raw
virtual size: 4.0G (4294967296 bytes)
disk size: 1.4G

[root@leopard04 upload]# python image_info.py RHEL7_1-001.raw
total segments: 10167
data segments: 5084
min data: 4096
max data: 56295424
avg data: 301793
zero segments: 5083
min zero: 4096
max zero: 2146324480
avg zero: 543113

## Uploading using https

[root@leopard04 upload]# time ../ovirt-imageio/examples/upload RHEL7_1-001.raw https://leopard04.qa.lab.tlv.redhat.com:54322/images/test

real    1m23.171s
user    0m16.288s
sys     0m5.022s

$ profile-stats -c ovirt-imageio-daemon-https.prof
Fri Jun 15 02:45:34 2018    ovirt-imageio-daemon-https.prof

11169553 function calls (11184822 primitive calls) in 48.031 seconds

Ordered by: internal time
List reduced from 392 to 20 due to restriction <20>

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
10169  13.907  0.001  13.907  0.001  __builtin__:0(<method 'do_handshake' of '_ssl._SSLSocket' objects>)
105277  1.866  0.000  1.866  0.000  __builtin__:0(<method 'read' of '_ssl._SSLSocket' objects>)
5139  0.751  0.000  0.751  0.000  __builtin__:0(<method 'write' of 'mmap.mmap' objects>)
10477  0.734  0.000  0.734  0.000  __builtin__:0(<method 'write' of '_io.FileIO' objects>)
40677  0.730  0.000  0.730  0.000  __builtin__:0(<method 'write' of '_ssl._SSLSocket' objects>)
165284  0.690  0.000  0.690  0.000  __builtin__:0(<method 'write' of 'cStringIO.StringO' objects>)
30509  0.615  0.000  1.238  0.000  __init__.py:242(LogRecord.__init__)
66101  0.573  0.000  1.307  0.000  socket.py:406(_fileobject.readline)
61018  0.493  0.000  2.597  0.000  __init__.py:451(Formatter.format)
4130  0.471  0.000  0.471  0.000  __builtin__:0(<method 'getvalue' of 'cStringIO.StringO' objects>)
61018  0.460  0.000  0.460  0.000  time:0(localtime)
10170  0.416  0.000  1.517  0.000  rfc822.py:122(Message.readheaders)
10223  0.399  0.000  3.742  0.000  socket.py:336(_fileobject.read)
10395/10398  0.397  0.000  0.496  0.000  select:0(select)
10167  0.391  0.000  0.391  0.000  __builtin__:0(<method 'close' of 'mmap.mmap' objects>)
10167  0.367  0.000  0.367  0.000  directio.py:378(aligned_buffer)
61018  0.349  0.000  0.988  0.000  __init__.py:311(LogRecord.getMessage)
61018  0.322  0.000  0.958  0.000  __init__.py:404(Formatter.formatTime)
340903  0.304  0.000  0.399  0.000  __builtin__:0(getattr)
81361  0.303  0.000  1.547  0.000  socket.py:292(_fileobject.flush)

$ profile-stats -c -s cumtime ovirt-imageio-daemon-https.prof
Fri Jun 15 02:45:34 2018    ovirt-imageio-daemon-https.prof

11169553 function calls (11184822 primitive calls) in 48.031 seconds

Ordered by: cumulative time
List reduced from 392 to 20 due to restriction <20>

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
10169/10170  0.075  0.000  29.921  0.003  threading.py:754(Thread.run)
10169/10170  0.055  0.000  29.846  0.003  SocketServer.py:586(WSGIServer.process_request_thread)
10169/10170  0.117  0.000  29.112  0.003  SocketServer.py:332(WSGIServer.finish_request)
10169/10170  0.081  0.000  28.913  0.003  SocketServer.py:643(WSGIRequestHandler.__init__)
10169/10170  0.162  0.000  28.252  0.003  wsgi.py:37(WSGIRequestHandler.handle)
10169/10170  0.067  0.000  24.715  0.002  handlers.py:76(ServerHandler.run)
10169/10170  0.176  0.000  18.344  0.002  web.py:92(Application.__call__)
10170  0.073  0.000  17.350  0.002  SocketServer.py:282(WSGIServer._handle_request_noblock)
10169  0.035  0.000  15.622  0.002  SocketServer.py:457(WSGIServer.get_request)
10169  0.104  0.000  15.587  0.002  ssl.py:876(SSLSocket.accept)
10169  0.068  0.000  15.054  0.001  ssl.py:342(SSLContext.wrap_socket)
10169  0.247  0.000  14.986  0.001  ssl.py:524(SSLSocket.__init__)
10169  0.141  0.000  14.164  0.001  ssl.py:826(SSLSocket.do_handshake)
10169/10170  0.175  0.000  14.131  0.001  web.py:111(Application.dispatch)
10169  13.907  0.001  13.907  0.001  __builtin__:0(<method 'do_handshake' of '_ssl._SSLSocket' objects>)
5084  0.131  0.000  8.567  0.002  server.py:229(Images.put)
10168  0.048  0.000  7.518  0.001  tickets.py:92(Ticket.run)
30509  0.151  0.000  7.343  0.000  __init__.py:1139(Logger.info)
10168  0.051  0.000  7.338  0.001  directio.py:76(Receive.run)
30509  0.146  0.000  7.092  0.000  __init__.py:1249(Logger._log)

## Same upload using unix socket

[root@leopard04 upload]# time ../ovirt-imageio/examples/upload RHEL7_1-001.raw https://leopard04.qa.lab.tlv.redhat.com:54322/images/test

real    0m40.063s
user    0m4.843s
sys     0m2.262s

$ profile-stats -c ovirt-imageio-daemon-unix.prof
Fri Jun 15 03:04:34 2018    ovirt-imageio-daemon-unix.prof

9782672 function calls (9797941 primitive calls) in 28.829 seconds

Ordered by: internal time
List reduced from 377 to 20 due to restriction <20>

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
5139  0.807  0.000  0.807  0.000  __builtin__:0(<method 'write' of 'mmap.mmap' objects>)
10477  0.781  0.000  0.781  0.000  __builtin__:0(<method 'write' of '_io.FileIO' objects>)
30509  0.749  0.000  1.372  0.000  __init__.py:242(LogRecord.__init__)
84606  0.577  0.000  0.577  0.000  __builtin__:0(<method 'write' of 'cStringIO.StringO' objects>)
66101  0.573  0.000  1.195  0.000  socket.py:406(_fileobject.readline)
23149  0.513  0.000  0.513  0.000  __builtin__:0(<method 'recv' of '_socket.socket' objects>)
61018  0.496  0.000  2.604  0.000  __init__.py:451(Formatter.format)
61018  0.458  0.000  0.458  0.000  time:0(localtime)
10170  0.419  0.000  1.566  0.000  rfc822.py:122(Message.readheaders)
10167  0.408  0.000  0.408  0.000  __builtin__:0(<method 'close' of 'mmap.mmap' objects>)
61018  0.359  0.000  0.999  0.000  __init__.py:311(LogRecord.getMessage)
61018  0.317  0.000  0.950  0.000  __init__.py:404(Formatter.formatTime)
10167  0.310  0.000  0.310  0.000  directio.py:378(aligned_buffer)
10269/10272  0.297  0.000  0.389  0.000  select:0(select)
101691  0.293  0.000  0.293  0.000  __builtin__:0(<method 'acquire' of 'thread.lock' objects>)
10170  0.292  0.000  0.581  0.000  simple_server.py:73(WSGIRequestHandler.get_environ)
218635  0.290  0.000  0.407  0.000  descriptors.py:47(fget)
4694  0.286  0.000  0.286  0.000  __builtin__:0(<method 'getvalue' of 'cStringIO.StringO' objects>)
81361  0.273  0.000  0.497  0.000  socket.py:292(_fileobject.flush)
30509  0.272  0.000  0.272  0.000  __builtin__:0(<method 'flush' of 'file' objects>)
$ profile-stats -c -s cumtime ovirt-imageio-daemon-unix.prof
Fri Jun 15 03:04:34 2018    ovirt-imageio-daemon-unix.prof

9782672 function calls (9797941 primitive calls) in 28.829 seconds

Ordered by: cumulative time
List reduced from 377 to 20 due to restriction <20>

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
10169/10170  0.060  0.000  25.987  0.003  threading.py:754(Thread.run)
10169/10170  0.053  0.000  25.928  0.003  SocketServer.py:586(WSGIServer.process_request_thread)
10169/10170  0.095  0.000  25.547  0.003  SocketServer.py:332(WSGIServer.finish_request)
10169/10170  0.080  0.000  25.418  0.002  SocketServer.py:643(WSGIRequestHandler.__init__)
10169/10170  0.162  0.000  24.985  0.002  wsgi.py:37(WSGIRequestHandler.handle)
10169/10170  0.071  0.000  21.509  0.002  handlers.py:76(ServerHandler.run)
10169/10170  0.185  0.000  16.235  0.002  web.py:92(Application.__call__)
10169/10170  0.180  0.000  11.982  0.001  web.py:111(Application.dispatch)
30509  0.150  0.000  7.475  0.000  __init__.py:1139(Logger.info)
30509  0.152  0.000  7.227  0.000  __init__.py:1249(Logger._log)
5084  0.136  0.000  6.158  0.001  server.py:229(Images.put)
30509  0.094  0.000  5.340  0.000  __init__.py:1270(Logger.handle)
10168  0.050  0.000  5.216  0.001  tickets.py:92(Ticket.run)
30509  0.115  0.000  5.209  0.000  __init__.py:1302(Logger.callHandlers)
30509  0.161  0.000  5.094  0.000  __init__.py:736(RotatingFileHandler.handle)
10168  0.055  0.000  5.038  0.000  directio.py:76(Receive.run)
10170  0.078  0.000  4.967  0.000  handlers.py:117(ServerHandler.finish_response)
5084  0.030  0.000  4.715  0.001  server.py:306(Images.patch)
30509  0.128  0.000  4.423  0.000  handlers.py:68(RotatingFileHandler.emit)
5083  0.100  0.000  3.435  0.001  server.py:322(Images._zero)

The issue is the design of Python's WSGIRequestHandler, which handles one
request per connection, instead of many requests per connection like
BaseHTTPRequestHandler. We may find a better WSGI server (maybe uwsgi?),
but we need a server that lets us listen on multiple sockets, serving
HTTPS and HTTP over a unix socket. Even if we find a better server, WSGI
does not fit our use case: we need a lower level server that gives us
access to the underlying socket for both upload and download, instead of
the annoying app_iter. I think we should replace WSGI with BaseHTTPServer.
We also considered rewriting in Go, which has great HTTP infrastructure;
see https://github.com/nirs/imageio
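To illustrate the BaseHTTPServer direction, here is a minimal sketch (not
actual imageio code; the handler and port are made up). Setting
protocol_version to HTTP/1.1 makes BaseHTTPRequestHandler.handle() keep
serving requests on the same connection:

    import BaseHTTPServer

    class Handler(BaseHTTPServer.BaseHTTPRequestHandler):

        # Advertising HTTP/1.1 makes handle() loop over
        # handle_one_request(), serving many requests per connection.
        protocol_version = "HTTP/1.1"

        def do_PUT(self):
            # self.rfile and self.wfile give direct access to the
            # underlying socket, unlike WSGI's app_iter.
            length = int(self.headers.getheader("Content-Length", "0"))
            self.rfile.read(length)
            self.send_response(200)
            # With HTTP/1.1 every response must be framed (Content-Length
            # or chunked), otherwise the client cannot reuse the
            # connection.
            self.send_header("Content-Length", "0")
            self.end_headers()

    BaseHTTPServer.HTTPServer(("", 54322), Handler).serve_forever()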
I got keep-alive connections working with the Python builtin WSGI server.
The patch is not ready yet, but the results are very interesting.

## https

Without keep-alive connections:

# time ./upload RHEL7_1-001.raw https://host:54322/images/test

real    1m2.492s
user    0m16.232s
sys     0m5.619s

With keep-alive connections:

# time -p ./upload RHEL7_1-001.raw https://host:54322/images/test

real 29.40
user 7.24
sys 1.58

## unix socket

Without keep-alive connections:

# time ./upload RHEL7_1-001.raw https://host:54322/images/test

real    0m29.474s
user    0m4.796s
sys     0m2.280s

With keep-alive connections:

# time -p ./upload RHEL7_1-001.raw https://host:54322/images/test

real 25.82
user 4.40
sys 1.65

## Profiles with keep alive connections

# profile-stats -c small-requests/upload-keep-alive-https-cpu.prof
Sun Jun 17 21:18:26 2018    small-requests/upload-keep-alive-https-cpu.prof

9542164 function calls (9557434 primitive calls) in 27.672 seconds

Ordered by: internal time
List reduced from 378 to 20 due to restriction <20>

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
105279  1.621  0.000  1.621  0.000  __builtin__:0(<method 'read' of '_ssl._SSLSocket' objects>)
165283  0.891  0.000  0.891  0.000  __builtin__:0(<method 'write' of 'cStringIO.StringO' objects>)
5139  0.735  0.000  0.735  0.000  __builtin__:0(<method 'write' of 'mmap.mmap' objects>)
10477  0.712  0.000  0.712  0.000  __builtin__:0(<method 'write' of '_io.FileIO' objects>)
30509  0.569  0.000  1.183  0.000  __init__.py:242(LogRecord.__init__)
66103  0.564  0.000  1.214  0.000  socket.py:406(_fileobject.readline)
4133  0.543  0.000  0.543  0.000  __builtin__:0(<method 'getvalue' of 'cStringIO.StringO' objects>)
40677  0.493  0.000  0.493  0.000  __builtin__:0(<method 'write' of '_ssl._SSLSocket' objects>)
61018  0.461  0.000  2.475  0.000  __init__.py:451(Formatter.format)
61018  0.444  0.000  0.444  0.000  time:0(localtime)
10170  0.420  0.000  1.567  0.000  rfc822.py:122(Message.readheaders)
10223  0.396  0.000  3.858  0.000  socket.py:336(_fileobject.read)
10167  0.368  0.000  0.368  0.000  __builtin__:0(<method 'close' of 'mmap.mmap' objects>)
61018  0.362  0.000  0.537  0.000  threading.py:186(_RLock.release)
30509  0.354  0.000  0.354  0.000  __builtin__:0(<method 'flush' of 'file' objects>)
5084  0.338  0.000  5.938  0.001  directio.py:168(Receive._run)
61018  0.330  0.000  0.941  0.000  __init__.py:311(LogRecord.getMessage)
61018  0.304  0.000  0.920  0.000  __init__.py:404(Formatter.formatTime)
218635  0.299  0.000  0.420  0.000  descriptors.py:47(fget)
473237  0.276  0.000  0.276  0.000  __builtin__:0(<method 'get' of 'dict' objects>)

We can see that the SSL handshake is gone (looking at the bottom of the
profile we can find only 2 calls).
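The cumtime profile below shows handle() looping over handle_one_request()
(wsgi.py:48). Roughly, the change works along these lines — a sketch, not
the actual patch; error handling and response framing details are omitted:

    from wsgiref import simple_server

    class WSGIRequestHandler(simple_server.WSGIRequestHandler):

        protocol_version = "HTTP/1.1"

        def handle(self):
            # Loop like BaseHTTPRequestHandler.handle() instead of
            # serving a single request and closing the connection.
            self.close_connection = 1
            self.handle_one_request()
            while not self.close_connection:
                self.handle_one_request()

        def handle_one_request(self):
            # Essentially the body of wsgiref's one-shot handle().
            self.raw_requestline = self.rfile.readline(65537)
            if not self.raw_requestline:
                self.close_connection = 1
                return
            if not self.parse_request():
                return  # parse_request() already sent an error response
            handler = simple_server.ServerHandler(
                self.rfile, self.wfile, self.get_stderr(),
                self.get_environ())
            handler.request_handler = self
            handler.run(self.server.get_app())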
$ profile-stats -c -s cumtime small-requests/upload-keep-alive-https-cpu.prof
Sun Jun 17 21:18:26 2018    small-requests/upload-keep-alive-https-cpu.prof

9542164 function calls (9557434 primitive calls) in 27.672 seconds

Ordered by: cumulative time
List reduced from 378 to 20 due to restriction <20>

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
2/3  0.000  0.000  27.658  9.219  threading.py:754(Thread.run)
2/3  0.000  0.000  27.658  9.219  SocketServer.py:586(WSGIServer.process_request_thread)
2/3  0.000  0.000  27.658  9.219  SocketServer.py:332(WSGIServer.finish_request)
2/3  0.000  0.000  27.658  9.219  SocketServer.py:643(WSGIRequestHandler.__init__)
2/3  0.028  0.014  27.657  9.219  wsgi.py:97(WSGIRequestHandler.handle)
10172/10173  0.150  0.000  27.629  0.003  wsgi.py:48(UnixWSGIRequestHandler.handle_one_request)
10169/10170  0.062  0.000  24.212  0.002  handlers.py:76(ServerHandler.run)
10169/10170  0.171  0.000  18.208  0.002  web.py:92(Application.__call__)
10169/10170  0.166  0.000  14.182  0.001  web.py:111(Application.dispatch)
5084  0.126  0.000  8.651  0.002  server.py:229(Images.put)
30509  0.179  0.000  7.489  0.000  __init__.py:1139(Logger.info)
10168  0.043  0.000  7.466  0.001  tickets.py:92(Ticket.run)
10168  0.048  0.000  7.300  0.001  directio.py:76(Receive.run)
30509  0.143  0.000  7.216  0.000  __init__.py:1249(Logger._log)
5084  0.338  0.000  5.938  0.001  directio.py:168(Receive._run)
10170  0.069  0.000  5.726  0.001  handlers.py:117(ServerHandler.finish_response)
30509  0.092  0.000  5.536  0.000  __init__.py:1270(Logger.handle)
30509  0.109  0.000  5.410  0.000  __init__.py:1302(Logger.callHandlers)
30509  0.148  0.000  5.301  0.000  __init__.py:736(RotatingFileHandler.handle)
5139  0.124  0.000  5.095  0.001  directio.py:198(Receive._receive_chunk)

We can see that the next target for optimization is logging. Calling
log.info() took 7.4 seconds out of 27.6 seconds (26.8%).
# profile-stats -c small-requests/upload-keep-alive-unix-cpu.prof
Sun Jun 17 10:23:40 2018    small-requests/upload-keep-alive-unix-cpu.prof

8671042 function calls (8686312 primitive calls) in 22.018 seconds

Ordered by: internal time
List reduced from 378 to 20 due to restriction <20>

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
5139  0.725  0.000  0.725  0.000  __builtin__:0(<method 'write' of 'mmap.mmap' objects>)
10477  0.687  0.000  0.687  0.000  __builtin__:0(<method 'write' of '_io.FileIO' objects>)
30509  0.556  0.000  1.135  0.000  __init__.py:242(LogRecord.__init__)
66104  0.512  0.000  0.964  0.000  socket.py:406(_fileobject.readline)
61018  0.442  0.000  2.368  0.000  __init__.py:451(Formatter.format)
61018  0.410  0.000  0.410  0.000  time:0(localtime)
10170  0.369  0.000  1.397  0.000  rfc822.py:122(Message.readheaders)
10167  0.348  0.000  0.348  0.000  __builtin__:0(<method 'close' of 'mmap.mmap' objects>)
82469  0.331  0.000  0.331  0.000  __builtin__:0(<method 'write' of 'cStringIO.StringO' objects>)
61018  0.322  0.000  0.925  0.000  __init__.py:311(LogRecord.getMessage)
24014  0.298  0.000  0.298  0.000  __builtin__:0(<method 'recv' of '_socket.socket' objects>)
218635  0.285  0.000  0.398  0.000  descriptors.py:47(fget)
61018  0.281  0.000  0.855  0.000  __init__.py:404(Formatter.formatTime)
10170  0.266  0.000  0.529  0.000  simple_server.py:73(WSGIRequestHandler.get_environ)
473237  0.257  0.000  0.257  0.000  __builtin__:0(<method 'get' of 'dict' objects>)
50860  0.241  0.000  0.451  0.000  socket.py:292(_fileobject.flush)
10167  0.240  0.000  0.240  0.000  directio.py:378(aligned_buffer)
5084  0.235  0.000  2.754  0.001  directio.py:168(Receive._run)
30509  0.232  0.000  0.232  0.000  __builtin__:0(<method 'flush' of 'file' objects>)
61018  0.217  0.000  0.317  0.000  threading.py:146(_RLock.acquire)

# profile-stats -c -s cumtime small-requests/upload-keep-alive-unix-cpu.prof
Sun Jun 17 10:23:40 2018    small-requests/upload-keep-alive-unix-cpu.prof

8671042 function calls (8686312 primitive calls) in 22.018 seconds

Ordered by: cumulative time
List reduced from 378 to 20 due to restriction <20>

ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
2/3  0.000  0.000  22.005  7.335  threading.py:754(Thread.run)
2/3  0.000  0.000  22.005  7.335  SocketServer.py:586(WSGIServer.process_request_thread)
2/3  0.000  0.000  22.005  7.335  SocketServer.py:332(WSGIServer.finish_request)
2/3  0.000  0.000  22.005  7.335  SocketServer.py:643(WSGIRequestHandler.__init__)
2/3  0.029  0.015  22.004  7.335  wsgi.py:97(WSGIRequestHandler.handle)
10172/10173  0.151  0.000  21.975  0.002  wsgi.py:48(UnixWSGIRequestHandler.handle_one_request)
10169/10170  0.062  0.000  18.968  0.002  handlers.py:76(ServerHandler.run)
10169/10170  0.159  0.000  14.121  0.001  web.py:92(Application.__call__)
10169/10170  0.159  0.000  10.356  0.001  web.py:111(Application.dispatch)
30509  0.138  0.000  6.665  0.000  __init__.py:1139(Logger.info)
30509  0.128  0.000  6.437  0.000  __init__.py:1249(Logger._log)
5084  0.116  0.000  5.157  0.001  server.py:229(Images.put)
30509  0.086  0.000  4.839  0.000  __init__.py:1270(Logger.handle)
30509  0.101  0.000  4.721  0.000  __init__.py:1302(Logger.callHandlers)
30509  0.137  0.000  4.619  0.000  __init__.py:736(RotatingFileHandler.handle)
10170  0.068  0.000  4.572  0.000  handlers.py:117(ServerHandler.finish_response)
10168  0.041  0.000  4.249  0.000  tickets.py:92(Ticket.run)
5084  0.024  0.000  4.163  0.001  server.py:306(Images.patch)
10168  0.043  0.000  4.092  0.000  directio.py:76(Receive.run)
30509  0.115  0.000  4.013  0.000  handlers.py:68(RotatingFileHandler.emit)

Again, logging took 6.6 seconds out of 22.0 seconds (30%).
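Since these INFO records are formatted and written to the rotating log for
every one of the ~10,000 requests, the usual mitigation is demoting
per-request records to DEBUG, or guarding them with a level check. A
generic sketch (the function and message are illustrative, not actual
imageio code):

    import logging

    log = logging.getLogger("images")

    def write_chunk(size, offset):
        # Demoting the per-request record from INFO to DEBUG removes
        # formatting and handler work from the hot path; with the level
        # check, a disabled record costs only a cheap comparison.
        if log.isEnabledFor(logging.DEBUG):
            log.debug("Writing %d bytes at offset %d", size, offset)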
Created attachment 1452430 [details] Logs and profiles
virt-v2v now supports keep-alive connections:
https://github.com/libguestfs/libguestfs/commit/f4e0a8342dbeb2c779c76e1807a37b24a0c96feb
ovirt-imageio 1.4.2 will support keep-alive connections for direct
uploads. Supporting proxy uploads requires more work, since the proxy
duplicates the daemon web server code.

There is no change in behavior except improved performance.

The best way to test this is to use the imageio example upload script:
https://github.com/oVirt/ovirt-imageio/blob/master/examples/upload
with imageio 1.4.1 (does not support keep-alive) and imageio 1.4.2.

This script supports both unix socket and HTTPS, but it will use the unix
socket automatically. To test HTTPS, disable unix socket support by
changing:

    if "unix_socket" in server_options:
        con = UnixHTTPConnection(server_options["unix_socket"])

To:

    if False and "unix_socket" in server_options:
        con = UnixHTTPConnection(server_options["unix_socket"])

(We may add a flag to make this easier.)

To test:

1. Create a sparse image using virt-builder:

   virt-builder fedora-27

2. Create a 6g volume in a storage domain in engine.

3. If the volume is on FC/iSCSI storage, find the volume and activate it:

   lvs -o vg_name,lv_name,tags | grep disk-uuid
   lvchange -ay vg_name/lv_name

4. Create this ticket:

   $ cat ticket.json
   {
       "uuid": "test",
       "size": 6442450944,
       "url": "file:///dev/vg_name/lv_name",
       "timeout": 3000,
       "ops": ["write"]
   }

   If the volume is on file storage, use the path to the volume in
   /rhev/data-center/...

5. Add the ticket to the imageio daemon:

   $ curl --unix-socket /run/vdsm/ovirt-imageio-daemon.sock \
       -X PUT \
       --upload-file ticket.json \
       http://localhost/tickets/test

6. Upload the image (see the client-side sketch after this comment):

   time examples/upload fedora-27.img https://host:54322/images/test

Expected behavior:
- sparse upload using HTTPS should be about 2X faster compared with 1.4.1.
- sparse upload using a unix socket should be about 1.15X faster compared
  with 1.4.1.

The same improvement is expected with virt-v2v -o rhv-upload import, but
it is hard to change virt-v2v to use HTTPS instead of the unix socket, so
you are likely to test only the unix socket, and will not see the
improvement in HTTPS upload.

Uploads using the oVirt SDK examples will not be improved, since they do
not use multiple requests or support sparse uploads. We will update the
examples to use sparse uploads in the future.
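For reference, this is roughly the pattern the example upload script
relies on — a sketch, not the actual script; the segment list and header
details are illustrative:

    import json
    from httplib import HTTPSConnection

    # Toy segment list: (start, length, data); data is None for a hole.
    segments = [(0, 54784, "x" * 54784), (54784, 10752, None)]

    # One connection, opened once and reused for every request. Before
    # 1.4.2 the server closed the connection after each request, forcing
    # a new TLS handshake per segment.
    con = HTTPSConnection("host", 54322)
    for start, length, data in segments:
        if data is not None:
            # Data segment: upload the bytes.
            con.request("PUT", "/images/test", body=data,
                        headers={"Content-Range": "bytes %d-%d/*"
                                 % (start, start + length - 1)})
        else:
            # Hole: ask the server to zero the range instead of sending
            # zeros on the wire.
            con.request("PATCH", "/images/test",
                        body=json.dumps({"op": "zero",
                                         "offset": start,
                                         "size": length}),
                        headers={"Content-Type": "application/json"})
        res = con.getresponse()
        res.read()  # drain the response so the connection can be reused
    con.close()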
This bug is referenced in the ovirt-imageio release to be shipped in oVirt
4.2.5. Can you please check the Target Milestone for this bug? I also see
that the single patch attached to this BZ has been merged; can you check
if this bug should be in MODIFIED state?
(In reply to Sandro Bonazzola from comment #5)

Thanks Sandro, I changed the Target Milestone to 4.2.5 and moved the bug
to MODIFIED.
This feature is included in ovirt-imageio-*-1.4.2, which is part of 4.2.5.
Verified. Checked the daemon log during a performance run: the same
threads stay alive through the whole upload, with no connection closing
and reopening:

2018-07-30 14:19:32,834 INFO (Thread-10) [web] START: [10.35.68.26] OPTIONS /images/1ce0f599-0458-48e5-8529-4ca009e80962
2018-07-30 14:19:32,835 INFO (Thread-10) [web] FINISH [10.35.68.26] OPTIONS /images/1ce0f599-0458-48e5-8529-4ca009e80962: [200] 74 [request=0.000693]
2018-07-30 14:19:32,873 INFO (Thread-11) [web] START: [^@] PUT /images/1ce0f599-0458-48e5-8529-4ca009e80962
2018-07-30 14:19:32,874 INFO (Thread-11) [images] Writing 54784 bytes at offset 0 flush False to /rhev/data-center/mnt/blockSD/bee83d01-aab7-4581-921f-d0145235c757/images/d19e887a-182d-4723-912c-2dafa588ab77/27d58f77-5fae-4262-9d04-dc70adea51c2 for ticket 1ce0f599-0458-48e5-8529-4ca009e80962
2018-07-30 14:19:32,878 INFO (Thread-11) [web] FINISH [^@] PUT /images/1ce0f599-0458-48e5-8529-4ca009e80962: [200] 0 [request=0.004747, operation=0.004061, read=0.000096, write=0.003067]
2018-07-30 14:19:32,879 INFO (Thread-11) [web] START: [^@] PATCH /images/1ce0f599-0458-48e5-8529-4ca009e80962
2018-07-30 14:19:32,879 INFO (Thread-11) [images] Zeroing 10752 bytes at offset 54784 flush False to /rhev/data-center/mnt/blockSD/bee83d01-aab7-4581-921f-d0145235c757/images/d19e887a-182d-4723-912c-2dafa588ab77/27d58f77-5fae-4262-9d04-dc70adea51c2 for ticket 1ce0f599-0458-48e5-8529-4ca009e80962
This bugzilla is included in the oVirt 4.2.5 release, published on July
30th 2018. Since the problem described in this bug report should be
resolved in the oVirt 4.2.5 release, it has been closed with a resolution
of CURRENT RELEASE. If the solution does not work for you, please open a
new bug report.