Bug 1026137
| Field | Value | Field | Value |
|---|---|---|---|
| Summary | Volume download speed is slow | | |
| Product | Red Hat Enterprise Linux 6 | Reporter | Luwen Su <lsu> |
| Component | libvirt | Assignee | Martin Kletzander <mkletzan> |
| Status | CLOSED NEXTRELEASE | QA Contact | Virtualization Bugs <virt-bugs> |
| Severity | medium | Docs Contact | |
| Priority | medium | | |
| Version | 6.6 | CC | acathrow, berrange, chhu, cross, cwei, dallan, david.pravec, dyuan, gsun, jsuchane, jyang, mjenner, mzhan, vbudikov, yanyang, ydu, zhwang |
| Target Milestone | rc | Keywords | Reopened |
| Target Release | --- | | |
| Hardware | Unspecified | | |
| OS | Unspecified | | |
| Whiteboard | | | |
| Fixed In Version | | Doc Type | Bug Fix |
| Doc Text | | Story Points | --- |
| Clone Of | 1026136 | Environment | |
| Last Closed | 2015-06-10 08:47:03 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | --- | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
| Bug Depends On | 1026136 | | |
| Bug Blocks | | | |
| Attachments | | | |
Description (Luwen Su, 2013-11-04 02:43:53 UTC)
The speed is also slow on RHEL 6, with libvirt-0.10.2-29.el6, qemu-kvm-rhev-0.12.1.2-2.415.el6 and kernel-2.6.32-429.el6, so the bug was cloned here.

@mkletzan: The original bug 1026136 was closed as NOTABUG, so can we close this bug as well?

(In reply to yangyang from comment #3) That's true, thanks. Closing as such.

I think we have exactly the same problem described in this issue and looked into the libvirt source. We can reproduce this with the CentOS 6 libvirt as well as with git HEAD.

vol-download uses two functions from src/rpc/virnetclientstream.c: virNetClientStreamQueuePacket() and virNetClientStreamRecvPacket(). QueuePacket() (I shorten the names a bit) is the source of the data (the VM image in the case of vol-download) and RecvPacket() is the sink. On a fast system, QueuePacket() can produce data faster than RecvPacket() can consume it, and RecvPacket() starts to do huge amounts of memmove(). This makes vol-download really slow. For example, in our case vol-download starts at ~80 MB/s (as seen from iotop) and after a while drops below 1 MB/s because the buffer starts to fill up. Once this happens, virsh CPU utilisation is 100%.

The source of RecvPacket():

```c
358 int virNetClientStreamRecvPacket(virNetClientStreamPtr st,
359                                  virNetClientPtr client,
360                                  char *data,
361                                  size_t nbytes,
362                                  bool nonblock)
363 {
...
399     if (st->incomingOffset) {
400         int want = st->incomingOffset;
401         if (want > nbytes)
402             want = nbytes;
403         memcpy(data, st->incoming, want);
404         if (want < st->incomingOffset) {
405             memmove(st->incoming, st->incoming + want, st->incomingOffset - want);
406             st->incomingOffset -= want;
407         } else {
408             VIR_FREE(st->incoming);
409             st->incomingOffset = st->incomingLength = 0;
410         }
411         rv = want;
412     } else {
413         rv = 0;
414     }
```

st->incomingOffset is usually (always?) bigger than nbytes (comparison on line 401). nbytes is defined as 64 kB (src/libvirt-stream.c: virStreamRecvAll()).
So want is the same as nbytes (64 kB) and thus smaller than st->incomingOffset (comparison on line 404), and execution goes into lines 405-406. On line 405, memmove() removes the already-memcpy()ed data from st->incoming. Consider a case where st->incoming contains, say, 512 MB of data: every 64 kB handled by RecvPacket() then causes a memmove() of 512 MB - 64 kB. Half a gigabyte of backlog requires 8192 memmove()s to drain, hence the poor performance. Meanwhile, QueuePacket() just keeps pushing more data into st->incoming. Increasing nbytes from 64 kB to 1 MB made vol-download perform better in our environment, but it does not remove the possibility of running into the same issue in the future. I hope this helps :)

Yours, Ossi Herrala, Codenomicon Oy

Forgot to say: please consider reopening this issue.

Thanks for that, I think your analysis is sound. I'm re-opening this bug, so we can consider what, if anything, we can do to improve this situation in general. For example, rather than storing one giant st->incoming array, it might be better if we used an iovec: when reading more data off the wire, we would just add entries to the iovec. virNetClientStreamRecvPacket() could then read data from the iovecs, and if it did need to memmove(), it would only be moving a small amount of data within one iovec; most of the others would be unchanged.

Created attachment 1035176 [details]: Vector I/O version. Use an I/O vector (iovec) instead of one huge memory buffer, as suggested in https://bugzilla.redhat.com/show_bug.cgi?id=1026137#c7. This avoids memmove() on big buffers, and performance does not degrade if the source (virNetClientStreamQueuePacket()) is faster than the sink (virNetClientStreamRecvPacket()).

Thank you for posting the patch. Would you mind sending it to the upstream list in order to speed up its inclusion in libvirt?

Patch sent to the list: https://www.redhat.com/archives/libvir-list/2015-June/msg00284.html

This will be handled in the RHEL 7 releases, bug 1026136.
As RHEL 6 is at the end of its Production 1 phase, we would need a valid business justification. Thank you.