Bug 633963
| Summary: | Replace virtio-net TX timer mitigation with bottom half handler | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | RHEL Program Management <pm-rhel> |
| Component: | qemu-kvm | Assignee: | Virtualization Maintenance <virt-maint> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 6.0 | CC: | akong, alex.williamson, bcao, chrisw, ddumas, ehabkost, Jes.Sorensen, jwest, llim, michen, mjenner, mkenneth, mwagner, pm-eus, syeghiay, tburke, virt-maint, wliao |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | qemu-kvm-0.12.1.2-2.113.el6_0.1 | Doc Type: | Bug Fix |
| Doc Text: | Prior to this update, virtio-net used a packet transmission algorithm that relied on a timer to delay transmission in an attempt to batch multiple packets together. However, this typically resulted in higher latency. With this update, the default algorithm has been changed to an asynchronous bottom half transmitter, improving performance. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2010-11-10 18:59:50 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 624767 | | |
| Bug Blocks: | | | |
Description
RHEL Program Management
2010-09-14 19:15:54 UTC
Compare UDP throughput of the regular userspace virtio-net between build qemu-kvm-0.12.1.2-2.113.el6_0.1.x86_64 and build qemu-kvm-0.12.1.2-2.109.el6.x86_64 using the following steps:

1. Start the guest with:

   /usr/libexec/qemu-kvm -enable-kvm -m 2048 -smp 2 -name rhel6 -uuid `uuidgen` -rtc base=utc -boot c -drive file=/var/lib/libvirt/images/rhel6.img,if=none,cache=none,id=drive-virtio-disk0,format=raw,boot=on -device virtio-blk-pci,bus=pci.0,addr=0x3,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:e9:19:a9,bus=pci.0,addr=0x7 -usb -device usb-tablet,id=input0 -vnc :1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -monitor stdio

2. With 2 vcpus, pin each vcpu thread to a host cpu:

   taskset -pc 0 $thread_id1
   taskset -pc 1 $thread_id2

3. On the guest, pin the virtio-net interrupt to the second vcpu:

   echo $i > /proc/irq/$irqcur/smp_affinity

4. Run netperf from the guest to an external host:

   # for b in 32 64 128 256 512 1024 2048 4096 8192 16834 32768 65507; do netperf -t UDP_STREAM -f m -H 192.162.0.3 -P 0 -l 10 -- -m $b; done

5. Record the average throughput.

Results with qemu-kvm-0.12.1.2-2.109.el6.x86_64:

| Send Socket Size (bytes) | Send Message Size (bytes) | Elapsed Time (secs) | Throughput (10^6 bits/sec) |
|---|---|---|---|
| 124928 | 32 | 10 | 42.00 |
| 124928 | 64 | 10 | 85.19 |
| 124928 | 128 | 10 | 167.27 |
| 124928 | 256 | 10 | 341.25 |
| 124928 | 512 | 10 | 643.83 |
| 124928 | 1024 | 10 | 931.37 |
| 124928 | 2048 | 10 | 932.34 |
| 124928 | 4096 | 10 | 952.12 |
| 124928 | 8192 | 10 | 952.23 |
| 124928 | 16834 | 10 | 955.88 |
| 124928 | 32768 | 10 | 956.31 |
| 124928 | 65507 | 10 | 968.33 |

Results with qemu-kvm-0.12.1.2-2.113.el6_0.1.x86_64:

| Send Socket Size (bytes) | Send Message Size (bytes) | Elapsed Time (secs) | Throughput (10^6 bits/sec) | Speedup (percentage) |
|---|---|---|---|---|
| 124928 | 32 | 10 | 71.71 | 70.74% |
| 124928 | 64 | 10 | 146.12 | 71.52% |
| 124928 | 128 | 10 | 292.51 | 74.87% |
| 124928 | 256 | 10 | 607.26 | 77.95% |
| 124928 | 512 | 10 | 1066.01 | 65.57% |
| 124928 | 1024 | 10 | 899.22 | -3.45% |
| 124928 | 2048 | 10 | 936.58 | 0.45% |
| 124928 | 4096 | 10 | 953.62 | 0.16% |
| 124928 | 8192 | 10 | 955.28 | 0.32% |
| 124928 | 16834 | 10 | 958.77 | 0.30% |
| 124928 | 32768 | 10 | 962.41 | 0.64% |
| 124928 | 65507 | 10 | 970.40 | 0.21% |

Summary: the bottom half transmitter improves the TX throughput in UDP_STREAM.
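For reference, the Speedup column in the second table is the relative gain of the -2.113 build over the -2.109 build at the same message size. A minimal standalone C sketch of that calculation, using the 32-byte rows from the tables above:

```c
#include <stdio.h>

int main(void)
{
    /* Throughput values taken from the 32-byte rows of the two tables above. */
    double old_mbps = 42.00;   /* qemu-kvm-0.12.1.2-2.109.el6.x86_64 */
    double new_mbps = 71.71;   /* qemu-kvm-0.12.1.2-2.113.el6_0.1.x86_64 */

    /* Relative gain of the new build over the old one, in percent. */
    double speedup = (new_mbps - old_mbps) / old_mbps * 100.0;
    printf("speedup: %.2f%%\n", speedup);   /* prints 70.74%, matching the table */
    return 0;
}
```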
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
New Contents:
Prior to this update, virtio-net used a packet transmission algorithm that relied on a timer to delay transmission in an attempt to batch multiple packets together. However, this typically resulted in higher latency. With this update, the default algorithm has been changed to an asynchronous bottom half transmitter, improving performance.
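Conceptually, the old default held queued packets until a mitigation timer expired, while the new default schedules a bottom half that flushes the queue on the next pass of the event loop without an artificial delay. The following is a minimal standalone C sketch of that difference only; it is not the actual qemu-kvm code, and all names (txq, flush_on_timer, flush_bottom_half) and the 150000 ns delay are illustrative assumptions:

```c
#include <stdio.h>

/* Illustrative transmit queue; not QEMU's real data structure. */
struct txq {
    int pending;   /* packets queued by the guest */
};

/* Timer-based mitigation: the flush only runs once the delay has expired,
 * so every burst pays the delay in added latency. */
static void flush_on_timer(struct txq *q, long long now_ns, long long deadline_ns)
{
    if (now_ns >= deadline_ns) {
        printf("timer flush: %d packets at t=%lld ns\n", q->pending, now_ns);
        q->pending = 0;
    }
}

/* Bottom-half style: the flush is scheduled immediately and runs on the next
 * event-loop pass, so batching within one pass still happens but no
 * artificial delay is added. */
static void flush_bottom_half(struct txq *q)
{
    printf("bottom-half flush: %d packets, no added delay\n", q->pending);
    q->pending = 0;
}

int main(void)
{
    struct txq q = { .pending = 3 };
    long long delay_ns = 150000;   /* illustrative mitigation delay */

    /* Old default: the burst sits in the queue until the timer fires. */
    flush_on_timer(&q, 0, delay_ns);          /* too early, nothing is sent */
    flush_on_timer(&q, delay_ns, delay_ns);   /* flushed only after the delay */

    /* New default: the same burst is flushed as soon as the bottom half runs. */
    q.pending = 3;
    flush_bottom_half(&q);
    return 0;
}
```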
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you. http://rhn.redhat.com/errata/RHBA-2010-0855.html