Bug 665299
| Summary: | load vhost-net by default | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Michael S. Tsirkin <mst> |
| Component: | qemu-kvm | Assignee: | Michael S. Tsirkin <mst> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | low | | |
| Version: | 6.1 | CC: | ehabkost, juzhang, lihuang, mkenneth, mwagner, tburke, virt-maint, wquan |
| Target Milestone: | rc | | |
| Target Release: | 6.1 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | qemu-kvm-0.12.1.2-2.136.el6 | Doc Type: | Bug Fix |
| Doc Text: | Cause: vhost-net kernel module was not being loaded automatically, so vhost-net was not being used by default. Consequence: lower performance than what is possible when using vhost-net. Fix: removed vhost-net from /etc/modprobe.d/blacklist-kvm.conf. Result: vhost-net is used by default by qemu-kvm. | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2011-05-19 11:34:28 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 643050 | | |
| Bug Blocks: | 580951 | | |
| Attachments: | | | |
Description
Michael S. Tsirkin
2010-12-23 07:16:48 UTC
Note: this can only be done safely after libvirt gains the ability to control vhost on/off status, as requested by bz 643050; without that, the change would be too risky. Set the dependency appropriately.

Created attachment 490701 [details]
rhel6.1 vhost-net vs virtio-net

Attaching the test results of rhel6.1 vhost-net vs virtio-net.
There is an obvious performance degradation with UDP (R) and TCP (S) in the guest <-> host scenario compared with rhel6.1 virtio-net.
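For reference, the vhost on/off switch discussed here corresponds roughly to the qemu-kvm command line shown below. This is a minimal sketch, not the exact invocation used in these tests: the image path, memory size, and tap options are placeholders, and libvirt-managed guests would normally generate this line themselves.

```
# Illustrative only: compare the two virtio-net backends by toggling vhost
# on the tap netdev. Paths and sizes are placeholders.

# virtio-net served entirely in userspace qemu (vhost explicitly off)
/usr/libexec/qemu-kvm -m 1024 -drive file=/path/to/guest.img,if=virtio \
    -netdev tap,id=hostnet0,vhost=off \
    -device virtio-net-pci,netdev=hostnet0,id=net0

# virtio-net accelerated by the vhost-net kernel module (vhost explicitly on)
/usr/libexec/qemu-kvm -m 1024 -drive file=/path/to/guest.img,if=virtio \
    -netdev tap,id=hostnet0,vhost=on \
    -device virtio-net-pci,netdev=hostnet0,id=net0
```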
Created attachment 493114 [details]
rhel6.1 vhost-net vs virtio-net megabyte/cpu sheet

1. Tested by rate-limiting the stream so that packet drops stay within 10% and 30% with virtio-net and vhost-net respectively; there is no regression.

Lost packet rate = 10%

vhost-net: /root/tool/netperf-2.4.5/src/netperf -w 1 -b 78 -H 192.168.0.13 -l 60 -t UDP_STREAM -- -m 1460

UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.13 (192.168.0.13) port 0 AF_INET : interval
Socket Message Elapsed Messages
Size Size Time Okay Errors Throughput
bytes bytes secs # # 10^6bits/sec
124928 1460 60.00 468000 0 91.10
124928 60.00 420001 81.76

virtio-net: /root/tool/netperf-2.4.5/src/netperf -w 1 -b 78 -H 192.168.0.13 -l 60 -t UDP_STREAM -- -m 1460

UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.13 (192.168.0.13) port 0 AF_INET : interval
Socket Message Elapsed Messages
Size Size Time Okay Errors Throughput
bytes bytes secs # # 10^6bits/sec
124928 1460 60.00 468000 0 91.10
124928 60.00 420003 81.76

Lost packet rate = 30%

vhost-net: /root/tool/netperf-2.4.5/src/netperf -w 1 -b 100 -H 192.168.0.13 -l 60 -t UDP_STREAM -- -m 1460

UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.13 (192.168.0.13) port 0 AF_INET : interval
Socket Message Elapsed Messages
Size Size Time Okay Errors Throughput
bytes bytes secs # # 10^6bits/sec
124928 1460 60.00 600000 0 116.80
124928 60.00 420122 81.78

virtio-net: /root/tool/netperf-2.4.5/src/netperf -w 1 -b 100 -H 192.168.0.13 -l 60 -t UDP_STREAM -- -m 1460

UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.13 (192.168.0.13) port 0 AF_INET : interval
Socket Message Elapsed Messages
Size Size Time Okay Errors Throughput
bytes bytes secs # # 10^6bits/sec
124928 1460 60.00 600000 0 116.80
124928 60.00 420029 81.77

2. Attaching the megabyte/cpu sheet, based on the sheet from comment #8. Apart from the following regressions, vhost-net's performance looks good in the other scenarios:

| Scenario | Message size | Protocol | Drop % |
|---|---|---|---|
| guest <-> ext guest | > MTU | UDP R | -35.23% ~ -29.5% |
| guest <-> ext guest | 512 | TCP S | -8.94% |
| guest <-> ext host | 9000, 32768 | UDP R | -15%, -51.71% |
| guest <-> ext host | 256 ~ 2048 | TCP S | -14.01% ~ -9.94% |
| guest <-> host | all | UDP R | -77.87% ~ -5.73% |
| guest <-> host | 256 ~ 10834 | TCP S | -31.24% ~ -6.15% |
| guest <-> host | 32, 512 ~ 2048 | TCP R | -5.58%, 44.40% ~ -13.48% |

(In reply to comment #15)
Hi MST, I just filed bug 698541 to track UDP (R) performance optimization from guest to host. For the other performance issues pointed out, what are your comments? Since vhost-net has good performance in most of the scenarios, can I verify this bug as passed, or something else?

We added a comment on UDP to the virt guide. I think we can close this.

Based on comments #15, #16 and #17, changing bug status to VERIFIED.
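For context on the rate-limiting above: the 10% and 30% drop rates follow directly from the two "Okay" message counts in the UDP_STREAM output (messages pushed by the local sender versus messages actually received by the remote end). A small sketch of that arithmetic, using the vhost-net numbers quoted above; the loss_rate helper is purely illustrative and not part of netperf.

```
# Hypothetical helper: compute the UDP loss rate from netperf's "Okay" counts
# (sent messages on the first data line, received messages on the second).
loss_rate() {
    sent=$1; received=$2
    awk -v s="$sent" -v r="$received" 'BEGIN { printf "%.1f%%\n", (s - r) * 100 / s }'
}

loss_rate 468000 420001   # -b 78 run  -> ~10.3% dropped
loss_rate 600000 420122   # -b 100 run -> ~30.0% dropped
```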
Technical note added. If any revisions are required, please edit the "Technical Notes" field
accordingly. All revisions will be proofread by the Engineering Content Services team.
New Contents:
Cause: the vhost-net kernel module was not being loaded automatically, so vhost-net was not being used by default.
Consequence: lower performance than is possible when using vhost-net.
Fix: vhost-net was removed from /etc/modprobe.d/blacklist-kvm.conf.
Result: vhost-net is used by default by qemu-kvm.
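As a quick way to confirm the new behaviour, the following sketch (assuming a host with the fixed qemu-kvm package and a running guest that uses a virtio NIC; the exact commands are illustrative, not taken from this bug) checks that vhost-net is no longer blacklisted, that the module gets loaded, and that the running qemu-kvm process has vhost enabled on its tap backend:

```
# 1. vhost-net should no longer appear in the KVM blacklist
#    (the file may list other modules, or may not mention vhost-net at all).
grep vhost-net /etc/modprobe.d/blacklist-kvm.conf 2>/dev/null \
    || echo "vhost-net is not blacklisted"

# 2. The module should be loaded once a guest with a virtio NIC is running.
lsmod | grep vhost_net

# 3. The qemu-kvm process should carry vhost on its tap netdev; depending on
#    how the guest was started, this shows up as vhost=on or as a vhostfd
#    handed over by libvirt.
ps -ef | grep '[q]emu-kvm' | grep -oE 'vhost(fd)?=[^, ]+'
```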
An advisory has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you. http://rhn.redhat.com/errata/RHSA-2011-0534.html