Bug 665299 - load vhost-net by default
Summary: load vhost-net by default
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: rc
Target Release: 6.1
Assignee: Michael S. Tsirkin
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: VhostToggle
Blocks: Rhel6KvmTier1
 
Reported: 2010-12-23 07:16 UTC by Michael S. Tsirkin
Modified: 2018-11-27 19:22 UTC
CC: 8 users

Fixed In Version: qemu-kvm-0.12.1.2-2.136.el6
Doc Type: Bug Fix
Doc Text:
Cause: the vhost-net kernel module was not being loaded automatically, so vhost-net was not being used by default. Consequence: lower performance than is possible when using vhost-net. Fix: removed vhost-net from /etc/modprobe.d/blacklist-kvm.conf. Result: vhost-net is used by default by qemu-kvm.
Clone Of:
Environment:
Last Closed: 2011-05-19 11:34:28 UTC
Target Upstream Version:
Embargoed:


Attachments
rhel6.1 vhost-net vs virtio-net (57.72 KB, text/html), 2011-04-08 05:26 UTC, Quan Wenli
rhel6.1 vhost-net vs virtio-net megabyte/CPU sheet (60.89 KB, text/html), 2011-04-19 07:28 UTC, Quan Wenli


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2011:0534 0 normal SHIPPED_LIVE Important: qemu-kvm security, bug fix, and enhancement update 2011-05-19 11:20:36 UTC

Description Michael S. Tsirkin 2010-12-23 07:16:48 UTC
vhost-net is currently blacklisted. Whitelist and load vhost-net by default with kvm so it gets used.

Must do so after performance evaluation.

Comment 2 Michael S. Tsirkin 2010-12-23 07:21:02 UTC
Note: this can only be safely done after libvirt gains the ability
to control vhost on/off status as requested by bz 643050:
without this the change would be too risky.

Set the dependency appropriately.

Comment 8 Quan Wenli 2011-04-08 05:26:29 UTC
Created attachment 490701 [details]
rhel6.1 vhost-net vs virtio-net

Attached are the test results of rhel6.1 vhost-net vs virtio-net.
There is an obvious performance degradation with UDP (R) and TCP (S) in the guest <-> host scenario compared with rhel6.1 virtio-net.

Comment 15 Quan Wenli 2011-04-19 07:28:11 UTC
Created attachment 493114 [details]
rhel6.1 vhost-net vs virtio-net megabyte/ cpu sheet

1. Tested by rate-limiting the stream to stay within 10% and 30% packet drops with virtio-net and vhost-net respectively; there is no regression.

Lost packet rate = 10% 

vhost-net:/root/tool/netperf-2.4.5/src/netperf -w 1 -b 78 -H 192.168.0.13 -l 60 -t UDP_STREAM -- -m 1460 
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.13 (192.168.0.13) port 0 AF_INET : interval
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec
124928    1460   60.00      468000      0      91.10
124928           60.00      420001             81.76

virtio-net: /root/tool/netperf-2.4.5/src/netperf -w 1 -b 78 -H 192.168.0.13 -l 60 -t UDP_STREAM -- -m 1460
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.13 (192.168.0.13) port 0 AF_INET : interval
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec
124928    1460   60.00      468000      0      91.10
124928           60.00      420003             81.76

Lost packet rate = 30% 

vhost-net:/root/tool/netperf-2.4.5/src/netperf -w 1 -b 100 -H 192.168.0.13 -l 60 -t UDP_STREAM -- -m 1460 
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.13 (192.168.0.13) port 0 AF_INET : interval
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

124928    1460   60.00      600000      0     116.80
124928           60.00      420122             81.78


virtio-net:/root/tool/netperf-2.4.5/src/netperf -w 1 -b 100 -H 192.168.0.13 -l 60 -t UDP_STREAM -- -m 1460  
UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.13 (192.168.0.13) port 0 AF_INET : interval
Socket  Message  Elapsed      Messages                
Size    Size     Time         Okay Errors   Throughput
bytes   bytes    secs            #      #   10^6bits/sec

124928    1460   60.00      600000      0     116.80
124928           60.00      420029             81.77

2. Attached the megabyte/CPU sheet based on comment #8's sheet. Except for the following regressions, vhost-net's performance looks good:
     scenario                 message size       protocol     drop%
   guest <-> ext guest        > MTU              UDP R        -35.23% ~ -29.5%
   guest <-> ext guest        512                TCP S        -8.94%
   guest <-> ext host         9000, 32768        UDP R        -15%, -51.71%
   guest <-> ext host         256 ~ 2048         TCP S        -14.01% ~ -9.94%
   guest <-> host             all                UDP R        -77.87% ~ -5.73%
   guest <-> host             256 ~ 10834        TCP S        -31.24% ~ -6.15%
   guest <-> host             32, 512 ~ 2048     TCP R        -5.58%, 44.40% ~ -13.48%
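The 10% and 30% loss figures quoted above can be re-derived from the netperf send/receive message counts in this comment (a quick sketch; loss = (sent - received) / sent, with counts taken from the UDP_STREAM runs shown):

```shell
# Recompute the UDP loss rates from the message counts in the runs above.
awk 'BEGIN {
  printf "b=78 run:  %.1f%% lost\n", (468000 - 420001) / 468000 * 100   # target ~10%
  printf "b=100 run: %.1f%% lost\n", (600000 - 420122) / 600000 * 100   # target ~30%
}'
```

This prints roughly 10.3% and 30.0%, matching the intended 10% and 30% drop targets of the two runs.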

Comment 16 Quan Wenli 2011-04-21 08:36:56 UTC
(In reply to comment #15)
Hi MST,

I just filed bug 698541 to track UDP (R) performance optimization from guest to host.
For the other performance issues noted, what are your comments? Since vhost-net has good performance in most of the scenarios, can I verify this as passed, or something else?

Comment 17 Michael S. Tsirkin 2011-04-26 10:11:13 UTC
We added a comment on UDP to the virtualization guide.
I think we can close this.

Comment 18 Quan Wenli 2011-04-26 10:35:28 UTC
Based on comments #15, #16, and #17, changing bug status to verified.

Comment 19 Eduardo Habkost 2011-05-05 14:34:21 UTC
    Technical note added. If any revisions are required, please edit the "Technical Notes" field
    accordingly. All revisions will be proofread by the Engineering Content Services team.
    
    New Contents:
Cause: vhost-net kernel module was not being loaded automatically, so vhost-net was not being used by default.

Consequence: lower performance than what is possible when using vhost-net.

Fix: removed vhost-net from /etc/modprobe.d/blacklist-kvm.conf

Result: vhost-net is used by default by qemu-kvm.
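The fix described above can be sketched against a scratch copy of the blacklist file (the real path from this bug is /etc/modprobe.d/blacklist-kvm.conf; the file contents here are illustrative):

```shell
# Minimal sketch of the fix, applied to a scratch copy of the blacklist file.
# Real path per this bug: /etc/modprobe.d/blacklist-kvm.conf.
conf=$(mktemp)
printf 'blacklist vhost-net\n' > "$conf"

# The fix: delete the vhost-net line so the module can be loaded on demand.
sed -i '/^blacklist vhost-net$/d' "$conf"

if grep -q 'vhost-net' "$conf"; then
  echo "vhost-net still blacklisted"
else
  echo "vhost-net unblacklisted"
fi
rm -f "$conf"
```

On a system with the fixed qemu-kvm package, `modprobe vhost-net` followed by `lsmod | grep vhost_net` should show the module loaded.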

Comment 20 errata-xmlrpc 2011-05-19 11:34:28 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2011-0534.html

Comment 21 errata-xmlrpc 2011-05-19 13:00:44 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2011-0534.html

