Bug 596114 - Huge performance regression in virtio_net bridged guests
Summary: Huge performance regression in virtio_net bridged guests
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: qemu-kvm
Version: 6.0
Hardware: All
OS: Linux
Priority: low
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Michael S. Tsirkin
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-05-26 10:56 UTC by Mark Wagner
Modified: 2013-01-09 22:37 UTC (History)
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-05-30 21:02:20 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Mark Wagner 2010-05-26 10:56:46 UTC
Description of problem:

The RHEL6 virtio-net bridged performance has a huge throughput regression from RHEL 5.5. On RHEL6, throughput does not go above 1.7 Gbit/s, while on a RHEL 5.5 host the throughput is well over 7 Gbit/s (both untuned).

Version-Release number of selected component (if applicable):
RHEL6 tree

How reproducible:

Every time

Steps to Reproduce:
1. Set up a RHEL6 system for KVM with a virtio-net bridge
2. Run netperf from the external box -> guest over a 10 Gbit link (note: guest <-> host will also show this issue)
3. Run netperf from the guest -> external box
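The reproduction steps above can be sketched as shell commands. The IP addresses and the 60-second run length are placeholders for illustration, not values from the original report:

```sh
# On the receiving side, start the netperf server:
netserver

# External box -> guest: run on the external box, with netserver in the guest
netperf -H 192.168.1.10 -t TCP_STREAM -l 60

# Guest -> external box: run inside the guest, with netserver on the external box
netperf -H 192.168.1.20 -t TCP_STREAM -l 60
```

netperf's TCP_STREAM test sends data from the netperf client toward the netserver host, so the direction of each transfer follows where each program runs.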

  
Actual results:
Guest -> External Rate (Mbit/s)
565.9
947.15
1132.56
1272.2
1752.98
1838.04
1692.69
1841.42
1649.98
1857.2
1790.28
1618.05

External -> Guest Rate (Mbit/s)
637.18
1037.32
1596.08
1707.34
1746.41
1695.59
1746.88
1619.26
1764.72
1584.79
1609.06
1642.17


Expected results:   (RHEL5.5 Data, same RHEL6 guest, same HW)

RHEL 5.5 Guest -> External (Mbit/s)
546.11
1070.9
1764.09
3543.35
9164.43
9242.26
6994.58
5335.49
6947.82
8681.91
7660.41
7747.86


RHEL 5.5 External -> Guest (Mbit/s)
768.24
1356.97
2201.17
3315.93
4147.38
4046.24
6611.18
6767.22
6183.71
4821.95
6607.25
7319.15




Additional info:

Comment 2 Dor Laor 2010-05-26 12:14:04 UTC
We neglected virtio-net userspace in rhel6 since the default is vhost.
Practically there is no need to use virtio-net at all.

So, I don't think we should diverge from upstream for this. It is nice to fix so we'll have a userspace solution too. Let's do it on upstream first.

Until that happens, just ignore virtio-net on rhel6 and use vhost only.

Comment 3 Mark Wagner 2010-05-26 14:51:54 UTC
So, I'm not sure that I agree with Dor's assessment of the need to continue to neglect this.

This is a 75% performance regression from the previous release. 

This performance makes this *non-competitive* 
    in fact we got exceptions at the end of the RHEL5.4 release to add changes that brought similar performance close to the levels we see now.  

If we are no longer going to support this functionality then drop it from this release. Otherwise it needs to get corrected. 

Other factors to consider:
1) While vhost_net is planned to be the default in RHEL6, I am assuming that will only apply to new guests, not existing configurations. So to not fix this implies the need for tools to update existing RHEL5-based configurations to use vhost-net. Are there plans to do that?

1a) Any solution to 1 needs to include RHEV-M, libvirt, and command line usage.

Comment 4 Dor Laor 2010-05-27 13:43:43 UTC
(In reply to comment #3)
> So, I'm not sure that I agree with Dors assessment of the need to continue to
> neglect this.
> 
> This is a 75% performance regression from the previous release. 
> 
> This performance makes this *non-competitive* 
>     in fact we got exceptions at the end of the RHEL5.4 release to add changes
> that brought similar performance close to the levels we see now.  
> 
> If we are no longer going to support this functionality then drop it from this
> release. Otherwise it needs to get corrected. 
> 
> Other factors to consider:
> 1) while vhost_net is planned to be the default in RHEL6, I am assuming that
> will only apply to new guests, not existing configurations. So to not fix this
> implies the need for tools to update existing RHEL5 based configurations to use
> vhost net. Are there plans to do that ? 


vhost-net and virtio-net are backends. It is 100% transparent for the guest. So we can easily switch to using vhost without *any* guest change.
This is the plan.

> 
> 1a) Any solution to 1 needs to include RHEV-M, libvirt, and command line usage.    

Use vhost and only vhost.
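The backend switch Dor describes is host-side only; the guest sees the same virtio-net PCI device either way. As an illustration (typical qemu-kvm syntax of that era, not taken from this bug), the difference is a single backend option:

```sh
# Userspace virtio-net backend (the slow path measured in this bug)
qemu-kvm -netdev tap,id=net0,vhost=off \
         -device virtio-net-pci,netdev=net0 ...

# In-kernel vhost-net backend (the planned default)
qemu-kvm -netdev tap,id=net0,vhost=on \
         -device virtio-net-pci,netdev=net0 ...
```

Because only the backend changes, an existing guest configured with `alias eth0 virtio_net` keeps working unmodified.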

Comment 5 Andrew Cathrow 2010-05-27 19:32:39 UTC
Is there any reason why we'd keep virtio-net? If its performance is degraded and we've effectively deprecated it, is there any reason to keep it?

Comment 6 RHEL Program Management 2010-05-28 11:55:38 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 7 Dor Laor 2010-05-30 21:02:20 UTC
(In reply to comment #5)
> is there any reason why we'd keep virtio-net? If it's performance is degraded
> and we've effectively deprecated it, is there any reason to keep it?    

There isn't. That's why I'm closing it as won't fix.
Mark, all components - libvirt/vdsm will use vhost by default. It is transparent to the guest. If we'll have any motivation to fix it, I'll be the first to ask for this fix.

Comment 8 Richard W.M. Jones 2010-06-01 10:42:38 UTC
To what extent are you proposing to drop virtio-net?  As a kernel
module or from KVM?  We have guests where virtio-net is hard-coded,
eg if you have ever installed RHEL 5 as a guest on top of KVM:

# guestfish -i RHEL55x64 --ro
><fs> cat /etc/modprobe.conf 
alias scsi_hostadapter ata_piix
alias eth0 virtio_net
alias scsi_hostadapter1 virtio_blk

Comment 9 Michael S. Tsirkin 2010-06-01 10:53:39 UTC
Let me clarify: the issue is that the compatibility -net flag is used.
offloads are currently silently disabled unless the new -netdev flag is used.
This is what hurts performance.

So the proposal is to exit unless the user explicitly disabled offloads.
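The syntax distinction in comment 9 can be sketched as follows (illustrative command lines, not taken from this bug): with the legacy compatibility flag, offloads on the virtio-net device are silently disabled, while the newer -netdev syntax lets the device negotiate them with the backend:

```sh
# Legacy compatibility syntax: offloads silently disabled, throughput suffers
qemu-kvm -net nic,model=virtio -net tap ...

# Newer -netdev syntax: offloads can be negotiated with the backend
qemu-kvm -netdev tap,id=net0 -device virtio-net-pci,netdev=net0 ...
```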

