Bug 1224348 - RFE: network: use MTU=9000 (jumboframes) for isolated network
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Virtualization Tools
Classification: Community
Component: libvirt
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Laine Stump
QA Contact: yalzhang@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-05-22 15:29 UTC by David Juran
Modified: 2017-05-02 08:52 UTC
14 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-05-02 08:18:38 UTC
Embargoed:



Description David Juran 2015-05-22 15:29:49 UTC
Description of problem:
When creating an isolated network in libvirt, i.e. one with no external interfaces connected, it would be nice to be able to configure the associated Linux bridge for jumbo frames, i.e. with the MTU set to 9000.

Additional info:
Making MTU 9000 the default could also be considered.

Comment 1 Ondřej Svoboda 2017-01-22 10:57:28 UTC
In automated testing (CI) of oVirt/RHEV*, we would also appreciate jumbo frames.

We would like to test at least VM migration between two hypervisors, and make use of a dedicated storage network (NFS/Gluster). Both these use cases would benefit if jumbo frames were a possibility.

So far I have only been able to reconfigure the libvirt bridge (usually virbr*, though we use a randomized name) and its ports (vnet*) by taking them down with ifdown, setting MTU 9000 manually, and bringing them back up with ifup. I would appreciate it if the XML definitions for networks and domains accepted an MTU setting. You can check out this patch for OST: https://gerrit.ovirt.org/#/c/70679/
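For reference, a minimal sketch of the manual workaround described above. The device names (virbr0, vnet0) are examples only, since the reporter notes the real bridge name is randomized; RUN defaults to echo so the sequence can be previewed without root, and setting RUN to empty would execute it for real:

```shell
# Preview mode by default; run "RUN= sh script.sh" as root to apply.
RUN=${RUN-echo}

# set_mtu_9000 DEV: take DEV down, raise its MTU, bring it back up.
set_mtu_9000() {
    $RUN ip link set dev "$1" down
    $RUN ip link set dev "$1" mtu 9000
    $RUN ip link set dev "$1" up
}

# Bridge first, then each of its vnet* ports (names are illustrative).
set_mtu_9000 virbr0
set_mtu_9000 vnet0
```

Note this has to be redone whenever libvirt recreates the taps (e.g. on VM restart), which is exactly why an MTU setting in the network XML would help.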

* This is a task of ovirt-system-tests (OST), based on the Lago project, in which we use a local, virtualized environment of 3/4 nodes with an oVirt engine, storage and (nested) virtualization hosts – all facilitated by libvirt.

Comment 2 Laine Stump 2017-01-23 15:41:11 UTC
I don't know if changing the default to 9000 would be a good idea, since that would be a very noticeable behavior change that might have unexpected effects.

There have been a few requests for supporting an MTU setting in libvirt networks in the last several months, and I'd been thinking about writing a patch for it, but hadn't taken action until Comment 1 jogged my memory.

While it isn't possible to set the MTU of an empty bridge to anything higher than 1500, what *can* be done is to set the MTU of the "dummy" tap device (${brname}-nic) created by libvirt to the desired value just prior to attaching it to the bridge. Since this is the first device added to the bridge, the bridge will take on that MTU, and since libvirt already sets the MTU of every tap device added after that to the bridge's current MTU, all devices will "magically" get the higher MTU.

   <network>
      ...
      <bridge name='virbr0' mtu='9000'/>
      ...
   </network>
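The inheritance trick described above can be sketched with plain iproute2 commands (device names are illustrative; RUN defaults to echo so the sequence can be previewed without root):

```shell
# Preview mode by default; run with RUN= as root to execute for real.
RUN=${RUN-echo}

# Create the bridge, then raise the dummy tap's MTU *before* enslaving it.
$RUN ip link add name virbr0 type bridge
$RUN ip tuntap add dev virbr0-nic mode tap
$RUN ip link set dev virbr0-nic mtu 9000
$RUN ip link set dev virbr0-nic master virbr0

# The bridge adopts its first port's MTU; every tap libvirt attaches later
# is clamped to the bridge's current MTU, so all ports end up at 9000.
$RUN ip link show virbr0
```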

I just posted patches to do this:

  http://www.redhat.com/archives/libvir-list/2017-January/msg00946.html

They appear to work (traffic does show up in MTU-sized packets, although it's still necessary to manually set the MTU within the guest). Performance of a simple scp doesn't seem to change on my system (about 1.4Gb/sec at both 1500 and 9000 MTU), but I haven't tried any real network performance benchmarks. Since the reporters want the higher MTU, perhaps they can suggest some good tests and/or show performance figures with and without the MTU change these patches make possible.
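One way to produce the figures Laine asks for would be a TCP throughput test between two guests on the network, e.g. with iperf3. This is only a sketch: it assumes iperf3 is installed in both guests, that the server guest is reachable at 192.168.122.10, and that the guest NIC is eth0; RUN defaults to echo so the commands are just printed for inspection:

```shell
RUN=${RUN-echo}

# Server guest: run iperf3 as a daemon.
$RUN iperf3 -s -D

# Client guest: match the guest NIC's MTU to the host bridge, then run a
# 30-second TCP test. Repeat with "mtu 1500" to compare throughput and
# CPU usage with and without jumbo frames.
$RUN ip link set dev eth0 mtu 9000
$RUN iperf3 -c 192.168.122.10 -t 30
```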

Comment 3 yalzhang@redhat.com 2017-05-02 08:52:48 UTC
Checked on downstream libvirt-3.2.0-3.el7.x86_64; the patches are already implemented there.

