Description of problem: When creating an isolated network in libvirt, i.e. one with no external interfaces connected, it would be nice if it were possible to configure the associated Linux bridge with jumbo frames, i.e. MTU set to 9000. Additional info: Setting the MTU to 9000 could also be considered as the default.
In automated testing (CI) of oVirt/RHEV*, we would also appreciate jumbo frames. We would like to test at least VM migration between two hypervisors, and make use of a dedicated storage network (NFS/Gluster). Both of these use cases would benefit if jumbo frames were a possibility. So far I have only been able to reconfigure the libvirt bridge (usually virbr*, but we use a randomized name) and its ports (vnet*) by bringing them down (ifdown), setting MTU 9000 manually, and bringing them back up (ifup). I would appreciate it if the XML definitions for networks and domains accepted an MTU setting. You can check out this patch for OST: https://gerrit.ovirt.org/#/c/70679/ * This is a task of ovirt-system-tests (OST), based on the Lago project, in which we use a local, virtualized environment of 3/4 nodes with an oVirt engine, storage and (nested) virtualization hosts – all facilitated by libvirt.
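The manual workaround described above can be sketched as a small script. The device names (virbr0, vnet0, vnet1) are placeholders, since the real names are randomized in OST; by default the script only prints the ip commands, so it can be reviewed before actually running them as root:

```shell
#!/bin/sh
# Sketch of the manual jumbo-frame workaround: bring each device down,
# raise its MTU, and bring it back up. Device names are examples only.
# By default the commands are only echoed; set APPLY=1 and run as root
# to actually execute them.
run() {
    if [ "${APPLY:-0}" = "1" ]; then
        "$@"
    else
        echo "$@"
    fi
}

set_mtu() {
    dev=$1
    mtu=$2
    run ip link set "$dev" down
    run ip link set "$dev" mtu "$mtu"
    run ip link set "$dev" up
}

# Reconfigure the ports first, then the bridge itself.
set_mtu vnet0 9000
set_mtu vnet1 9000
set_mtu virbr0 9000
```

Note that this has to be redone whenever libvirt recreates the tap devices, which is exactly why a setting in the network/domain XML would be preferable.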
I don't know if changing the default to 9000 would be a good idea, since that would be a very noticeable behavior change that might have unexpected effects.

There have been a few requests for supporting an MTU setting in libvirt networks over the last several months, and I'd been thinking about making a patch for it, but hadn't taken action until Comment 1 jarred my memory.

While it isn't possible to set the MTU of an empty bridge to anything higher than 1500, what *can* be done is to simply set the MTU of the "dummy" tap device (${brname}-nic) created by libvirt to the desired MTU just prior to attaching it to the bridge. Since this is the first device added to the bridge, the bridge will take on that MTU, and since libvirt already sets the MTU of all other tap devices added after that to whatever the current MTU of the bridge is, all devices will "magically" get the higher MTU:

  <network>
    ...
    <bridge name='virbr0' mtu='9000'/>
    ...

I just posted patches to do this: http://www.redhat.com/archives/libvir-list/2017-January/msg00946.html

They appear to work (traffic does show up in MTU-sized packets, although it's still necessary to manually set the MTU within the guest). Performance of a simple scp doesn't seem to change on my system (about 1.4Gb/sec for both 1500 and 9000 MTU), but I haven't tried any real network performance benchmarks. Since the reporters seem to have a desire for a higher MTU, perhaps they can suggest some good tests and/or show performance figures with and without the MTU change these patches make possible.
Checked on downstream libvirt-3.2.0-3.el7.x86_64; the patches are already implemented there.
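For reference, a sketch of what a network definition using the merged feature might look like. Note that, as far as I can tell, the syntax that was eventually released puts the MTU in its own subelement (<mtu size='...'/>) rather than as an attribute on <bridge> as in the original patch posting; verify against the libvirt network XML documentation for your version. The name, bridge, and addresses below are made-up examples:

```xml
<network>
  <name>jumbo</name>
  <bridge name='virbr1' stp='on' delay='0'/>
  <!-- Released syntax: MTU as its own element; the attribute form on
       <bridge> was the original patch proposal. Verify on your version. -->
  <mtu size='9000'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.150.2' end='192.168.150.254'/>
    </dhcp>
  </ip>
</network>
```

Guests attached to such a network still need the MTU raised inside the guest OS, as noted in the comment above.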