Bug 469153 - RHEL 5.4: Improve virtio_net external->guest performance
Status: CLOSED DUPLICATE of bug 473120
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.4
Hardware: All
OS: Linux
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Mark McLoughlin
QA Contact: Martin Jenner
Depends On:
Blocks:
Reported: 2008-10-30 06:10 EDT by Mark McLoughlin
Modified: 2009-02-11 15:56 EST
CC List: 3 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2009-02-10 02:52:27 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
virtio-net-mergeable-rx-buffers.patch (9.99 KB, patch)
2008-10-30 06:11 EDT, Mark McLoughlin
no flags

Description Mark McLoughlin 2008-10-30 06:10:05 EDT
RHEL 5.3 virtio_net uses a receive buffer allocation scheme where the guest supplies the host with max-sized packet buffers. This is in contrast with the xen scheme, where the guest supplies page-sized buffers and the host can merge those buffers into a larger buffer for GSO packets if needed.

This is an issue in the external->guest case, where the host is transferring lots of MTU-sized packets to the guest. Here bandwidth is constrained by the number of packets the ring can hold - 12 packets with a 256-entry ring.
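
To illustrate the arithmetic (my own back-of-the-envelope sketch, not code from the driver; the 20-descriptor figure comes from the upstream changelog quoted below):

  /* With GSO enabled the guest posts max-sized receive buffers, each
   * eating roughly 20 ring descriptors (virtio_net_hdr plus ~64KB of
   * page-sized fragments), so a 256-entry ring only ever holds about
   * a dozen in-flight packets.
   */
  #include <stdio.h>

  int main(void)
  {
          const int ring_entries  = 256;
          const int descs_per_buf = 20;   /* figure from the changelog below */

          printf("in-flight receive buffers: %d\n",
                 ring_entries / descs_per_buf);     /* prints 12 */
          return 0;
  }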

A new receive buffer allocation scheme is queued upstream for 2.6.29, and we should include it in 5.4.

Roughly speaking, this improves external->guest bandwidth from 1Gb/s to 2Gb/s over a 10Gb/s link.

Links:

  http://marc.info/?l=linux-kernel&m=122349465913313
  http://kerneltrap.org/mailarchive/linux-kvm/2008/10/8/3554774

changelog:

If segmentation offload is enabled by the host, we currently allocate
maximum sized packet buffers and pass them to the host. This uses up
20 ring entries, allowing us to supply only 12 packet buffers to the
host with a 256 entry ring. This is a huge overhead when receiving
small packets, and is most keenly felt when receiving MTU sized
packets from off-host.

The VIRTIO_NET_F_MRG_RXBUF feature flag is set by hosts which support
using receive buffers which are smaller than the maximum packet size.
In order to transfer large packets to the guest, the host merges
together multiple receive buffers to form a larger logical buffer.
The number of merged buffers is returned to the guest via a field in
the virtio_net_hdr.
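
For reference, the merged-buffer count rides in an extended header placed in front of the packet data; roughly (as I recall the 2.6.29-era linux/virtio_net.h, so treat the exact types as illustrative):

  struct virtio_net_hdr_mrg_rxbuf {
          struct virtio_net_hdr hdr;   /* the usual flags/gso/csum fields */
          __u16 num_buffers;           /* number of merged rx buffers */
  };

The guest reads num_buffers from the first buffer of a packet and then pulls that many buffers in total off the ring to reassemble it.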

Make use of this support by supplying single page receive buffers to
the host. On receive, we extract the virtio_net_hdr, copy 128 bytes of
the payload to the skb's linear data buffer and adjust the fragment
offset to point to the remaining data. This ensures proper alignment
and allows us to not use any paged data for small packets. If the
payload occupies multiple pages, we simply append those pages as
fragments and free the associated skbs.
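
In rough pseudo-kernel C, the receive side described above looks something like the sketch below; the two helpers marked hypothetical are stand-ins of mine, not functions from the actual patch:

  /* Simplified sketch of the mergeable receive path, 2.6.2x style.
   * hypothetical_append_frag() and hypothetical_next_rx_page() are
   * placeholders for the real ring/fragment bookkeeping.
   */
  static void sketch_receive_mergeable(struct net_device *dev,
                                       struct sk_buff *skb,
                                       struct page *page, unsigned int len)
  {
          struct virtio_net_hdr_mrg_rxbuf *hdr = page_address(page);
          unsigned int offset = sizeof(*hdr);
          unsigned int copy = min_t(unsigned int, len - offset, 128);
          int i;

          /* Copy up to 128 bytes of payload into the linear area so
           * small packets carry no paged data at all. */
          memcpy(skb_put(skb, copy), (char *)hdr + offset, copy);
          offset += copy;

          /* Whatever payload remains on this page becomes the first
           * fragment, with its offset adjusted past the header and the
           * bytes we already copied. */
          if (len > offset)
                  hypothetical_append_frag(skb, page, offset, len - offset);

          /* The host merged hdr->num_buffers buffers in total; append
           * the remaining pages as fragments as they come off the ring. */
          for (i = 1; i < hdr->num_buffers; i++) {
                  page = hypothetical_next_rx_page(dev, &len);
                  hypothetical_append_frag(skb, page, 0, len);
          }

          netif_receive_skb(skb);
  }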

This scheme allows us to be efficient in our use of ring entries
while still supporting large packets. Benchmarking using netperf from
an external machine to a guest over a 10Gb/s network shows a 100%
improvement from ~1Gb/s to ~2Gb/s. With a local host->guest benchmark
with GSO disabled on the host side, throughput was seen to increase
from 700Mb/s to 1.7Gb/s.
Comment 1 Mark McLoughlin 2008-10-30 06:11:31 EDT
Created attachment 321905
virtio-net-mergeable-rx-buffers.patch
Comment 2 Mark McLoughlin 2009-02-10 02:52:27 EST
How bizarre; after filing this bug, I later filed bug #473120 and got the patch into 5.3 (merged in 2.6.18-126.el5).

*** This bug has been marked as a duplicate of bug 473120 ***
