Bug 821493 - [RFE] Multiple queue NICs Feature - RHEV support
Summary: [RFE] Multiple queue NICs Feature - RHEV support
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 3.5.0
Assignee: Dan Kenigsberg
QA Contact: Michael Burman
URL:
Whiteboard: network
Depends On: 821489 821490 821492
Blocks: 818212 rhev3.5beta 1156165 1529493
 
Reported: 2012-05-14 16:38 UTC by Karen Noel
Modified: 2019-04-28 08:40 UTC
14 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
When a multi-processor virtual machine communicates with other virtual machines on the same host, its vCPUs may generate traffic faster than a single virtio-net queue can consume it. This feature avoids that bottleneck by allowing multiple queues per virtual network interface. Note that this is effective only when the host runs a Red Hat Enterprise Linux 7 kernel >= 3.10.0-9.el7.
Clone Of: 818212
Environment:
Last Closed: 2015-02-11 21:09:57 UTC
oVirt Team: Network
Target Upstream Version:
Embargoed:
sgrinber: Triaged+
nyechiel: Triaged+


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:0159 0 normal SHIPPED_LIVE vdsm 3.5.0 - bug fix and enhancement update 2015-02-12 01:35:58 UTC
oVirt gerrit 25390 0 'None' MERGED interface xml: allow Engine to specify number of queues 2021-01-10 16:17:58 UTC

Description Karen Noel 2012-05-14 16:38:24 UTC
+++ This bug was initially created as a clone of Bug #818212 +++

Description:

Currently virtio_net has only one queue, so network-related applications cannot benefit from SMP guests. Multi-queue virtio_net provides better scalability: each vCPU can take part in packet transmission and reception using its own queue, without influencing the others. Multi-queue virtio_net NICs are the virtualized counterpart of physical multi-queue NICs.

This is a pure performance issue. Throughput improves by a factor of up to 2 for intra-host networking, where the limiting factor is not the speed of the external physical network.

Network applications run on SMP guests with physical multi-queue NICs.


PRD Requirements:

    * 10.1.1.1.16: Improve throughput for SMP guests through SR-IOV multi-queue NIC 7.0

Enable multiple queue support for macvtap/tap
Enable multiple queue capable guest drivers

--- Additional comment from bsarathy on 2012-05-02 10:28:46 EDT ---

Reasoning:

Virtual networks are increasingly becoming the bottleneck as the number of vCPUs continues to grow. Network performance does not scale with the number of vCPUs: a guest cannot transmit or receive packets in parallel, because virtio-net has only one TX and one RX queue and virtio-net drivers must synchronize before sending and receiving packets. The multi-queue NIC feature is needed to provide better scalability and improve performance.

Comment 3 Dan Kenigsberg 2014-01-28 23:05:40 UTC
Michael, would you confirm libvirt's suggestion of http://libvirt.org/formatdomain.html#elementsControllers

"""
An optional sub-element driver can specify the driver specific options. Currently it only supports attribute queues (1.0.5, QEMU and KVM only), which specifies the number of queues for the controller. For best performance, it's recommended to specify a value matching the number of vCPUs.
"""

What are the downsides of always doing so?
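
For context, a minimal illustrative check of how such a setting surfaces in the generated domain XML. The VM name "myvm" and the queue count are hypothetical:

~~~
# Hypothetical example: inspect a running guest named "myvm" on the host.
virsh -r dumpxml myvm | grep queues
# Expected output for a vNIC configured with 4 queues (other attributes may appear too):
#   <driver queues='4'/>
~~~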

Comment 4 Dan Kenigsberg 2014-02-26 09:32:44 UTC
On Tue, Feb 25, 2014 at 06:33:18PM +0200, Michael S. Tsirkin wrote:
> ATM the # of queues for tun is capped at 8 in kernel.
> We'll likely increase them at some point, but ATM
> this will fail with -E2BIG.
> 
> If you are doing mostly guest<->external, there's no
> use to go higher than # of queues of the NIC.
> If you also go a lot of guest<>guest on same host,
> going as high as #vcpus might be helpful.
> 
> 
> > The down side is memory consumption? How bad is it?
> 
> ~100Kbyte locked host memory per queue

Comment 5 Dan Kenigsberg 2014-03-04 23:15:43 UTC
Since the cost is not negligible and the usefulness is limited, Vdsm will not turn this on of its own volition. We plan to expose it as a built-in custom property for end users to enable.
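
To put that cost in perspective (illustrative arithmetic based on the ~100KB-per-queue figure quoted above): a vNIC configured with 4 queues locks roughly 4 × 100KB ≈ 400KB of host memory, and a VM with four such vNICs roughly 1.6MB.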

Comment 6 lpeer 2014-03-11 07:15:54 UTC
danken - can you please add a doc text that describes where this custom property is useful?

Comment 7 Michael Burman 2014-08-03 09:30:39 UTC
Hi Dan,
We need additional information about this feature.
- Is there a configuration value? Is it part of the custom properties?
- Is there a way to change or configure the number of queues?
- Is it going to be displayed in the GUI, or is it only part of vdsm?
- Is there a hook that needs to be configured for this feature?

Thank you

Michael

Comment 8 Dan Kenigsberg 2014-08-05 15:04:18 UTC
- The Engine admin must define a new vNIC custom property named "queues". This is done by adding it to the CustomDeviceProperties config value (see http://gerrit.ovirt.org/#/c/25390/; please correct the Doc Text if it is wrong). A command sketch follows after this list.

- Once configured, you should see a "queues" entry in the vNIC profile dialog, where you should be able to set a number.

- There is no need to install any hook.
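
A minimal sketch of that step, assuming no other vNIC custom properties are defined yet (the full form, with a placeholder for preexisting properties, appears in comment 19):

~~~
# Sketch only; merge in any preexisting custom properties instead of overwriting them.
engine-config -s "CustomDeviceProperties={type=interface;prop={queues=[1-9][0-9]*}}"
# Restart the engine so the new property takes effect.
service ovirt-engine restart
~~~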

Comment 9 Michael Burman 2014-08-11 14:18:09 UTC
Verified on oVirt Engine Version: 3.5.0-0.0.master.20140804172041.git23b558e.el6

- To use this feature, see comment 8 and the Doc Text above.

- In /var/log/vdsm/vdsm.log, search for "queues"; the domain XML should contain <driver queues='X'/>.

- Run ps -ww `pgrep qemu-kvm` and look for vhostfds=29:30:31:32 or fds=25:26:27:28 for the NIC in question (e.g. mac=00:1a:4a:16:88:5c); the number of file descriptors equals the number of queues (4 in this example).
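
The same host-side checks as a shell sketch (queue count and output values are illustrative):

~~~
# The domain XML logged by vdsm should carry the queues attribute, e.g. <driver queues='4'/>.
grep queues /var/log/vdsm/vdsm.log
# The qemu-kvm command line should show one file descriptor per queue, e.g. vhostfds=29:30:31:32.
ps -ww `pgrep qemu-kvm` | grep -o "vhostfds=[0-9:]*"
~~~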

Comment 10 Michael Burman 2014-08-11 14:37:33 UTC
- Also, in /var/log/ovirt-engine/engine.log, search for "queues"; you should see an entry such as custom={queues=4}}
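
The corresponding engine-side check as a shell sketch (queue count illustrative):

~~~
# The device custom properties logged by the engine should include the queues value.
grep -F "custom={queues" /var/log/ovirt-engine/engine.log
~~~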

Comment 11 Michael Burman 2014-12-03 08:49:28 UTC
Dan, 

Isn't it supposed to be

  engine-config -s "CustomDeviceProperties={type=interface;prop={<other-nic-properties;>queues=[1-9][1-9]*}}"

to block the '0'?

Comment 12 Dan Kenigsberg 2014-12-03 10:50:17 UTC
Corrected. Thanks.
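
To make the regex difference concrete (assuming the value must match the pattern in full): [1-9][1-9]* blocks 0 but also rejects any value containing a zero digit, such as 10, whereas [1-9][0-9]*, the form used in comment 19, accepts any positive integer without a leading zero and still blocks 0.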

Comment 14 errata-xmlrpc 2015-02-11 21:09:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0159.html

Comment 16 YinKe 2015-06-19 08:40:22 UTC
Hi Dan,

Sorry to bother you; anande gave me the answer:

~~~
engine-config -s "CustomDeviceProperties={type=interface;prop={queues=[0-9]*}}"
~~~

Thank you
BR
Kenneth

Comment 19 Dan Kenigsberg 2016-07-28 10:11:43 UTC
Only now do I notice that a long while ago a big and important chunk of the doc text was dropped. Let it at least be found here:

An administrator can define a new custom property called "queues" for vNIC profiles.

  engine-config -s "CustomDeviceProperties={type=interface;prop={<other-nic-properties;>queues=[1-9][0-9]*}}"

Where <other-nic-properties;> is a semicolon-separated list of preexisting custom properties of NICs.

A user can then set it to the number of queues he or she would like to allocate to a vNIC using this profile, instead of the default single queue. For best performance, use the number of vCPUs. Note that each queue consumes about 100KB of host memory, so use a non-default value only when the number of queues is a true bottleneck, such as when a multicore VM is expected to communicate with other VMs on the same host.
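
For illustration only, the same command with a hypothetical preexisting property (here called "speed"; substitute whatever properties are actually defined in your setup):

~~~
# "speed" is a made-up preexisting property, shown only to illustrate the semicolon-separated syntax.
engine-config -s "CustomDeviceProperties={type=interface;prop={speed=[0-9]*;queues=[1-9][0-9]*}}"
~~~

A 4-vCPU VM using a profile based on this property would then typically be given queues=4.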

