Bug 821493
Summary: | [RFE] Multiple queue NICs Feature - RHEV support | ||
---|---|---|---|
Product: | Red Hat Enterprise Virtualization Manager | Reporter: | Karen Noel <knoel> |
Component: | vdsm | Assignee: | Dan Kenigsberg <danken> |
Status: | CLOSED ERRATA | QA Contact: | Michael Burman <mburman> |
Severity: | low | Docs Contact: | |
Priority: | low | ||
Version: | unspecified | CC: | bazulay, bsarathy, bsettle, danken, ecohen, iheim, knoel, kyin, lpeer, mst, nyechiel, rbalakri, Rhev-m-bugs, yeylon |
Target Milestone: | --- | Keywords: | FutureFeature |
Target Release: | 3.5.0 | Flags: | sgrinber: Triaged+, nyechiel: Triaged+ |
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | network | ||
Fixed In Version: | Doc Type: | Enhancement | |
Doc Text: |
When a multi-processor virtual machine communicates with other virtual machines on the same host, its CPUs may generate traffic faster than a single virtio-net queue can consume it. This feature aims to avoid this bottleneck by allowing multiple queues per virtual network interface. Note that this is effective only when the host runs a Red Hat Enterprise Linux 7 kernel >= 3.10.0-9.el7.
|
Story Points: | --- |
Clone Of: | 818212 | Environment: | |
Last Closed: | 2015-02-11 21:09:57 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | Network | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 821489, 821490, 821492 | ||
Bug Blocks: | 818212, 1142923, 1156165, 1529493 |
Description
Karen Noel
2012-05-14 16:38:24 UTC
Michael, would you confirm libvirt's suggestion of http://libvirt.org/formatdomain.html#elementsControllers

""" An optional sub-element driver can specify the driver specific options. Currently it only supports attribute queues (1.0.5, QEMU and KVM only), which specifies the number of queues for the controller. For best performance, it's recommended to specify a value matching the number of vCPUs. """

What are the downsides of always doing so?

On Tue, Feb 25, 2014 at 06:33:18PM +0200, Michael S. Tsirkin wrote:
> ATM the # of queues for tun is capped at 8 in kernel.
> We'll likely increase them at some point, but ATM
> this will fail with -E2BIG.
>
> If you are doing mostly guest<->external, there's no
> use to go higher than # of queues of the NIC.
> If you also go a lot of guest<>guest on same host,
> going as high as #vcpus might be helpful.
>
>
> > The down side is memory consumption? How bad is it?
>
> ~100Kbyte locked host memory per queue
Since the cost is not negligible, and the usefulness is limited, Vdsm will not turn this on of its own volition. We plan to expose it as a built-in custom property for end users to enable.

danken - can you please add a doc text that describes where this custom property is useful.

Hi Dan,
We need additional info about this feature:
- Is there a configuration value? Is it part of the custom properties?
- Is there a way to change or configure the number of queues?
- Is it going to be displayed in the GUI, or is it just part of vdsm?
- Is there a hook that needs to be configured for this feature?
Thank you

Michael -
- The Engine admin must define a new vNIC custom property named "queues". This is done by adding it to the CustomDeviceProperties config value (see http://gerrit.ovirt.org/#/c/25390/; please correct the Doc Text if it is wrong).
- Once configured, you should see a "queues" element in the vNIC profile dialog, and you should be able to set a number there.
- There is no need to install any hook.

Verified on:
- oVirt Engine Version: 3.5.0-0.0.master.20140804172041.git23b558e.el6
- To perform this feature, please see comment 8 and the Doc Text above.
- In /var/log/vdsm/vdsm.log, search for "queues" and expect to find <driver queues='X'/>.
- Run ps -ww `pgrep qemu-kvm` and look for vhostfds=29:30:31:32 or fds=25:26:27:28 (one fd per queue) for a NIC with, for example, mac=00:1a:4a:16:88:5c. In this example the NIC has 4 queues.
- Also, in /var/log/ovirt-engine/engine.log, search for "queues" and expect custom={queues=4}}

Dan, isn't it supposed to be
engine-config -s "CustomDeviceProperties={type=interface;prop={<other-nic-properties;>queues=[1-9][1-9]*}}"
to block the '0'?

Corrected. Thanks.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
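For context, the <driver queues='X'/> element that the vdsm.log verification step looks for belongs to the interface definition handed to libvirt. The following is only an illustrative sketch of what a multiqueue virtio vNIC looks like in libvirt domain XML, not the exact XML vdsm generates; the bridge name is a placeholder (the MAC is the example one from the verification comment):

```xml
<!-- Illustrative only: a virtio interface with 4 queues.
     The source bridge name is a placeholder. -->
<interface type='bridge'>
  <mac address='00:1a:4a:16:88:5c'/>
  <source bridge='ovirtmgmt'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
</interface>
```

Per the libvirt domain format documentation, the queues attribute on the interface driver element enables multiqueue virtio-net, which is what produces the multiple vhostfds/fds visible in the qemu-kvm command line.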
https://rhn.redhat.com/errata/RHBA-2015-0159.html

Hi Dan,
Sorry to bother you; anande gave me the answer:
~~~
engine-config -s "CustomDeviceProperties={type=interface;prop={queues=[0-9]*}}"
~~~
Thank you
BR Kenneth

Only now do I notice that a long while ago a big and important chunk of the doc text was dropped. Let it at least be found here:

An administrator can define a new custom property called "queues" for vNIC profiles:

engine-config -s "CustomDeviceProperties={type=interface;prop={<other-nic-properties;>queues=[1-9][0-9]*}}"

where <other-nic-properties;> is a semicolon-separated list of preexisting custom properties of NICs. A user can then set it to the number of queues he or she would like to allocate to a vNIC using this profile, instead of the default single queue. For best performance, use the number of vCPUs. Note that each queue consumes about 100KB of host memory, so use a non-default value only when the number of queues is a true bottleneck, such as when a multicore VM is expected to communicate with other VMs on the same host.
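The regex discussion above is easy to miss: the earlier suggestion [1-9][1-9]* blocks not only "0" but any value containing the digit 0 (such as 10 or 20), while the corrected [1-9][0-9]* blocks only zero and leading zeros. A small Python sketch (assuming, for illustration, that the Engine validates the property value against the whole anchored pattern, which re.fullmatch approximates) shows the difference:

```python
import re

# Final pattern from the doc text: one non-zero digit followed by
# any digits -- rejects "0" and leading zeros, accepts 10, 20, etc.
GOOD = re.compile(r"[1-9][0-9]*")
# Earlier suggestion: every digit must be 1-9, so any value
# containing a '0' (e.g. "10") is wrongly rejected.
BUGGY = re.compile(r"[1-9][1-9]*")

def valid(pattern, value):
    """True if the whole value matches the pattern."""
    return pattern.fullmatch(value) is not None

assert valid(GOOD, "4")
assert valid(GOOD, "10")       # multi-digit values containing 0 are fine
assert not valid(GOOD, "0")    # zero queues is blocked
assert not valid(GOOD, "04")   # leading zeros are blocked
assert not valid(BUGGY, "10")  # the bug: 10 is wrongly rejected
```

This is why the doc text settles on [1-9][0-9]* rather than [1-9][1-9]*.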