Bug 1005626 - Qemu should calculate the default number of vectors for multiqueue virtio-net
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: jason wang
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 651941 1401400
 
Reported: 2013-09-09 02:52 UTC by jason wang
Modified: 2021-08-30 10:42 UTC
CC List: 16 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-12-26 04:33:23 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Knowledge Base (Solution) 272273 (not private; last updated: never)

Description jason wang 2013-09-09 02:52:29 UTC
Description of problem:
Currently, qemu depends on the management layer (libvirt) to calculate the default number of vectors for virtio-net. This is sub-optimal; it would be better to let qemu calculate this by itself and allow management to override the option.
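
For illustration only, here is a minimal standalone sketch (plain C, not the actual QEMU code) of the kind of default calculation being proposed: if management does not set the "vectors" property, derive a value from the number of queue pairs. The helper name, the "unspecified" sentinel and the 2N+2 multiplier are assumptions made for this sketch; which multiplier is right is exactly what the comments below debate.

#include <stdio.h>

/* Assumption for this sketch: 0 means "vectors was not set by management". */
#define VECTORS_UNSPECIFIED 0u

/* Hypothetical helper: one MSI-X vector per RX and TX virtqueue (2 * N),
 * plus one for the config interrupt and one for the control virtqueue. */
static unsigned int default_virtio_net_vectors(unsigned int queue_pairs)
{
    return 2u * queue_pairs + 2u;
}

int main(void)
{
    unsigned int queue_pairs = 2u;               /* example: two queue pairs */
    unsigned int vectors = VECTORS_UNSPECIFIED;  /* management did not override */

    if (vectors == VECTORS_UNSPECIFIED) {
        vectors = default_virtio_net_vectors(queue_pairs);
    }
    printf("queue_pairs=%u -> default vectors=%u\n", queue_pairs, vectors);
    return 0;
}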


Comment 2 Hu Jianwei 2013-11-08 02:51:50 UTC
Regarding Bug 651941 - [7.0 FEAT] KVM NW Performance: multiple TX queue support in macvtap/virtio-net

You created this bug as a dependency of it, but I don't know how that affects libvirt's behavior. May I verify bug 651941 before this bug is fixed?
Please give me some suggestions. Thanks.

Comment 3 Hu Jianwei 2013-11-08 02:53:32 UTC
Also, please see https://bugzilla.redhat.com/show_bug.cgi?id=651941#c39

Comment 4 jason wang 2013-11-08 03:10:34 UTC
(In reply to Hu Jianwei from comment #3)
> Also, please see https://bugzilla.redhat.com/show_bug.cgi?id=651941#c39

Please pay attention to the name when setting needinfo flags; I think you meant to ask Michal instead of me.

Comment 5 Michal Privoznik 2013-11-08 07:11:21 UTC
(In reply to Hu Jianwei from comment #2)
> Regarding Bug 651941 - [7.0 FEAT] KVM NW Performance: multiple TX queue
> support in macvtap/virtio-net
> 
> You created this bug as a dependency of it, but I don't know how that
> affects libvirt's behavior. May I verify bug 651941 before this bug is
> fixed? Please give me some suggestions. Thanks.

Yes, actually qemu is the best place to fix this bug, so my patch for bug 651941 is no longer needed.

Comment 16 Paolo Bonzini 2014-02-19 12:47:37 UTC
My reading of vp_request_msix_vectors is that if you do not have 2N+2 interrupts, it will fail.  vp_find_vqs will then try again with one vector for config and one shared by the virtqueues, effectively making nvectors=2N+1 the same as nvectors=2.

What do you get in /proc/interrupts with nvectors=2N+1?
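
To make that fallback concrete, here is a rough standalone sketch (plain C, not the actual Linux virtio_pci code; the names are invented for illustration): the driver first tries one vector per virtqueue that has a callback plus one for config, then falls back to config plus a single shared vector, and finally to a legacy interrupt. Whether the control virtqueue needs a vector of its own is exactly what the following comments settle.

#include <stdio.h>

/* Illustrative modes, mirroring the behaviour described above. */
enum irq_mode {
    MSIX_PER_VQ,     /* one vector per callback virtqueue, plus one for config */
    MSIX_SHARED,     /* two vectors: config plus one shared by all virtqueues */
    LEGACY_INTX      /* no MSI-X at all */
};

static enum irq_mode pick_irq_mode(unsigned int callback_vqs,
                                   unsigned int vectors_available)
{
    if (vectors_available >= callback_vqs + 1u) {
        return MSIX_PER_VQ;      /* enough vectors for the full layout */
    }
    if (vectors_available >= 2u) {
        return MSIX_SHARED;      /* the retry described above */
    }
    return LEGACY_INTX;
}

int main(void)
{
    /* queues=2: four data virtqueues have callbacks; the control virtqueue
     * has none (see comment 18 below), so it is not counted here. */
    printf("nvectors=5 -> mode %d\n", pick_irq_mode(4u, 5u));
    printf("nvectors=2 -> mode %d\n", pick_irq_mode(4u, 2u));
    return 0;
}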

Comment 17 Michael S. Tsirkin 2014-02-19 12:51:16 UTC
To comment 16: what's N here? The number of VQs?
Then by design the requirement is N + 1.
Where did you get the factor of 2?

Comment 18 jason wang 2014-02-20 04:37:22 UTC
(In reply to Paolo Bonzini from comment #16)
> My reading of vp_request_msix_vectors is that if you do not have 2N+2
> interrupts, it will fail.  vp_find_vqs will then try again with one vector
> for config and one shared by the virtqueues, effectively making
> nvectors=2N+1 the same as nvectors=2.
> 

See vq_try_to_find_vqs(): if there's no callback for a vq, there's no need to request an MSI-X vector for that vq. And we don't have a callback for the control virtqueue.

> What do you get in /proc/interrupts with nvectors=2N+1?

Something like this (queues=2):

...
 40:          0          0   PCI-MSI-edge      virtio0-config
 41:         51          4   PCI-MSI-edge      virtio0-input.0
 42:          0          1   PCI-MSI-edge      virtio0-output.0
 43:          0          0   PCI-MSI-edge      virtio0-input.1
 44:          0          0   PCI-MSI-edge      virtio0-output.1
...
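
For reference, the accounting behind that listing, with queues=2 (so N=2):

  RX/TX virtqueues with callbacks: 2 + 2 = 4 vectors (virtio0-input.0/1, virtio0-output.0/1)
  config interrupt:                        1 vector  (virtio0-config)
  control virtqueue (no callback):         0 vectors
  total:                                   2N + 1 = 5 vectors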

Comment 19 Paolo Bonzini 2014-02-20 11:33:18 UTC
(mst: N = number of queues)

I don't think it's a good idea for libvirt to rely on the specifics of a driver's implementation.  So 2N+2 is better IMHO.

Comment 20 Michael S. Tsirkin 2014-02-20 13:15:21 UTC
Why overestimate by a factor of 2? That wastes twice the number of FDs for irqfd, ioeventfd and vhost.

It's not a driver-specific thing; it is a device-specific thing.

Unfortunately, as you are tweaking device-specific properties, you need to know about device-specific things.

Comment 21 Paolo Bonzini 2014-02-20 13:49:34 UTC
Wait, that would overestimate by only 1. The number of queues is not the number of _virtqueues_; see comment 18: queues=2, virtqueues=5, nvectors proposed by Jason=5, nvectors proposed by me=6.

Comment 22 Michael S. Tsirkin 2014-02-20 15:43:20 UTC
Ah, I see. Jason's suggestion is the right one here, and that's explicit in the virtio spec. I'm not sure why 2N+2 would be much safer than the optimal 2N+1, but of course I agree it wastes less than a 2x overestimate would.

It would be nice to have a way to find out the legal values for properties from QEMU.

This applies to more than just this case, though.

Comment 23 Michal Privoznik 2014-02-20 16:07:58 UTC
(In reply to Michael S. Tsirkin from comment #22)
> Ah, I see. Jason's suggestion is the right one here, and that's explicit
> in the virtio spec. I'm not sure why 2N+2 would be much safer than the
> optimal 2N+1, but of course I agree it wastes less than a 2x overestimate
> would.
> 
> It would be nice to have a way to find out the legal values for
> properties from QEMU.
> 
> This applies to more than just this case, though.

Well, if the optimal number of queues is driver-dependent, I think that proves qemu is the best place to guess the default value. Who knows qemu internals better than qemu itself?

Comment 24 Paolo Bonzini 2014-02-20 16:47:57 UTC
No, it's not _device_ dependent, it's _driver_ dependent.  Only the guest knows, and the guest hasn't been launched yet.

Answering Michael:
> that's explicit in the virtio spec.

Where?

> I'm not sure why 2N+2 would be much safer than the optimal 2N+1,

If the guest started using a callback for the control queue, you would not be able to use multiqueue correctly anymore.

Comment 25 jason wang 2014-02-21 04:53:25 UTC
I agree 2N+2 is better and qemu is the best place to do this.

Will send a patch upstream soon.

Thanks everyone.

Comment 27 Michal Privoznik 2014-02-21 07:38:43 UTC
(In reply to jason wang from comment #25)
> I agree 2N+2 is better and qemu is the best place to do this.
> 
> Will send a patch upstream soon.
> 
> Thanks everyone.

Well, we already have a libvirt patch that computes this for virtio-net:

https://bugzilla.redhat.com/show_bug.cgi?id=1066209

If we are going to compute the default in qemu then we need to discard the libvirt patch, don't we?
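
For a concrete picture of what such a computation produces, this is the sort of QEMU command-line fragment involved for a guest with two queue pairs (an illustrative example, not taken from the patch in bug 1066209; the vectors value assumes the 2N+2 rule discussed above, i.e. vectors=6 for queues=2):

  -netdev tap,id=hostnet0,queues=2,vhost=on \
  -device virtio-net-pci,netdev=hostnet0,mq=on,vectors=6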

Comment 28 jason wang 2014-02-21 07:53:46 UTC
(In reply to Michal Privoznik from comment #27)
> (In reply to jason wang from comment #25)
> > I agree 2N+2 is better and qemu is the best place to do this.
> > 
> > Will send a patch upstream soon.
> > 
> > Thanks everyone.
> 
> Well, we already have a libvirt patch that computes this for virtio-net:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1066209
> 
> If we are going to compute the default in qemu then we need to discard the
> libvirt patch, don't we?

Yes, I've sent the patch to qemu upstream. It was far simpler than I thought.

Thanks for the help.

Comment 29 Ronen Hod 2014-03-03 11:39:47 UTC
Deferring to 7.1, since Libvirt already overrides this default, and the urgent fix is in Libvirt.
Jason opened a new bug.

Comment 30 jason wang 2014-03-03 12:28:04 UTC
(In reply to Ronen Hod from comment #29)
> Deferring to 7.1, since Libvirt already overrides this default, and the
> urgent fix is in Libvirt.
> Jason opened a new bug.

Opened here: https://bugzilla.redhat.com/show_bug.cgi?id=1071888

Comment 32 Ronen Hod 2014-07-14 06:41:40 UTC
Jason already sent the patches upstream.

Comment 33 jason wang 2014-10-24 06:16:47 UTC
Michael:

Any comment on:
https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg01330.html
?

Comment 37 jason wang 2015-08-10 05:35:10 UTC
Impractical for 2.4; will try again in 2.5, so postponing to 7.3.

