Bug 1005626 - Qemu should calculate the default number of vectors for multiqueue virtio-net
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: jason wang
Virtualization Bugs
Depends On:
Blocks: 1401400 651941
Reported: 2013-09-08 22:52 EDT by jason wang
Modified: 2016-12-25 23:33 EST
16 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-12-25 23:33:23 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 272273 None None None Never

Description jason wang 2013-09-08 22:52:29 EDT
Description of problem:
Currently, qemu depends on management (libvirt) to calculate the default number of vectors for virtio-net. This is sub-optimal; it would be better to let qemu compute this by itself and allow management to override the option.
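For context, management currently has to compute and pass the vector count on the command line itself. An illustrative invocation for 2 queue pairs (the `hn0` id is made up; libvirt's formula gives vectors = 2*2+2 = 6, as discussed in the comments below):

```shell
# Illustrative only: with queues=2, management passes vectors=6
# (one per TX/RX virtqueue, one for the control virtqueue, one for config).
qemu-kvm -netdev tap,id=hn0,queues=2,vhost=on \
         -device virtio-net-pci,netdev=hn0,mq=on,vectors=6
```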

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Comment 2 Hu Jianwei 2013-11-07 21:51:50 EST
Regarding Bug 651941 - [7.0 FEAT] KVM NW Performance: multiple TX queue support in macvtap/virtio-net

You created a dependency on bug 651941; I don't know what effect this has on libvirt's behavior. May I verify bug 651941 before you fix your bug?
Please give me some suggestions, thanks.
Comment 3 Hu Jianwei 2013-11-07 21:53:32 EST
Also, please see https://bugzilla.redhat.com/show_bug.cgi?id=651941#c39
Comment 4 jason wang 2013-11-07 22:10:34 EST
(In reply to Hu Jianwei from comment #3)
> Also, please see https://bugzilla.redhat.com/show_bug.cgi?id=651941#c39

Please pay attention to the name when setting needinfo flags; I think you may want to ask Michal instead of me.
Comment 5 Michal Privoznik 2013-11-08 02:11:21 EST
(In reply to Hu Jianwei from comment #2)
> Regarding Bug 651941 - [7.0 FEAT] KVM NW Performance: multiple TX queue
> support in macvtap/virtio-net
> 
> You created a dependency on bug 651941; I don't know what effect this has
> on libvirt's behavior. May I verify bug 651941 before you fix your bug?
> Please give me some suggestions, thanks.

Yes, qemu is actually the best place to fix this bug, so my patch for bug 651941 is no longer needed.
Comment 16 Paolo Bonzini 2014-02-19 07:47:37 EST
My reading of vp_request_msix_vectors is that if you do not have 2N+2 interrupts, it will fail.  vp_find_vqs will then try again with one vector for config and one shared by all virtqueues, effectively making nvectors=2N+1 the same as nvectors=2.

What do you get in /proc/interrupts with nvectors=2N+1?
Comment 17 Michael S. Tsirkin 2014-02-19 07:51:16 EST
Re comment 16: what's N here, the number of VQs? Then by design the requirement is N + 1. Where did you get the factor of 2?
Comment 18 jason wang 2014-02-19 23:37:22 EST
(In reply to Paolo Bonzini from comment #16)
> My reading of vp_request_msix_vectors is that if you do not have 2N+2
> interrupts, it will fail.  vp_find_vqs then will try again with one vector
> for config and one shared for virtqueues, effectively making nvectors=2N+1
> the same as nvector=2.
> 

See vp_try_to_find_vqs(): if there's no callback for a vq, there's no need to request an MSI-X vector for that vq. And we don't have a callback for the control virtqueue.

> What do you get in /proc/interrupts with nvectors=2N+1?

Something like this (queues=2):

...
 40:          0          0   PCI-MSI-edge      virtio0-config
 41:         51          4   PCI-MSI-edge      virtio0-input.0
 42:          0          1   PCI-MSI-edge      virtio0-output.0
 43:          0          0   PCI-MSI-edge      virtio0-input.1
 44:          0          0   PCI-MSI-edge      virtio0-output.1
...
Comment 19 Paolo Bonzini 2014-02-20 06:33:18 EST
(mst: N = number of queues)

I don't think it's a good idea for libvirt to rely on the specifics of a driver's implementation.  So 2N+2 is better IMHO.
Comment 20 Michael S. Tsirkin 2014-02-20 08:15:21 EST
Why overestimate by a factor of 2? This wastes twice the number of FDs for irqfd, ioeventfd, and vhost.

It's not a driver-specific thing, it is a device-specific thing.

Unfortunately, as you are tweaking device-specific properties, you need to know about device-specific things.
Comment 21 Paolo Bonzini 2014-02-20 08:49:34 EST
Wait, that would overestimate by 1.  The number of queues is not the number of _virtqueues_, see comment 18.  queues=2, virtqueues=5, nvectors proposed by Jason=5, nvectors proposed by me=6.
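The accounting in comments 18 through 21 can be sketched in shell arithmetic (an illustration of the discussion, not QEMU code; the variable names are mine):

```shell
# virtio-net with N queue pairs: each pair has an input (RX) and an output
# (TX) virtqueue; there is also one control virtqueue and one config interrupt.
N=2
VIRTQUEUES=$((2 * N + 1))      # 2 data vqs per pair + the control vq = 5
NVECTORS_MIN=$((2 * N + 1))    # Jason: one vector per data vq + config; the
                               # control vq has no callback, so no vector = 5
NVECTORS_SAFE=$((2 * N + 2))   # Paolo: also reserve one for the control vq
                               # in case a guest driver adds a callback = 6
echo "$VIRTQUEUES $NVECTORS_MIN $NVECTORS_SAFE"
```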
Comment 22 Michael S. Tsirkin 2014-02-20 10:43:20 EST
Ah, I see. Jason's suggestion is the right one here, and that's explicit
in the virtio spec.
I'm not sure why 2N+2 would be much safer than the optimal 2N+1,
but of course I agree it is less of a waste than a 2x overestimate would be.

It would be nice to have a way to find out legal values for
properties from QEMU.

This applies to more than just this case though.
Comment 23 Michal Privoznik 2014-02-20 11:07:58 EST
(In reply to Michael S. Tsirkin from comment #22)
> Ah, I see. Jason's suggestion is the right one here, and that's explicit
> in the virtio spec.
> I'm not sure why 2N+2 would be much safer than the optimal 2N+1,
> but of course I agree it is less of a waste than a 2x overestimate would be.
> 
> It would be nice to have a way to find out legal values for
> properties from QEMU.
> 
> This applies to more than just this case though.

Well, if the optimal number of vectors is driver-dependent, I think that proves qemu is the best place to guess the default value. Who knows qemu's internals better than qemu itself?
Comment 24 Paolo Bonzini 2014-02-20 11:47:57 EST
No, it's not _device_ dependent, it's _driver_ dependent.  Only the guest knows, and the guest hasn't been launched yet.

Answering Michael:
> that's explicit in the virtio spec.

Where?

> I'm not sure why would 2N+2 be much safer than the optimal 2N+1,

If the guest started using a callback for the control queue, you would not be able to use multiqueue correctly anymore.
Comment 25 jason wang 2014-02-20 23:53:25 EST
I agree 2N+2 is better and qemu is the best place to do this.

Will send a patch upstream soon.

Thanks everyone.
Comment 27 Michal Privoznik 2014-02-21 02:38:43 EST
(In reply to jason wang from comment #25)
> I agree 2N+2 is better and qemu is the best place to do this.
> 
> Will send a patch upstream soon.
> 
> Thanks everyone.

Well, we already have a libvirt patch that computes this for virtio-net:

https://bugzilla.redhat.com/show_bug.cgi?id=1066209

If we are going to compute the default in qemu then we need to discard the libvirt patch, don't we?
Comment 28 jason wang 2014-02-21 02:53:46 EST
(In reply to Michal Privoznik from comment #27)
> (In reply to jason wang from comment #25)
> > I agree 2N+2 is better and qemu is the best place to do this.
> > 
> > Will send a patch upstream soon.
> > 
> > Thanks everyone.
> 
> Well, we already have a libvirt patch that computes this for virtio-net:
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1066209
> 
> If we are going to compute the default in qemu then we need to discard the
> libvirt patch, don't we?

Yes, I've sent the patch to qemu upstream. It was far simpler than I thought.

Thanks for the help.
Comment 29 Ronen Hod 2014-03-03 06:39:47 EST
Deferring to 7.1, since Libvirt already overrides this default, and the urgent fix is in Libvirt.
Jason opened a new bug.
Comment 30 jason wang 2014-03-03 07:28:04 EST
(In reply to Ronen Hod from comment #29)
> Deferring to 7.1, since Libvirt already overrides this default, and the
> urgent fix is in Libvirt.
> Jason opened a new bug.

Opened here: https://bugzilla.redhat.com/show_bug.cgi?id=1071888
Comment 32 Ronen Hod 2014-07-14 02:41:40 EDT
Jason already sent the patches upstream.
Comment 33 jason wang 2014-10-24 02:16:47 EDT
Michael:

Any comment on:
https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg01330.html
?
Comment 37 jason wang 2015-08-10 01:35:10 EDT
Impractical for 2.4; will try again in 2.5, so postponing to 7.3.
