Bug 1005626
Summary: | Qemu should calculate the default number of vectors for multiqueue virtio-net | |
---|---|---|---
Product: | Red Hat Enterprise Linux 7 | Reporter: | jason wang <jasowang>
Component: | qemu-kvm | Assignee: | jason wang <jasowang>
Status: | CLOSED WONTFIX | QA Contact: | Virtualization Bugs <virt-bugs>
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | 7.0 | CC: | dshaks, hhuang, jasowang, jeder, juzhang, knoel, michen, mprivozn, mst, mzhan, nhorman, pbonzini, perfbz, rbalakri, tbowling, virt-maint
Target Milestone: | rc | |
Target Release: | --- | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2016-12-26 04:33:23 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 651941, 1401400 | |
Description
jason wang
2013-09-09 02:52:29 UTC
Regarding Bug 651941 - [7.0 FEAT] KVM NW Performance: multiple TX queue support in macvtap/virtio-net

You created a dependency bug to it; I don't know what effect it has on libvirt's behavior. May I verify bug 651941 before you fix your bug? Please give me some suggestions. Thanks.

Also, please see https://bugzilla.redhat.com/show_bug.cgi?id=651941#c39

(In reply to Hu Jianwei from comment #3)
> Also, please see https://bugzilla.redhat.com/show_bug.cgi?id=651941#c39

Please pay attention to the name when setting needinfo flags; I think you may want to ask Michal instead of me.

(In reply to Hu Jianwei from comment #2)
> Regarding Bug 651941 - [7.0 FEAT] KVM NW Performance: multiple TX queue
> support in macvtap/virtio-net
>
> You created a dependency bug to it, I don't know what affect for libvirt's
> behavior. May I verify bug 651941 before you fix your bug?
> Please give me some suggestion,Thanks.

Yes, actually qemu is the best place to fix this bug, so my patch to bug 651941 is no longer needed.

My reading of vp_request_msix_vectors is that if you do not have 2N+2 interrupts, it will fail. vp_find_vqs then will try again with one vector for config and one shared for virtqueues, effectively making nvectors=2N+1 the same as nvectors=2. What do you get in /proc/interrupts with nvectors=2N+1?

To comment 16 - what's N here? The number of VQs? Then by design the requirement is N+1. Where did you get the factor of 2?

(In reply to Paolo Bonzini from comment #16)
> My reading of vp_request_msix_vectors is that if you do not have 2N+2
> interrupts, it will fail. vp_find_vqs then will try again with one vector
> for config and one shared for virtqueues, effectively making nvectors=2N+1
> the same as nvector=2.

See vq_try_to_find_vqs(): if there's no callback for a vq, there's no need to request an MSI-X vector for that vq, and we don't have a callback for the control virtqueue.

> What do you get in /proc/interrupts with nvectors=2N+1?

Something like this (queues=2):
...
 40:   0   0   PCI-MSI-edge   virtio0-config
 41:  51   4   PCI-MSI-edge   virtio0-input.0
 42:   0   1   PCI-MSI-edge   virtio0-output.0
 43:   0   0   PCI-MSI-edge   virtio0-input.1
 44:   0   0   PCI-MSI-edge   virtio0-output.1
...

(mst: N = number of queues)

I don't think it's a good idea for libvirt to rely on the specifics of a driver's implementation, so 2N+2 is better IMHO.

Why overestimate by a factor of 2? This wastes twice the number of FDs for irqfd, ioeventfd and vhost. It's not a driver-specific thing - it is a device-specific thing. Unfortunately, as you are tweaking device-specific properties, you need to know about device-specific things.

Wait, that would overestimate by 1. The number of queues is not the number of _virtqueues_, see comment 18. queues=2, virtqueues=5, nvectors proposed by Jason=5, nvectors proposed by me=6.

Ah, I see. Jason's suggestion is the right one here, and that's explicit in the virtio spec. I'm not sure why 2N+2 would be much safer than the optimal 2N+1, but of course I agree this is less of a waste than the 2x overestimate would be.

It would be nice to have a way to find out legal values for properties from QEMU. This applies to more than just this case, though.

(In reply to Michael S. Tsirkin from comment #22)
> Ah, I see. jason's suggestion is the right one here, and that's explicit
> in the virtio spec.
> I'm not sure why would 2N+2 be much safer than the optimal 2N+1,
> but of course I agree this is less of a waster than would be with 2x waste.
>
> It would be nice to have a way to find out legal values for
> properties from QEMU.
>
> This applies to more than just this case though.

Well, if the optimal number of queues is driver-dependent, I think that proves qemu is the best place to guess the default value. Who knows qemu internals better than qemu itself?

No, it's not _device_ dependent, it's _driver_ dependent. Only the guest knows, and the guest hasn't been launched yet.

Answering Michael:

> that's explicit in the virtio spec.

Where?
> I'm not sure why would 2N+2 be much safer than the optimal 2N+1,

If the guest started using a callback for the control queue, you would not be able to use multiqueue correctly anymore.

I agree 2N+2 is better and qemu is the best place to do this. Will send a patch upstream soon. Thanks everyone.

(In reply to jason wang from comment #25)
> I agree 2N+2 is better and qemu is the best place to do this.
>
> Will send a patch upstream soon.
>
> Thanks everyone.

Well, we already have a libvirt patch that computes this for virtio-net:

https://bugzilla.redhat.com/show_bug.cgi?id=1066209

If we are going to compute the default in qemu, then we need to discard the libvirt patch, don't we?

(In reply to Michal Privoznik from comment #27)
> Well, we already have a libvirt patch that computes this for virtio-net:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1066209
>
> If we are going to compute the default in qemu then we need to discard the
> libvirt patch, don't we?

Yes, I've sent the patch to qemu upstream. It was far simpler than I thought. Thanks for the help.

Deferring to 7.1, since libvirt already overrides this default, and the urgent fix is in libvirt. Jason opened a new bug.

(In reply to Ronen Hod from comment #29)
> Deferring to 7.1, since Libvirt already overrides this default, and the
> urgent fix is in Libvirt.
> Jason opened a new bug.

Opened here: https://bugzilla.redhat.com/show_bug.cgi?id=1071888

Jason already sent the patches upstream. Michael: any comment on https://lists.gnu.org/archive/html/qemu-devel/2014-03/msg01330.html ?

Impractical for 2.4, will try again in 2.5. So postpone to 7.3.
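(Editorial sketch of the arithmetic the thread converges on.) For N queue pairs, a multiqueue virtio-net device has 2N data virtqueues (one RX and one TX per pair) plus a control virtqueue; Jason's 2N+1 counts one vector per data virtqueue plus one for config interrupts, while the 2N+2 value finally agreed on reserves one more in case the guest driver ever requests a callback on the control virtqueue. A minimal illustration, assuming this reading of the thread (the helper name is ours, not a QEMU function):

```python
def default_virtio_net_vectors(queue_pairs: int) -> int:
    """Default MSI-X vector count for a multiqueue virtio-net device.

    2 vectors per queue pair (RX + TX), plus 1 for config-change
    interrupts, plus 1 spare so a guest that installs a callback on
    the control virtqueue still gets a dedicated vector: 2N + 2.
    """
    if queue_pairs < 1:
        raise ValueError("a virtio-net device has at least one queue pair")
    return 2 * queue_pairs + 2

# The queues=2 case from the /proc/interrupts listing above:
# 4 data vectors + config + spare = 6.
print(default_virtio_net_vectors(2))  # 6
```

For example, a device configured with `queues=2` would then be given `vectors=6` on the QEMU command line (e.g. via the `vectors=` property of `virtio-net-pci`), matching the value Paolo proposed in the discussion.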