Benchmarks have shown substantial performance improvements when using polling mode. This is a work-in-progress feature being discussed and refined both upstream and downstream.
+++ This bug was initially created as a clone of Bug #1404308 +++
+++ This bug was initially created as a clone of Bug #1404303 +++
From the patch cover letter:
Recent performance investigation work done by Karl Rister shows that the
guest->host notification takes around 20 us. This is more than the "overhead"
of QEMU itself (e.g. block layer).
One way to avoid the costly exit is to use polling instead of notification.
The main drawback of polling is that it consumes CPU resources. To benefit,
the host must have spare CPU cycles available on physical CPUs that aren't
used by the guest.
This is an experimental AioContext polling implementation. It adds a polling
callback into the event loop. Polling functions are implemented for virtio-blk
virtqueue guest->host kick and Linux AIO completion.
The -object iothread,poll-max-ns=NUM parameter sets the number of nanoseconds
to poll before entering the usual blocking poll(2) syscall. Try setting this
parameter to the time from old request completion to new virtqueue kick. By
default no polling is done so you must set this parameter to get busy polling.
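For concreteness, a hedged example invocation (image path, device names, and the 32000 ns value are placeholders, not recommendations from the patch series):

```shell
# Create an IOThread that busy-polls for up to 32 microseconds before
# entering the blocking poll(2) syscall, and drive a virtio-blk device
# from that IOThread.
qemu-system-x86_64 \
    -object iothread,id=iothread0,poll-max-ns=32000 \
    -drive if=none,id=drive0,file=disk.img,format=raw,cache=none,aio=native \
    -device virtio-blk-pci,drive=drive0,iothread=iothread0
```

Note that the "no polling by default" behavior described above applies to this v4 series; as comment 2 below records, QEMU later enabled polling by default.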
Current patch series (v4):
https://lists.gnu.org/archive/html/qemu-devel/2016-12/msg00148.html
Comment 2, Kashyap Chamarthy, 2018-04-05 16:28:32 UTC:
I don't think there is any action item for Nova, as QEMU will do the right thing.
Quoting the comment from the libvirt bug[*]:
"Based on the discussion [1] upstream we should not do anything in Libvirt and should let QEMU deal with the polling stuff. QEMU will enable it by default, so users usually don't have to care about it. If someone comes up with good and valid reasons to disable polling we can consider adding that feature to Libvirt."
[1] <https://www.redhat.com/archives/libvir-list/2017-February/msg01084.html>
[*] https://bugzilla.redhat.com/show_bug.cgi?id=1404308#c2