io_uring is a Linux kernel API for asynchronous I/O, designed for higher performance than the older Linux AIO API that QEMU already supports. In QEMU, io_uring is an alternative AIO engine: instead of specifying -drive aio=threads or -drive aio=native, use -drive aio=io_uring. The kernel feature itself is described in this LWN article: https://lwn.net/Articles/776703/ The QEMU implementation was started as an Outreachy project and is expected to be merged in QEMU 4.2. The upstream page tracking progress is https://wiki.qemu.org/Features/IOUring
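For reference, the three AIO engines can be selected like this (a hypothetical invocation; the image path, format, and cache settings are placeholders, not taken from this bug):

```shell
# Existing thread-pool engine (the default)
qemu-system-x86_64 -drive file=disk.img,format=raw,aio=threads,cache=none

# Linux AIO (requires O_DIRECT, i.e. cache=none or cache.direct=on)
qemu-system-x86_64 -drive file=disk.img,format=raw,aio=native,cache=none

# io_uring engine (requires an io_uring-capable kernel and a QEMU built with liburing)
qemu-system-x86_64 -drive file=disk.img,format=raw,aio=io_uring,cache=none
```

Unlike aio=native, aio=io_uring does not strictly require O_DIRECT, but cache=none is shown for a like-for-like comparison.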
The QEMU feature is now expected in 4.3: it was not mature enough for 4.2, and further work is required to achieve noticeable performance improvements over Linux AIO (fd registration, memory buffer registration, and kernel-side polling).
NB: management apps like OpenStack will not expose this kind of low-level knob to users. They require performance results demonstrating which AIO engine (threads, native, io_uring) performs best across a variety of storage setups, so that they can automatically pick a sensible AIO option for each setup. We previously had the perf team do this comparison for aio=threads vs aio=native, so we need a new set of results that includes io_uring. I think such performance data and usage recommendations should be considered a blocker for calling this a supported feature in RHEL, even if it is already merged in upstream QEMU.
QEMU has recently been split into sub-components, and as a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks.
This is a future feature issue. Do we have a plan to support it? It is OK for me to close it due to its stale date.
(In reply to qing.wang from comment #17) > This is a future feature issue. Do we have a plan to support it? It is OK > for me to close it due to its stale date. Yes, we plan to support it, but until there is kernel support we can't do much.
After evaluating this issue, we have no plans to address it further or fix it in an upcoming release; therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.
QE agrees to close it.