Description of problem:
virtio-blk multi-queue was enabled by default to improve scalability. randread and randrw on NVMe on an smp=4 x86 guest with iodepth=64, ioengine=libaio, bs=16kb, and 16 threads regressed by 18% and 25%, respectively. Benchmark details are available here:
http://kvm-perf.englab.nay.redhat.com/results/regression/multiqueue_rhel8.4.0/iothread-none-scheduler-native/raw.virtio_blk.*.x86_64.html

Version-Release number of selected component (if applicable):
qemu-kvm-5.2.0-1.module+el8.4.0+9091+650b220a

How reproducible:

Steps to Reproduce:
See benchmark details above.

Actual results:
18% and 25% performance regression on randread and randrw, respectively.

Expected results:
Performance comparable to or better than single-queue.

Additional info:
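For reference, a fio invocation along these lines should exercise the regressing workload inside the guest. The parameters (ioengine, iodepth, block size, job count) come from the description above; the target device path (/dev/vdb), runtime, and job name are assumptions for illustration, not values taken from the benchmark:

  $ fio --name=randread-regress \
        --filename=/dev/vdb \
        --ioengine=libaio \
        --direct=1 \
        --rw=randread \
        --bs=16k \
        --iodepth=64 \
        --numjobs=16 \
        --runtime=60 --time_based \
        --group_reporting

Repeating with --rw=randrw covers the mixed read/write case.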
v2 posted: https://lists.gnu.org/archive/html/qemu-devel/2021-07/msg05492.html

Patches merged upstream and released with QEMU v6.1.0-rc0:

d7ddd0a161 linux-aio: limit the batch size using `aio-max-batch` parameter
1793ad0247 iothread: add aio-max-batch parameter
0445409d74 iothread: generalize iothread_set_param/iothread_get_param
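For anyone tuning this by hand: the new parameter is exposed as a property of the iothread object. A minimal command-line sketch of how it might be wired up; the object/node IDs, the host device path, and the batch value of 32 are illustrative assumptions, not values from this bug:

  $ qemu-system-x86_64 ... \
        -object iothread,id=iothread0,aio-max-batch=32 \
        -blockdev node-name=disk0,driver=host_device,filename=/dev/nvme0n1,cache.direct=on \
        -device virtio-blk-pci,drive=disk0,iothread=iothread0

With the default aio-max-batch=0, the AIO engine uses its own default maximum batch size.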
See also: Bug 1859048
QE bot (pre-verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 tests pass.
Verified this bug as below; there is no regression now.

Tested with:
qemu-kvm-6.0.0-26.module+el8.5.0+12044+525f0ebc
kernel-modules-4.18.0-321.el8.x86_64

Results:
http://kvm-perf.englab.nay.redhat.com/results/regression/nvme8.5.0-qemu6.0.0-26-12044_kernel4.18.0-321_with_numqueues/raw.virtio_blk.*.x86_64.html
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4684