Description of problem:
Notification suppression avoids unnecessary virtqueue kicks while the IOThread is polling the virtqueue. This mechanism is broken: guest drivers kick even when they don't need to. Fixing notification suppression improves performance for high-IOPS workloads, which rely on polling; slower I/O devices don't benefit from polling and therefore don't benefit from the fix either. (A sketch of the suppression check follows the reproduction steps below.)

This performance issue is most easily reproduced with the null-co block driver.

How reproducible:
100%

Steps to Reproduce:
1. Add a null-co drive to the libvirt domain XML:

<iothreads>4</iothreads>
<qemu:commandline>
  <qemu:arg value='-drive'/>
  <qemu:arg value='driver=null-co,size=375081926656,id=drive-nvme,if=none'/>
</qemu:commandline>
<qemu:commandline>
  <qemu:arg value='-device'/>
  <qemu:arg value='virtio-blk-pci,drive=drive-nvme,iothread=iothread1,scsi=off,bus=pci.7,addr=0x0,id=virtio-disk1,write-cache=on'/>
</qemu:commandline>

2. Run fio inside the guest:

guest# cat >fio.job
[global]
filename=/dev/vdb
ioengine=libaio
direct=1
runtime=60
ramp_time=5
gtod_reduce=1

[job1]
rw=randread
bs=4K
^D
guest# fio fio.job

Actual results:
fio completes successfully with X IOPS.

Expected results:
fio completes successfully with X + ~5% IOPS.
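For context, a minimal sketch of the suppression mechanism involved. With VIRTIO_F_EVENT_IDX negotiated on a split virtqueue, the device publishes an avail_event index while it polls, and the driver only kicks once its new avail index crosses that mark. The arithmetic below matches vring_need_event() in Linux's include/uapi/linux/virtio_ring.h; the example indices are invented for illustration and are not taken from this bug:

/* build: gcc -o need_event need_event.c && ./need_event */
#include <stdint.h>
#include <stdio.h>

/* True iff new_idx has crossed event_idx since old_idx (mod 2^16),
 * i.e. the driver must notify (kick) the device. */
static int vring_need_event(uint16_t event_idx, uint16_t new_idx,
                            uint16_t old_idx)
{
    return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old_idx);
}

int main(void)
{
    /* Device is polling and set avail_event well ahead: kick suppressed. */
    printf("kick? %d\n", vring_need_event(10, 5, 4)); /* prints 0 */
    /* New avail index crosses avail_event: kick required. */
    printf("kick? %d\n", vring_need_event(4, 5, 4));  /* prints 1 */
    return 0;
}

Per the description above, the suppression was not taking effect, so guests kicked even in cases where this check should have come out false.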
Verified according to the steps in comment#0, with an NVMe backend. There is about a 12% IOPS improvement with AioContext polling (4446 / 3973 ≈ 1.12).

Version + Results:

qemu-kvm-4.2.0-5.module+el8.2.0+5389+367d9739
  read: IOPS=3973, BW=15.5MiB/s (16.3MB/s)(931MiB/60004msec)

qemu-kvm-4.2.0-6.module+el8.2.0+5451+991cea0d
  read: IOPS=4446, BW=17.4MiB/s (18.2MB/s)(1042MiB/60009msec)   <- +12%

Based on the above, setting this to VERIFIED.
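For reference, AioContext adaptive polling is configured per IOThread. A hedged example of toggling it on the QEMU command line for an A/B comparison (poll-max-ns=0 disables polling; 32768 ns is, to my knowledge, the default polling budget):

  -object iothread,id=iothread1,poll-max-ns=32768
  -object iothread,id=iothread2,poll-max-ns=0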
QEMU has recently been split into sub-components and, as a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2017