Bug 1789301 - virtio-blk/scsi: fix notification suppression during AioContext polling
Summary: virtio-blk/scsi: fix notification suppression during AioContext polling
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Stefan Hajnoczi
QA Contact: Quan Wenli
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-01-09 09:59 UTC by Stefan Hajnoczi
Modified: 2020-12-20 06:47 UTC
CC List: 6 users

Fixed In Version: qemu-kvm-4.2.0-6.module+el8.2.0+5451+991cea0d
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-05 09:55:17 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:2017 0 None None None 2020-05-05 09:56:58 UTC

Description Stefan Hajnoczi 2020-01-09 09:59:10 UTC
Description of problem:

Notification suppression avoids unnecessary virtqueue kicks while the IOThread is polling the virtqueue.  This mechanism is broken, so guest drivers kick the virtqueue even though they don't need to.

Fixing notification suppression improves performance for high-IOPS workloads, because such workloads rely on polling for performance.  Slower I/O devices don't benefit from polling and therefore also don't benefit from fixing notification suppression.

This performance issue is most easily reproduced with the null-co block driver.
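
Roughly speaking, the intended behaviour looks like the following minimal, self-contained C sketch: while the device side (the IOThread) is polling the ring it tells the driver not to kick, and it re-enables kicks when polling stops.  The types and function names below are purely illustrative, not QEMU APIs (in QEMU the switch is virtio_queue_set_notification(), toggled from the AioContext poll begin/end handlers); the bug is that this suppression does not take effect, so guests keep kicking anyway.

  /* Conceptual sketch only: illustrative names, not QEMU APIs. */
  #include <stdbool.h>
  #include <stdio.h>

  struct vring_state {
      bool no_notify;    /* device asks the driver not to kick */
      unsigned pending;  /* requests sitting on the ring       */
  };

  /* Device side (the IOThread): while polling, kicks are pure overhead
   * because new requests are seen anyway, so suppress them; re-enable
   * kicks when polling stops. */
  static void device_set_polling(struct vring_state *vq, bool polling)
  {
      vq->no_notify = polling;
  }

  /* Driver side (the guest): queue a request and kick only if the
   * device has not suppressed notifications. */
  static void driver_submit(struct vring_state *vq)
  {
      vq->pending++;
      if (vq->no_notify) {
          printf("request queued, no kick: device is polling\n");
      } else {
          printf("request queued, kick the device (vmexit)\n");
      }
  }

  int main(void)
  {
      struct vring_state vq = {0};

      driver_submit(&vq);              /* device idle: driver must kick    */
      device_set_polling(&vq, true);   /* IOThread starts polling          */
      driver_submit(&vq);              /* kick correctly suppressed        */
      device_set_polling(&vq, false);  /* polling ends: kicks needed again */
      driver_submit(&vq);
      return 0;
  }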

How reproducible:
100%

Steps to Reproduce:
1. Add a null-co drive to the libvirt domain XML (see the namespace note after step 2):

  <iothreads>4</iothreads>
  <qemu:commandline>
   <qemu:arg value='-drive'/>
   <qemu:arg value='driver=null-co,size=375081926656,id=drive-nvme,if=none'/>
  </qemu:commandline>
  <qemu:commandline>
   <qemu:arg value='-device'/>
   <qemu:arg value='virtio-blk-pci,drive=drive-nvme,iothread=iothread1,scsi=off,bus=pci.7,addr=0x0,id=virtio-disk1,write-cache=on'/>
  </qemu:commandline>

2. Run fio inside the guest:

  guest# cat >fio.job 
  [global]
  filename=/dev/vdb
  ioengine=libaio
  direct=1
  runtime=60
  ramp_time=5
  gtod_reduce=1

  [job1]
  rw=randread
  bs=4K
  ^D
  guest# fio fio.job
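
Note: the <qemu:commandline> pass-through used in step 1 only works if the <domain> element declares the QEMU namespace, i.e. xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'; without that declaration libvirt will not accept the XML.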

Actual results:

fio completes successfully with X IOPS.

Expected results:

fio completes successfully with X + ~5% IOPS.

Comment 6 Quan Wenli 2020-02-04 04:04:09 UTC
Verified according to the steps in comment #0, with an NVMe backend. There is about a 12% improvement with AioContext polling.


Version                                       | Results
qemu-kvm-4.2.0-5.module+el8.2.0+5389+367d9739 | read: IOPS=3973, BW=15.5MiB/s (16.3MB/s)(931MiB/60004msec)
qemu-kvm-4.2.0-6.module+el8.2.0+5451+991cea0d | read: IOPS=4446, BW=17.4MiB/s (18.2MB/s)(1042MiB/60009msec)
                                              |       +12% improvement


Based on the above, setting this to VERIFIED.

Comment 7 Ademar Reis 2020-02-05 23:12:17 UTC
QEMU has recently been split into sub-components and, as a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review the sub-component and change it if necessary the next time you review this BZ. Thanks.

Comment 9 errata-xmlrpc 2020-05-05 09:55:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017

