
Bug 1789301

Summary: virtio-blk/scsi: fix notification suppression during AioContext polling
Product: Red Hat Enterprise Linux Advanced Virtualization
Component: qemu-kvm
Sub component: General
Version: 8.2
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Reporter: Stefan Hajnoczi <stefanha>
Assignee: Stefan Hajnoczi <stefanha>
QA Contact: Quan Wenli <wquan>
CC: chayang, coli, ddepaula, juzhang, virt-maint, wquan
Flags: pm-rhel: mirror+
Target Milestone: rc
Target Release: 8.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: qemu-kvm-4.2.0-6.module+el8.2.0+5451+991cea0d
Doc Type: If docs needed, set a value
Type: Bug
Regression: ---
Last Closed: 2020-05-05 09:55:17 UTC

Description Stefan Hajnoczi 2020-01-09 09:59:10 UTC
Description of problem:

Notification suppression avoids unnecessary virtqueue kicks while the IOThread is polling the virtqueue.  This mechanism is broken: guest drivers kick the device even though they don't need to.

Fixing notification suppression improves performance for high-IOPS workloads, which rely on polling.  Slower I/O devices don't benefit from polling and therefore don't benefit from this fix either.

This performance issue is most easily reproduced with the null-co block driver.
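
Background, as a rough sketch: with virtio 1.x split virtqueues the device can suppress driver-to-device notifications either with the VRING_USED_F_NO_NOTIFY flag or, when VIRTIO_F_EVENT_IDX is negotiated, with the avail-event field, and the driver is supposed to check this before kicking.  The following is a minimal illustrative sketch of that handshake based on the virtio spec, not QEMU's actual implementation (the struct and the dev_poll_begin()/dev_poll_end() helpers are hypothetical names):

  /* Illustrative sketch of virtio 1.x split-ring notification suppression.
   * Based on the virtio spec, NOT QEMU code; dev_poll_begin()/dev_poll_end()
   * stand in for whatever a polling device does around its polling window. */
  #include <stdbool.h>
  #include <stdint.h>

  #define VRING_USED_F_NO_NOTIFY 1        /* device: "don't kick me" */

  struct vring_sketch {
      uint16_t used_flags;    /* used->flags, written by the device        */
      uint16_t avail_event;   /* avail-event field (VIRTIO_F_EVENT_IDX)    */
      bool     event_idx;     /* was VIRTIO_F_EVENT_IDX negotiated?        */
  };

  /* Spec helper: does moving from old_idx to new_idx cross event_idx? */
  static bool vring_need_event(uint16_t event_idx, uint16_t new_idx,
                               uint16_t old_idx)
  {
      return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old_idx);
  }

  /* Driver side: decide whether to kick after publishing new buffers. */
  static bool driver_should_kick(const struct vring_sketch *vq,
                                 uint16_t old_avail_idx, uint16_t new_avail_idx)
  {
      if (vq->event_idx) {
          /* Kick only if the device's avail-event lies inside the window of
           * descriptors we just made available. */
          return vring_need_event(vq->avail_event, new_avail_idx, old_avail_idx);
      }
      return !(vq->used_flags & VRING_USED_F_NO_NOTIFY);
  }

  /* Device side, conceptually what an IOThread does around a polling window:
   * suppress kicks while busy-polling the avail ring, re-enable afterwards. */
  static void dev_poll_begin(struct vring_sketch *vq)
  {
      vq->used_flags |= VRING_USED_F_NO_NOTIFY;  /* and/or hold avail_event back */
  }

  static void dev_poll_end(struct vring_sketch *vq, uint16_t seen_avail_idx)
  {
      vq->used_flags &= ~VRING_USED_F_NO_NOTIFY;
      vq->avail_event = seen_avail_idx;   /* "notify me about the next buffer" */
      /* Must re-check the avail ring here to close the race with a driver that
       * published buffers while notifications were suppressed. */
  }

If the device-side suppression is not in effect while the IOThread polls (the broken case described above), driver_should_kick() keeps returning true and the guest performs kicks/vmexits for requests the polling IOThread would have picked up anyway.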

How reproducible:
100%

Steps to Reproduce:
1. Add a null-co drive to the libvirt domain XML:

  <iothreads>4</iothreads>
  <qemu:commandline>
   <qemu:arg value='-drive'/>
   <qemu:arg value='driver=null-co,size=375081926656,id=drive-nvme,if=none'/>
  </qemu:commandline>
  <qemu:commandline>
   <qemu:arg value='-device'/>
   <qemu:arg value='virtio-blk-pci,drive=drive-nvme,iothread=iothread1,scsi=off,bus=pci.7,addr=0x0,id=virtio-disk1,write-cache=on'/>
  </qemu:commandline>
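
Note that <qemu:commandline> passthrough requires the QEMU namespace, xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0', on the <domain> element.  For reference, and assuming the rest of the machine definition is unchanged, the passthrough arguments above correspond roughly to launching QEMU by hand with something like the following (the iothread object id, PCI bus, and address are taken from the XML and will differ on other setups):

  qemu-system-x86_64 ... \
    -object iothread,id=iothread1 \
    -drive driver=null-co,size=375081926656,id=drive-nvme,if=none \
    -device virtio-blk-pci,drive=drive-nvme,iothread=iothread1,scsi=off,bus=pci.7,addr=0x0,id=virtio-disk1,write-cache=on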

2. Run fio inside the guest:

  guest# cat >fio.job 
  [global]
  filename=/dev/vdb
  ioengine=libaio
  direct=1
  runtime=60
  ramp_time=5
  gtod_reduce=1

  [job1]
  rw=randread
  bs=4K
  ^D
  guest# fio fio.job

Actual results:

fio completes successfully with X IOPS.

Expected results:

fio completes successfully with X + ~5% IOPS.

Comment 6 Quan Wenli 2020-02-04 04:04:09 UTC
Verified according to the steps in comment #0 with an NVMe backend; there is about a 12% improvement with AioContext polling.


Version                                       | Results
qemu-kvm-4.2.0-5.module+el8.2.0+5389+367d9739 | read: IOPS=3973, BW=15.5MiB/s (16.3MB/s)(931MiB/60004msec)
qemu-kvm-4.2.0-6.module+el8.2.0+5451+991cea0d | read: IOPS=4446, BW=17.4MiB/s (18.2MB/s)(1042MiB/60009msec)
                                              | (+12% improvement)


Based on the above, setting this to VERIFIED.

Comment 7 Ademar Reis 2020-02-05 23:12:17 UTC
QEMU has recently been split into sub-components, and as a one-time operation to avoid breaking tools we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks.

Comment 9 errata-xmlrpc 2020-05-05 09:55:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017