Description of problem:
When running simple fio tests on bare metal and in a KVM guest using a virtio-blk device, the virtualization overhead is very high, with the KVM scenario roughly 64% slower than bare metal. We have workload targets (such as for SAP) that require the virtualized solution to be <10% slower than bare metal.
Version-Release number of selected component (if applicable):
RHEL 8.x and upstream testing
Steps to Reproduce:
1. Set up a KVM guest using virtio-blk with cache=none, io=native, and a dedicated IOThread
2. Run simple (i.e. ioengine=sync, iodepth=1, numjobs=1) fio tests on bare metal directly against the device (e.g. /dev/nvme0n1) and in the guest (e.g. /dev/vdb).
3. Compare the results
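The steps above can be sketched as follows. This is a minimal illustration, not the exact commands used for the measurements: the device paths, node/object IDs, and runtime are placeholders for this setup.

```shell
# Guest side: attach the NVMe device via virtio-blk with a dedicated IOThread,
# direct I/O (the blockdev equivalent of cache=none) and aio=native.
# qemu-system-x86_64 command-line excerpt (IDs are illustrative):
#   -object iothread,id=iothread0 \
#   -blockdev node-name=disk0,driver=host_device,filename=/dev/nvme0n1,cache.direct=on,aio=native \
#   -device virtio-blk-pci,drive=disk0,iothread=iothread0

# Bare metal: simple synchronous 4K random read test against the raw device
fio --name=randread --filename=/dev/nvme0n1 --rw=randread --bs=4k \
    --ioengine=sync --iodepth=1 --numjobs=1 --direct=1 \
    --runtime=60 --time_based

# Guest: identical job against the virtio-blk device
fio --name=randread --filename=/dev/vdb --rw=randread --bs=4k \
    --ioengine=sync --iodepth=1 --numjobs=1 --direct=1 \
    --runtime=60 --time_based
```

With libvirt, the equivalent disk configuration is expressed as `<driver name='qemu' type='raw' cache='none' io='native' iothread='1'/>` plus a matching `<iothreads>` element in the domain XML.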
Actual results:
For example, for a random read test with block size=4K and iodepth=1, bare metal achieves 290,700 IOPS across 3 Intel Optane P4800X PCIe devices. When tested inside a KVM guest with virtio-blk as the paravirtual interface to the physical devices, the throughput achieved was 105,000 IOPS. That is a 63.9% reduction in throughput.
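As a sanity check, the relative slowdown follows directly from the two throughput figures reported above:

```python
# Throughput figures from the report above.
baremetal_iops = 290_700  # 3x Intel Optane P4800X, 4K random read, iodepth=1
guest_iops = 105_000      # same test via virtio-blk in the KVM guest

# Percentage reduction in throughput relative to bare metal.
reduction_pct = (1 - guest_iops / baremetal_iops) * 100
print(f"throughput reduction: {reduction_pct:.1f}%")  # ~63.9%
```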
Expected results:
A throughput reduction of less than 10% relative to bare metal.
Additional info:
This is a preliminary report of this regression. A significant amount of data has been collected exploring various optimizations; they improve the situation but come nowhere close to the less-than-10% target.
Moving RHEL-AV bugs to RHEL9. If it is necessary to resolve this in RHEL8, clone to the current RHEL8 release.
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.