Not sure which component to use, so I am picking vdsm (I miss having an RFE component)
+++ This bug was initially created as a clone of Bug #1416182 +++
+++ This bug was initially created as a clone of Bug #1416180 +++
This BZ tracks the upstream work currently being done by Fam to introduce a VFIO-based NVMe driver to QEMU:
https://lists.gnu.org/archive/html/qemu-devel/2016-12/msg02812.html
Date: Wed, 21 Dec 2016 00:31:35 +0800
From: Fam Zheng <famz>
Subject: [Qemu-devel] [PATCH 0/4] RFC: A VFIO based block driver for NVMe device
This series adds a new protocol driver that is intended to achieve about 20%
better performance for latency-bound workloads (i.e. synchronous I/O) than
linux-aio when the guest is exclusively accessing an NVMe device, by talking
to the device directly instead of going through the kernel's file system
layers and its NVMe driver.
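For context, a minimal usage sketch (my illustration, not taken from the series itself): the controller is unbound from the kernel's nvme driver and bound to vfio-pci, then handed to QEMU via the nvme:// filename syntax this series introduces. The PCI address here is hypothetical, and the exact syntax may differ across revisions of the series.

    # Bind the controller to vfio-pci (hypothetical BDF 0000:01:00.0;
    # 8086 0953 are the PCI IDs of the Intel DC P3700 mentioned below).
    # The host must boot with the IOMMU enabled, e.g. intel_iommu=on.
    modprobe vfio-pci
    echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
    echo 8086 0953 > /sys/bus/pci/drivers/vfio-pci/new_id

    # Hand namespace 1 of the device to the guest through the new driver.
    qemu-system-x86_64 -enable-kvm -m 4G \
        -drive file=nvme://0000:01:00.0/1,if=none,id=nvme0 \
        -device virtio-blk-pci,drive=nvme0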
This applies on top of Stefan's block-next tree, which has the busy-polling
patches; the new driver supports polling as well.
A git branch is also available as:
https://github.com/famz/qemu nvme
See patch 4 for benchmark numbers.
Tests were done on QEMU's NVMe emulation and on a real Intel P3700 NVMe SSD.
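The "latency bound" case above roughly corresponds to a queue-depth-1 synchronous fio job. A representative invocation (an illustration, not the exact job from patch 4; /dev/vdb is a hypothetical guest device):

    # Synchronous 4k random reads at queue depth 1: each I/O waits for
    # the previous one to complete, so per-request latency dominates.
    fio --name=sync-lat --filename=/dev/vdb --direct=1 \
        --rw=randread --bs=4k --ioengine=psync --iodepth=1 \
        --runtime=60 --time_based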
Most dd/fio/mkfs/kernel-build and OS-installation tests work well, but a
weird write fault, looking similar to [1], is consistently seen when
installing a RHEL 7.3 guest; this is still under investigation.
[1]: http://lists.infradead.org/pipermail/linux-nvme/2015-May/001840.html
Also, the RAM notifier is not enough for a hot-plugged block device, because
in that case the notifier is installed _after_ the RAM blocks are added, so
it never sees those events (see the sketch below).
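To make the ordering problem concrete, here is a simplified, standalone C sketch (deliberately not QEMU's actual RAMBlockNotifier API; all names are hypothetical): a notifier registered after RAM blocks already exist misses their add events unless registration replays the existing blocks, which is what a hot-plugged device would need so it can map guest RAM for DMA.

    /* Standalone illustration of the notifier-ordering problem;
     * names are hypothetical, not QEMU's API. */
    #include <stdio.h>
    #include <stddef.h>

    #define MAX_BLOCKS    8
    #define MAX_NOTIFIERS 4

    typedef void (*block_added_fn)(void *host, size_t size);

    static struct { void *host; size_t size; } blocks[MAX_BLOCKS];
    static size_t nblocks;
    static block_added_fn notifiers[MAX_NOTIFIERS];
    static size_t nnotifiers;

    /* Adding a RAM block notifies only already-registered listeners. */
    static void ram_block_add(void *host, size_t size)
    {
        blocks[nblocks].host = host;
        blocks[nblocks].size = size;
        nblocks++;
        for (size_t i = 0; i < nnotifiers; i++) {
            notifiers[i](host, size);
        }
    }

    /* A notifier registered late misses earlier blocks unless the
     * registration replays them - the hot-plug case described above. */
    static void notifier_add(block_added_fn fn, int replay_existing)
    {
        notifiers[nnotifiers++] = fn;
        if (replay_existing) {
            for (size_t i = 0; i < nblocks; i++) {
                fn(blocks[i].host, blocks[i].size);
            }
        }
    }

    /* Stand-in for the driver's "pin and map this RAM for DMA" hook. */
    static void vfio_map(void *host, size_t size)
    {
        printf("map %p (+%zu bytes) for DMA\n", host, size);
    }

    int main(void)
    {
        static char boot_ram[4096], hotplug_ram[4096];

        ram_block_add(boot_ram, sizeof(boot_ram));   /* added at boot */

        /* Device is hot-plugged: its notifier is installed only now.
         * Without replay it would never see boot_ram. */
        notifier_add(vfio_map, 1);

        ram_block_add(hotplug_ram, sizeof(hotplug_ram)); /* seen normally */
        return 0;
    }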
This bug didn't get any attention for a while, and we didn't have the capacity to make any progress. If you care deeply about it or want to work on it, please assign/target accordingly.