Bug 1590471

Summary: RFE: QEMU VFIO based block driver for NVMe devices (RHV)
Product: Red Hat Enterprise Virtualization Manager
Reporter: Ademar Reis <areis>
Component: vdsm
Assignee: Dan Kenigsberg <danken>
Status: CLOSED DEFERRED
QA Contact: Avihai <aefrat>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: unspecified
CC: dyuan, eskultet, lmen, lsurette, nyewale, srevivo, virt-bugs, virt-maint, xuzhang, ycui, yisun
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of: 1416182
Environment:
Last Closed: 2020-04-01 14:44:53 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1416180, 1416182, 1519004, 1519005
Bug Blocks:

Description Ademar Reis 2018-06-12 16:11:23 UTC
Not sure which component to use, so picking vdsm (I miss the RFE component)


+++ This bug was initially created as a clone of Bug #1416182 +++

+++ This bug was initially created as a clone of Bug #1416180 +++

This BZ tracks the upstream work currently being done by Fam to introduce a VFIO-based NVMe driver to QEMU:

https://lists.gnu.org/archive/html/qemu-devel/2016-12/msg02812.html

Date: Wed, 21 Dec 2016 00:31:35 +0800
From: Fam Zheng <famz>
Subject: [Qemu-devel] [PATCH 0/4] RFC: A VFIO based block driver for NVMe device

This series adds a new protocol driver that is intended to achieve about 20%
better performance for latency-bound workloads (i.e. synchronous I/O) than
linux-aio when the guest has exclusive access to an NVMe device, by talking to
the device directly instead of going through the kernel file system layers and
its NVMe driver.
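For context, the driver that eventually landed upstream is selected with an `nvme://` filename of the form `nvme://<PCI address>/<namespace>`. A minimal usage sketch, assuming the controller has first been detached from the host kernel's nvme driver and bound to vfio-pci (the PCI address `0000:01:00.0` and namespace `1` are placeholders):

```shell
# Detach the NVMe controller from the host kernel driver and bind it to
# vfio-pci (placeholder PCI address; run as root).
echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
echo vfio-pci    > /sys/bus/pci/devices/0000:01:00.0/driver_override
echo 0000:01:00.0 > /sys/bus/pci/drivers_probe

# Launch the guest with the NVMe device driven directly from userspace,
# bypassing the host kernel block layer (namespace 1 of the controller).
qemu-system-x86_64 \
    -enable-kvm -m 4G \
    -drive file=nvme://0000:01:00.0/1,if=none,id=nvme0 \
    -device virtio-blk-pci,drive=nvme0
```

Note that while QEMU holds the device this way, it is unavailable to the host.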

This applies on top of Stefan's block-next tree which has the busy polling
patches - the new driver also supports it.

A git branch is also available as:

    https://github.com/famz/qemu nvme

See patch 4 for benchmark numbers.

Tests were done on QEMU's NVMe emulation and a real Intel P3700 NVMe SSD.
Most dd/fio/mkfs/kernel-build and OS installation tests work well, but a
weird write fault looking similar to [1] is consistently seen when installing
a RHEL 7.3 guest, which is still under investigation.

[1]: http://lists.infradead.org/pipermail/linux-nvme/2015-May/001840.html

Also, the ram notifier is not enough for hot plugged block device because in
that case the notifier is installed _after_ ram blocks are added so it won't
get the events.

Comment 2 Sandro Bonazzola 2019-01-28 09:40:24 UTC
This bug has not been marked as blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 5 Michal Skrivanek 2020-03-18 15:44:00 UTC
This bug didn't get any attention for a while; we didn't have the capacity to make any progress. If you deeply care about it or want to work on it, please assign/target accordingly.

Comment 7 Michal Skrivanek 2020-04-01 14:44:53 UTC
OK, closing. Please reopen if this is still relevant or you want to work on it.
