Bug 1827750 - Add support for sharing PCI devices between nvme:// block driver instances
Summary: Add support for sharing PCI devices between nvme:// block driver instances
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Stefan Hajnoczi
QA Contact: Tingting Mao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-04-24 17:41 UTC by Stefan Hajnoczi
Modified: 2021-12-07 22:40 UTC (History)
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-10-24 07:26:51 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:



Description Stefan Hajnoczi 2020-04-24 17:41:14 UTC
The nvme:// block driver in QEMU currently requires exclusive access to the NVMe PCI adapter.  It already supports accessing just a specific NVMe namespace (a subset of the drive).

The nvme:// block driver needs to be extended to allow multiple instances to share a single NVMe PCI adapter within the same QEMU or qemu-storage-daemon process.  For example:
  -drive if=none,id=nvme0,file=nvme://0000:5e:00.0/1,format=raw
  -drive if=none,id=nvme1,file=nvme://0000:5e:00.0/2,format=raw

The driver should detect that 0000:5e:00.0 has already been opened by another instance and share it between nvme0 and nvme1.

The main design decision is whether requests should share queues or not.  Since multiple guests could be sharing a single NVMe PCI adapter when qemu-storage-daemon is used as the vhost-user-blk device backend, it is important that the quality of service remains good even if the guests do not cooperate with each other.
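
For illustration only (a sketch assuming the sharing support exists; node names and socket paths are invented here), a qemu-storage-daemon invocation could then export two namespaces of the same controller as separate vhost-user-blk devices:

  # both block nodes reference the same PCI adapter, but different namespaces
  qemu-storage-daemon \
    --blockdev nvme,node-name=ns1,device=0000:5e:00.0,namespace=1 \
    --blockdev nvme,node-name=ns2,device=0000:5e:00.0,namespace=2 \
    --export vhost-user-blk,id=exp1,node-name=ns1,addr.type=unix,addr.path=/tmp/vhost-ns1.sock \
    --export vhost-user-blk,id=exp2,node-name=ns2,addr.type=unix,addr.path=/tmp/vhost-ns2.sock

With the current driver, the second --blockdev is expected to fail because the first instance holds exclusive access to 0000:5e:00.0.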

Comment 2 Ademar Reis 2020-11-18 22:21:25 UTC
(In reply to Stefan Hajnoczi from comment #0)
> The nvme:// block driver in QEMU currently requires exclusive access to the
> NVMe PCI adapter.  It already supports accessing just a specific NVMe
> namespace (a subset of the drive).
> 
> The nvme:// block driver needs to be extended to allow multiple instances to
> share a single NVMe PCI adapter within the same QEMU or qemu-storage-daemon
> process.  For example:
>   -drive if=none,id=nvme0,file=nvme://0000:5e:00.0/1,format=raw
>   -drive if=none,id=nvme1,file=nvme://0000:5e:00.0/2,format=raw
> 
> The driver should detect that 0000:5e:00.0 has already been opened by
> another instance and share it between nvme0 and nvme1.
> 
> The main design decision is whether requests should share queues or not.
> Since multiple guests could be sharing a single NVMe PCI adapter when
> qemu-storage-daemon is used as the vhost-user-blk device backend, it is
> important that the quality of service remains good even if the guests do
> not cooperate with each other.


What about exposing this through libvirt? Do we need an additional entry in the XML?

For reference, right now the libvirt domain XML is as follows:

<disk type='nvme' device='disk'>
  <driver name='qemu' type='raw'/>
  <source type='pci' managed='yes' namespace='1'>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <target dev='vde' bus='virtio'/>
</disk>

The NVMe namespace can be selected on drives that support multiple namespaces using <source namespace='N'>.
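
As a purely hypothetical sketch (not an agreed design), if the QEMU side can share the adapter, libvirt might not need a new element at all: two <disk type='nvme'> entries could simply reference the same <address> with different namespace= values, e.g.:

<disk type='nvme' device='disk'>
  <driver name='qemu' type='raw'/>
  <source type='pci' managed='yes' namespace='1'>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <target dev='vde' bus='virtio'/>
</disk>
<disk type='nvme' device='disk'>
  <driver name='qemu' type='raw'/>
  <source type='pci' managed='yes' namespace='2'>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <target dev='vdf' bus='virtio'/>
</disk>

Whether libvirt would currently reject two disks pointing at the same PCI source address is part of the question.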

Comment 3 Stefan Hajnoczi 2020-12-17 13:07:38 UTC
(In reply to Ademar Reis from comment #2)
> What about exposing this through libvirt? Do we need an additional entry in
> the XML?
> 
> For reference, right now the libvirt domain XML is as follows:
> 
> <disk type='nvme' device='disk'>
>   <driver name='qemu' type='raw'/>
>   <source type='pci' managed='yes' namespace='1'>
>     <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
>   </source>
>   <target dev='vde' bus='virtio'/>
> </disk>
> 
> The NVMe namespace can be selected on drives that support multiple
> namespaces using <source namespace='N'>.

There is a BZ for the libvirt part here:
https://bugzilla.redhat.com/show_bug.cgi?id=1829865

Perhaps it should be split into separate NVMe namespace and raw offset=/size= pieces, since they are independent concepts.
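
As a sketch of what the raw offset=/size= piece could look like on the QEMU side (node names invented here; this relies on the existing offset/size options of the raw format driver), a sub-region of one namespace can be layered on top of an nvme node:

  # carve a 1 GiB region out of namespace 1 of the shared controller
  --blockdev nvme,node-name=nvme-ns1,device=0000:5e:00.0,namespace=1 \
  --blockdev raw,node-name=part1,file=nvme-ns1,offset=0,size=1073741824

Namespace selection and offset/size carving are then handled by separate drivers, which is why they can be treated as independent concepts.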

Comment 5 John Ferlan 2021-09-08 21:53:57 UTC
Bulk update: Move RHEL-AV bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

Comment 7 RHEL Program Management 2021-10-24 07:26:51 UTC
After evaluating this issue, we have no plans to address it further or fix it in an upcoming release, so it is being closed.  If plans change and this issue will be fixed in an upcoming release, the bug can be reopened.

