Bug 1827750
| Summary: | Add support for sharing PCI devices between nvme:// block driver instances | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | Stefan Hajnoczi <stefanha> |
| Component: | qemu-kvm | Assignee: | Stefan Hajnoczi <stefanha> |
| qemu-kvm sub component: | NVMe | QA Contact: | Tingting Mao <timao> |
| Status: | CLOSED WONTFIX | Docs Contact: | |
| Severity: | medium | | |
| Priority: | high | CC: | coli, jinzhao, juzhang, kkiwi, philmd, virt-maint, xuwei, yhong |
| Version: | 9.0 | Keywords: | FutureFeature, Triaged |
| Target Milestone: | rc | Flags: | pm-rhel: mirror+ |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-10-24 07:26:51 UTC | Type: | Feature Request |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Stefan Hajnoczi
2020-04-24 17:41:14 UTC
The nvme:// block driver in QEMU currently requires exclusive access to the NVMe PCI adapter. It already supports accessing just a specific NVMe namespace (a subset of the drive).

The nvme:// block driver needs to be extended to allow multiple instances to share a single NVMe PCI adapter within the same QEMU or qemu-storage-daemon process. For example:

    -drive if=none,id=nvme0,file=nvme://0000:5e:00.0/1,format=raw
    -drive if=none,id=nvme1,file=nvme://0000:5e:00.0/2,format=raw

The driver should detect that 0000:5e:00.0 has already been opened by another instance and share it between nvme0 and nvme1.

The main design decision is whether requests should share queues or not. Since multiple guests could be sharing a single NVMe PCI adapter when qemu-storage-daemon is used as the vhost-user-blk device, it is important that the quality of service is good even if the guests do not cooperate with each other.

Comment 2
Ademar Reis

(In reply to Stefan Hajnoczi from comment #0)

What about exposing this through libvirt? Do we need an additional entry in the XML?

For reference, right now the libvirt domain XML is as follows:

    <disk type='nvme' device='disk'>
      <driver name='qemu' type='raw'/>
      <source type='pci' managed='yes' namespace='1'>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <target dev='vde' bus='virtio'/>
    </disk>

The NVMe namespace can be selected on drives that support multiple namespaces using <source namespace='N'>.

(In reply to Ademar Reis from comment #2)

There is a BZ for the libvirt part here: https://bugzilla.redhat.com/show_bug.cgi?id=1829865

Perhaps it should be split into NVMe namespace and raw offset=/size= pieces, since they are independent concepts.

Bulk update: Move RHEL-AV bugs to RHEL9. If it is necessary to resolve this in RHEL 8, clone it to the current RHEL 8 release.

After evaluating this issue, there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.
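
For illustration, here is a minimal sketch of the qemu-storage-daemon configuration the description has in mind: two nvme blockdev instances on the same PCI adapter, each exported over vhost-user-blk for a separate guest. The PCI address, node names, and socket paths are placeholder values. Because this request was closed WONTFIX, the second --blockdev is expected to fail today, since the driver still takes exclusive access to the adapter; the sketch shows the requested behavior, not current behavior.

    # Sketch only: two nvme:// blockdev instances sharing one NVMe PCI adapter,
    # each exported as a vhost-user-blk device for a separate guest.
    # 0000:5e:00.0 and the socket paths are example values.
    qemu-storage-daemon \
      --blockdev driver=nvme,node-name=nvme0,device=0000:5e:00.0,namespace=1 \
      --blockdev driver=nvme,node-name=nvme1,device=0000:5e:00.0,namespace=2 \
      --export type=vhost-user-blk,id=export0,node-name=nvme0,addr.type=unix,addr.path=/tmp/vu0.sock,writable=on \
      --export type=vhost-user-blk,id=export1,node-name=nvme1,addr.type=unix,addr.path=/tmp/vu1.sock,writable=on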
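
The suggestion to split the libvirt work into NVMe namespace and raw offset=/size= pieces can be illustrated with QEMU options that already exist: the raw format driver's offset and size parameters carve a sub-range out of any node, independently of which NVMe namespace the underlying nvme node selects. A minimal sketch, again with placeholder values:

    # Sketch only: a raw node exposing the first 1 GiB of NVMe namespace 1.
    # offset=/size= belong to the existing raw format driver and are
    # independent of the namespace chosen by the nvme driver.
    qemu-storage-daemon \
      --blockdev driver=nvme,node-name=nvme0,device=0000:5e:00.0,namespace=1 \
      --blockdev driver=raw,node-name=slice0,file=nvme0,offset=0,size=1073741824 \
      --export type=vhost-user-blk,id=export0,node-name=slice0,addr.type=unix,addr.path=/tmp/vu0.sock,writable=on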