Bug 877838 - Support virtio-blk data-plane from qemu
Summary: Support virtio-blk data-plane from qemu
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Eric Blake
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 877836
Blocks: 824644 824650 1029596
 
Reported: 2012-11-19 02:04 UTC by Ademar Reis
Modified: 2013-11-12 17:13 UTC
CC List: 14 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 877836
Environment:
Last Closed: 2012-11-26 03:00:01 UTC
Target Upstream Version:
Embargoed:



Description Ademar Reis 2012-11-19 02:04:09 UTC
We probably need something like this:

<qemu:commandline>
   <qemu:arg value='-set'/>
   <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
</qemu:commandline>
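For the snippet above to be accepted by libvirt, the qemu XML namespace must be declared on the root <domain> element. A minimal sketch of how the passthrough would sit in a full domain definition (the "virtio-disk0" device id is assumed to match the alias libvirt generates for the disk in question):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... usual name/memory/os/devices elements ... -->
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.virtio-disk0.x-data-plane=on'/>
  </qemu:commandline>
</domain>
```

Without the xmlns:qemu declaration, libvirt silently drops (or rejects) the qemu:commandline element when the XML is defined.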


+++ This bug was initially created as a clone of Bug #877836 +++

We want a backport of the data-plane support on RHEL6, perhaps as a tech-preview feature in the first iteration.

From the latest patch submission from Stefan Hajnoczi:
(http://comments.gmane.org/gmane.comp.emulators.qemu/180530)

This series adds the -device virtio-blk-pci,x-data-plane=on property that
enables a high performance I/O codepath.  A dedicated thread is used to process
virtio-blk requests outside the global mutex and without going through the QEMU
block layer.

Khoa Huynh <khoa@us.ibm.com> reported an increase from 140,000 IOPS to 600,000
IOPS for a single VM using virtio-blk-data-plane in July:

  http://comments.gmane.org/gmane.comp.emulators.kvm.devel/94580

The virtio-blk-data-plane approach was originally presented at Linux Plumbers
Conference 2010.  The following slides contain a brief overview:

  http://linuxplumbersconf.org/2010/ocw/system/presentations/651/original/Optimizing_the_QEMU_Storage_Stack.pdf

The basic approach is:
1. Each virtio-blk device has a thread dedicated to handling ioeventfd
   signalling when the guest kicks the virtqueue.
2. Requests are processed without going through the QEMU block layer using
   Linux AIO directly.
3. Completion interrupts are injected via irqfd from the dedicated thread.

To try it out:

  qemu -drive if=none,id=drive0,cache=none,aio=native,format=raw,file=...
       -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on

Limitations:
 * Only format=raw is supported
 * Live migration is not supported
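As a concrete end-to-end sketch of the command above (the image path, size, and guest memory are illustrative, and the binary is assumed to be qemu-kvm as shipped on RHEL 6):

```shell
# Data-plane at this stage supports format=raw only, so create a raw image
qemu-img create -f raw /var/lib/libvirt/images/dataplane-test.img 8G

# Launch with cache=none and aio=native, as in the example above
qemu-kvm -m 1024 \
    -drive if=none,id=drive0,cache=none,aio=native,format=raw,file=/var/lib/libvirt/images/dataplane-test.img \
    -device virtio-blk-pci,drive=drive0,scsi=off,x-data-plane=on
```

Note that scsi=off is needed because the data-plane code path does not handle SCSI passthrough requests.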

Comment 2 Daniel Berrangé 2012-11-20 13:15:19 UTC
The following upstream mail thread does not make this sound like something we should support in libvirt at this time:

  https://lists.nongnu.org/archive/html/qemu-devel/2012-11/msg01945.html

>>> Users can take advantage of the virtio-blk-data-plane feature using the
>>> new -device virtio-blk-pci,x-data-plane=on property.
>>>
>>> The x-data-plane name was chosen because at this stage the feature is
>>> experimental and likely to see changes in the future.
>> 
>> Can you give some indication of how it is likely to change, since
>> this has a bearing on any libvirt use of this feature ?
>
>I suppose the intended semantics is "libvirt, don't touch this!"
>
>Maybe we could document the x-... prefix for experimental features that
>may be changed in incompatible ways or removed in future versions, and
>that no management tools should use.

Comment 6 Daniel Berrangé 2012-11-20 14:33:44 UTC
QEMU upstream has further clarified their wishes wrt libvirt usage:

> The following expectations:
>
> 1. This is an experimental feature.  It can be enabled through libvirt
>   using <qemu:commandline>.
>
> 2. There is ongoing work to break down the global mutex in QEMU, which
>   will allow virtio-blk-data-plane functionality to become the
>   virtio-blk emulation default.  At that point no command-line options
>   will be necessary (migration and image formats will be supported).
>
>So I think there's no need for libvirt to do anything here.

Comment 7 Dave Allan 2012-11-20 18:34:03 UTC
Dan (Yasny), you ok with using the qemu commandline passthrough?  If so, I think we can close this BZ.

Comment 8 Dan Yasny 2012-11-22 12:28:58 UTC
(In reply to comment #7)
> Dan (Yasny), you ok with using the qemu commandline passthrough?  If so, I
> think we can close this BZ.

I'd rather have a proper API, but this will probably be good enough, especially if we're going to bring data-plane into virtio-blk permanently rather than as an option.

Comment 9 Dave Allan 2012-11-26 03:00:01 UTC
Ok, I'll close this BZ as no work to do at this point, and we can always reopen if needed.

