Bug 1720016 - Adding Virtio network device in "passthrough" mode is not equivalent to true PCI passthrough performance
Summary: Adding Virtio network device in "passthrough" mode is not equivalent to true PCI passthrough performance
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: Virtualization Tools
Classification: Community
Component: virt-manager
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Cole Robinson
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-12 23:01 UTC by quincy.wofford
Modified: 2020-01-26 19:12 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-26 19:12:18 UTC
Embargoed:


Attachments (Terms of Use)
Performance results which indicate that virtio "passthrough" is not equivalent to true passthrough (15.62 KB, image/png)
2019-06-12 23:01 UTC, quincy.wofford

Description quincy.wofford 2019-06-12 23:01:01 UTC
Created attachment 1579987 [details]
Performance results which indicate that virtio "passthrough" is not equivalent to true passthrough

Description of problem:
virt-manager allows the user to add a network device with virtio in "passthrough" mode, but that mode is not full PCI passthrough, and it introduces some latency as a consequence. If the user instead adds a PCI host device, that is "true" PCI passthrough and performs appropriately. virt-manager should warn that virtio "passthrough" is not equivalent to true PCI passthrough in performance.

Version-Release number of selected component (if applicable):
virt-manager 1.4.0

How reproducible:
Very, if the host supports VT-x, virtualization support is enabled in the BIOS, and the appropriate kernel flag is specified at boot (intel_iommu=on for Debian hosts)
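The boot-flag precondition above can be checked before reproducing. A minimal sketch (the sample cmdline in the fallback message is illustrative; AMD hosts would use amd_iommu=on instead):

```shell
# Verify the IOMMU flag is present on the running kernel's command line
grep -o 'intel_iommu=on' /proc/cmdline || echo 'intel_iommu=on missing from kernel cmdline'
# Optionally confirm the kernel initialized the IOMMU (may need root; harmless if empty)
dmesg 2>/dev/null | grep -i -e DMAR -e IOMMU | head
```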

Steps to Reproduce:
1. Open virt-manager
2. Start VM
3. Open hardware
4. Remove existing network device
5. Add Network device, select NIC from dropdown menu, select virtio from dropdown menu
6. Observe config in XML via virsh:
...
    <interface type='direct'>
      <mac address='52:54:00:f0:d0:cd'/>
      <source dev='enp5s0' mode='passthrough'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
...
7. Ping some external host, observe latency.
8. Remove network device, add PCI Host device, select NIC from menu, click finish
9. Observe config:
...
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
...
10. Reboot VM, test latency. 
11. Observe that this config, true PCI passthrough, is significantly faster.
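Steps 8-9 can also be done from the command line instead of the virt-manager UI. A sketch, assuming a shut-off or running domain named "demo-vm" (a placeholder; the MAC and PCI address are the ones from the XML snippets above and must be adjusted for your host):

```shell
# Build a <hostdev> fragment for true PCI passthrough of the host NIC
cat > hostdev.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
</hostdev>
EOF
# Remove the virtio "passthrough" interface, then attach the NIC directly.
# --config edits the persistent definition, so it takes effect on next boot.
# (|| true: these fail harmlessly if "demo-vm" does not exist on this host)
if command -v virsh >/dev/null; then
  virsh detach-interface demo-vm --type direct --mac 52:54:00:f0:d0:cd --config || true
  virsh attach-device demo-vm hostdev.xml --config || true
fi
```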

I've attached 300 samples of pings sent to a Gb-connected server, for native pings from the host OS, true PCI passthrough pings, virtio "passthrough" pings, and virtual network pings.
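One way such samples could be gathered and summarized (a sketch, not the reporter's exact method; 192.0.2.10 is an RFC 5737 placeholder for the Gb-connected server, and the rtt line written below is an illustrative stand-in for real ping output):

```shell
TARGET=${TARGET:-192.0.2.10}   # placeholder address; set to the real server
# Collect 300 samples (about a minute at 0.2 s intervals):
#   ping -c 300 -i 0.2 "$TARGET" > samples.txt
# Extract the average RTT from ping's summary line, e.g.:
printf 'rtt min/avg/max/mdev = 0.120/0.180/0.420/0.050 ms\n' > samples.txt
awk -F'/' '/^rtt/ {print $5}' samples.txt   # prints the avg column: 0.180
```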

Actual results:
Performance that is not representative of true PCI passthrough.

Expected results:
Either latency that is representative of full PCI passthrough, or some label other than simply "passthrough". The current wording is misleading.

Additional info:

Comment 1 Cole Robinson 2020-01-26 19:12:18 UTC
I'm not super familiar with the macvtap modes, but I don't think it's expected that 'passthrough' mode gives performance as good as PCI passthrough. Even if it is, it's not virt-manager's issue to fix; it would likely be a kernel or qemu issue. If you want clarification, the best place to ask is probably the qemu-users or qemu-devel list. Closing this as CANTFIX for virt-manager.

