Bug 1720016

Summary: Adding Virtio network device in "passthrough" mode is not equivalent to true PCI passthrough performance
Product: [Community] Virtualization Tools
Component: virt-manager
Reporter: quincy.wofford
Assignee: Cole Robinson <crobinso>
Status: CLOSED CANTFIX
Severity: medium
Priority: unspecified
Version: unspecified
CC: berrange, crobinso, gscrivan, tburke
Hardware: x86_64
OS: Linux
Type: Bug
Last Closed: 2020-01-26 19:12:18 UTC
Attachments: Performance results which indicate that virtio "passthrough" is not equivalent to true passthrough

Description quincy.wofford 2019-06-12 23:01:01 UTC
Created attachment 1579987 [details]
Performance results which indicate that virtio "passthrough" is not equivalent to true passthrough

Description of problem:
virt-manager allows the user to add a network device with virtio in "passthrough" mode, but this mode is not full PCI passthrough, and it introduces measurable latency as a consequence. If the user instead adds a PCI host device, that is "true" PCI passthrough and performs as expected. virt-manager should warn that virtio "passthrough" is not equivalent to true PCI passthrough in performance.

Version-Release number of selected component (if applicable):
virt-manager 1.4.0

How reproducible:
Very reproducible, provided the host supports VT-x (and VT-d for PCI passthrough), virtualization support is enabled in the BIOS, and the appropriate kernel flag is passed at boot (intel_iommu=on for Debian hosts)
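For reference, a quick check (my own sketch, not part of virt-manager) that the IOMMU kernel flag mentioned above actually took effect on the host:

```python
# Sketch: verify that intel_iommu=on (or amd_iommu=on) is present in the
# running kernel's command line. Purely illustrative; not virt-manager code.

def iommu_flag_present(cmdline: str) -> bool:
    """Return True if an IOMMU-enabling flag appears in the kernel
    command line string (whitespace-separated tokens)."""
    flags = cmdline.split()
    return "intel_iommu=on" in flags or "amd_iommu=on" in flags

try:
    # /proc/cmdline holds the boot-time kernel command line on Linux.
    with open("/proc/cmdline") as f:
        print("IOMMU flag present:", iommu_flag_present(f.read()))
except OSError:
    pass  # not running on Linux
```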

Steps to Reproduce:
1. Open virt-manager
2. Start VM
3. Open hardware
4. Remove existing network device
5. Add Network device, select NIC from dropdown menu, select virtio from dropdown menu
6. Observe config in XML via virsh:
...
    <interface type='direct'>
      <mac address='52:54:00:f0:d0:cd'/>
      <source dev='enp5s0' mode='passthrough'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
...
7. Ping some external host, observe latency.
8. Remove network device, add PCI Host device, select NIC from menu, click finish
9. Observe config:
...
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
...
10. Reboot VM, test latency. 
11. Observe that this config, true PCI passthrough, is significantly faster.

I've attached 300 samples of pings sent to a Gb connected server for native pings from host OS, true PCI passthrough pings, virtio "passthrough" pings, and virtual network pings.
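For context, this is roughly how such ping samples can be reduced to summary statistics (my own sketch; the RTT values below are placeholders, not the attachment data):

```python
# Sketch: extract round-trip times from `ping` output and summarize them.
# The example input is fabricated; the real data is in the attachment.
import re
import statistics

def rtt_samples(ping_output: str) -> list:
    """Extract RTTs in ms from ping output lines such as
    '64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms'."""
    return [float(m) for m in re.findall(r"time=([\d.]+) ms", ping_output)]

def summarize(samples: list) -> dict:
    """Return min/mean/stdev/max of the RTT samples, in ms."""
    return {
        "min": min(samples),
        "mean": statistics.mean(samples),
        "stdev": statistics.stdev(samples),
        "max": max(samples),
    }

# Placeholder output lines standing in for 300 real samples.
example = "\n".join(
    f"64 bytes from 10.0.0.1: icmp_seq={i} ttl=64 time=0.2{i} ms"
    for i in range(1, 4)
)
print(summarize(rtt_samples(example)))
```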

Actual results:
Performance that is not representative of true PCI passthrough

Expected results:
Either latency representative of full PCI passthrough, or some label other than simply "passthrough"; the word is misleading.

Additional info:

Comment 1 Cole Robinson 2020-01-26 19:12:18 UTC
I'm not super familiar with the macvtap modes, but I don't think 'passthrough' mode is expected to perform as well as PCI passthrough. Even if it were, it's not virt-manager's issue to fix; it would likely be a kernel or QEMU matter. If you want clarification, the best place to ask is probably the qemu-users or qemu-devel list. Closing this as CANTFIX for virt-manager.