Bug 1715724

Summary: NICs in the same iommu_group cannot be assigned to a Win2019 guest at the same time
Product: Red Hat Enterprise Linux 8
Reporter: Lei Yang <leiyang>
Component: qemu-kvm
Assignee: Alex Williamson <alex.williamson>
Status: CLOSED DUPLICATE
QA Contact: Pei Zhang <pezhang>
Severity: high
Priority: high
Version: 8.0
CC: alex.williamson, chayang, jinzhao, juzhang, pezhang, rbalakri, ribarry, virt-maint
Target Milestone: rc   
Target Release: 8.0   
Hardware: Unspecified   
OS: Unspecified   
Last Closed: 2019-05-31 15:37:24 UTC
Type: Bug

Description Lei Yang 2019-05-31 05:23:02 UTC
Description of problem:
NICs in the same iommu_group cannot be assigned to the same Win2019 guest at the same time, but each NIC can be assigned individually to the Win2019 guest.

Version-Release number of selected component (if applicable):
kernel-4.18.0-80.4.1.el8_0.x86_64
qemu-kvm-3.1.0-27.module+el8.0.1+3253+c5371cb3.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Unbind the NICs in the same iommu_group from their host driver and bind them to vfio-pci (an equivalent sysfs-only binding is sketched after the steps).
# modprobe vfio-pci
# dpdk-devbind --bind=vfio-pci 0000:83:00.0
# dpdk-devbind --bind=vfio-pci 0000:83:00.1

2. Boot the Win2019 guest with both PFs assigned.
QEMU cli:
/usr/libexec/qemu-kvm -name win2019 \
-M q35,kernel-irqchip=split -m 4G \
-nodefaults \
-cpu Haswell-noTSX \
-device intel-iommu,intremap=true,caching-mode=true \
-smp 4,sockets=1,cores=4,threads=1 \
-device pcie-root-port,id=root.1,chassis=1 \
-device pcie-root-port,id=root.2,chassis=2 \
-device pcie-root-port,id=root.3,chassis=3 \
-device pcie-root-port,id=root.4,chassis=4 \
-device pcie-root-port,id=root.5,chassis=5 \
-blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=/home/win2019_ovmf.qcow2,node-name=my_file \
-drive id=drive_cd1,if=none,snapshot=off,aio=native,cache=none,media=cdrom,file=/home/en_windows_server_2019_updated_march_2019_x64_dvd_2ae967ab.iso \
-blockdev driver=qcow2,node-name=my,file=my_file \
-device virtio-blk-pci,drive=my,id=virtio-blk0,bus=root.1 \
-device ide-cd,id=cd1,drive=drive_cd1,bus=ide.0,unit=0 \
-drive id=drive_winutils,if=none,snapshot=off,aio=native,cache=none,media=cdrom,file=/usr/share/virtio-win/virtio-win-1.9.8.iso \
-device ide-cd,id=winutils,drive=drive_winutils,bus=ide.1,unit=0 \
-vnc :1 \
-vga qxl \
-monitor stdio \
-qmp tcp:0:5555,server,nowait \
-usb -device usb-tablet \
-boot menu=on \
-device vfio-pci,id=pf2,host=83:00.1,bus=root.5,rombar=0 \
-device vfio-pci,id=pf1,host=83:00.0,bus=root.4,rombar=0
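
For step 1, an equivalent binding can be done with plain sysfs instead of dpdk-devbind. A minimal sketch using the standard driver_override interface (the current host driver name does not matter for the unbind), followed by a quick check that both functions ended up on vfio-pci and that a numbered group node appeared under /dev/vfio/ (42 on this host, per the error below):

# modprobe vfio-pci
# echo vfio-pci > /sys/bus/pci/devices/0000:83:00.0/driver_override
# echo 0000:83:00.0 > /sys/bus/pci/devices/0000:83:00.0/driver/unbind
# echo 0000:83:00.0 > /sys/bus/pci/drivers_probe
# echo vfio-pci > /sys/bus/pci/devices/0000:83:00.1/driver_override
# echo 0000:83:00.1 > /sys/bus/pci/devices/0000:83:00.1/driver/unbind
# echo 0000:83:00.1 > /sys/bus/pci/drivers_probe
# ls -l /sys/bus/pci/devices/0000:83:00.0/driver /sys/bus/pci/devices/0000:83:00.1/driver
# ls /dev/vfio/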

Actual results:
vfio 0000:83:00.0: group 42 used in multiple address spaces

Expected results:
The Win2019 guest works well with both NICs assigned.

Additional info:
1. Both OVMF and SeaBIOS show the same issue.

2. NIC info:

# lspci |grep Eth
83:00.0 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)
83:00.1 Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme II BCM57810 10 Gigabit Ethernet (rev 10)

# ls /sys/bus/pci/devices/0000\:83\:00.0/iommu_group/devices/
0000:83:00.0  0000:83:00.1
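
As a cross-check (output elided here), the iommu_group symlinks of both functions resolve to the same group, matching the "group 42" in the error above and the single /dev/vfio/<group> node that QEMU opens for the pair:

# readlink /sys/bus/pci/devices/0000:83:00.0/iommu_group
# readlink /sys/bus/pci/devices/0000:83:00.1/iommu_group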

Comment 2 Alex Williamson 2019-05-31 15:37:24 UTC
The expectation is wrong here. When a VM is configured with intel-iommu, each assigned device is placed into a separate address space, which is in direct conflict with the vfio model, where containers define an address space and the granularity with which we can attach devices to a container is a group.  QEMU does not currently provide DMA aliasing support, which would allow very specific configurations of multiple devices within the same IOMMU group to be attached to a VM configured with intel-iommu when those devices are aliased to the same address space, such as by a conventional PCI bus.
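
To illustrate the constraint above: without the emulated intel-iommu, all assigned devices share the single container address space, so both functions of the group are expected to be attachable to one VM (not verified in this report). A minimal sketch of such a command line, reusing the reporter's device IDs and omitting the unrelated disk, display and cdrom options:

/usr/libexec/qemu-kvm -name win2019 \
-M q35 -m 4G -nodefaults \
-cpu Haswell-noTSX \
-smp 4,sockets=1,cores=4,threads=1 \
-device pcie-root-port,id=root.4,chassis=4 \
-device pcie-root-port,id=root.5,chassis=5 \
... \
-device vfio-pci,id=pf1,host=83:00.0,bus=root.4,rombar=0 \
-device vfio-pci,id=pf2,host=83:00.1,bus=root.5,rombar=0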

Cloning to the relevant RFE; the behavior described here matches current expectations.

*** This bug has been marked as a duplicate of bug 1627499 ***