Bug 1653327 - libvirt: Implement virtio-iommu support for aarch64
Summary: libvirt: Implement virtio-iommu support for aarch64
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.0
Hardware: aarch64
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: beta
Target Release: 9.1
Assignee: Andrea Bolognani
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1477099
Blocks: 1543699 1683831 1811148 1924294
 
Reported: 2018-11-26 14:44 UTC by Andrea Bolognani
Modified: 2022-11-15 10:37 UTC (History)
CC List: 20 users

Fixed In Version: libvirt-8.3.0-1.el9
Doc Type: Enhancement
Doc Text:
Clone Of: 1477099
Environment:
Last Closed: 2022-11-15 10:03:03 UTC
Type: Feature Request
Target Upstream Version: 8.3.0
Embargoed:




Links:
  Red Hat Product Errata RHSA-2022:8003 (Last Updated: 2022-11-15 10:03:48 UTC)

Description Andrea Bolognani 2018-11-26 14:44:52 UTC
+++ This bug was initially created as a clone of Bug #1477099 +++

Exposing a virtual IOMMU to a QEMU/KVM guest has been enabled on several architectures, and ARM support is looming. This is required for DPDK nested device assignment, nested virtualization, and virtio traffic isolation.

On ARM, two approaches are considered: full QEMU SMMUv3 emulation (covered by BZ1430408) and a virtio paravirtualized approach. Full emulation is the solution traditionally adopted by other architectures, while the second is a new approach backed by ARM kernel maintainers.

This BZ tracks the status of the virtio-iommu/ARM proof of concept.
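
For reference, the guest configuration this feature eventually enables is a single iommu element in the domain XML, as exercised in the verification comments below (the explicit PCI address is optional and shown only for illustration):

    <iommu model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </iommu>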

Comment 10 Andrea Bolognani 2021-10-08 16:56:31 UTC
Patches posted upstream.

  https://listman.redhat.com/archives/libvir-list/2021-October/msg00459.html

Comment 11 Andrea Bolognani 2021-10-20 13:05:19 UTC
v2 patches posted upstream.

  https://listman.redhat.com/archives/libvir-list/2021-October/msg00784.html

Comment 12 Andrea Bolognani 2021-10-20 15:29:24 UTC
The patches have been ACKed upstream, and I've already pushed the
first few ones - those that were just setting the stage for
implementing the feature and made sense even as standalone cleanups.

I'll push the remaining ones once the QEMU part is merged upstream.

Comment 13 RHEL Program Management 2022-03-15 07:27:21 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 14 Luiz Capitulino 2022-03-15 11:52:25 UTC
Reopening, this is planned for 9.1.

Comment 16 Andrea Bolognani 2022-03-18 17:12:52 UTC
v3 patches, mistakenly marked as v2, posted upstream.

  https://listman.redhat.com/archives/libvir-list/2022-March/229397.html

Comment 17 Andrea Bolognani 2022-03-18 17:17:20 UTC
(In reply to Andrea Bolognani from comment #16)
> v3 patches, mistakenly marked as v2, posted upstream.
> 
>   https://listman.redhat.com/archives/libvir-list/2022-March/229397.html

Eric, can you give these a try? The switch to JSON syntax for
-device, which is the only significant user-visible difference
compared to the previous version, should not cause any issues, but
it'd still be nice to get confirmation of that. I can pick up your
Tested-by tags if you provide them. Thanks!
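
For context, the change only affects how libvirt spells the device on the generated QEMU command line; a rough illustration follows, where the legacy spelling is an assumption for comparison and the JSON spelling matches the command lines captured later in this bug:

  # legacy (pre-patch) option syntax
  -device virtio-iommu,bus=pcie.0,addr=0x3

  # JSON option syntax generated by the new patches
  -device {"driver":"virtio-iommu","bus":"pcie.0","addr":"0x3"}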

Comment 18 Eric Auger 2022-03-19 10:56:36 UTC
(In reply to Andrea Bolognani from comment #17)
> (In reply to Andrea Bolognani from comment #16)
> > v3 patches, mistakenly marked as v2, posted upstream.
> > 
> >   https://listman.redhat.com/archives/libvir-list/2022-March/229397.html
> 
> Eric, can you give these a try? The switch to JSON syntax for
> -device, which is the only significant user-visible difference
> compared to the previous version, should not cause any issues, but
> it'd still be nice to get confirmation of that. I can pick up your
> Tested-by tags if you provide them. Thanks!

Hi Andrea, yes, sure, I will be happy to test. Can you provide me with a brew build or guide me through a libvirt compilation from source (I saw you provided your branch link)? What is the simplest?

Thanks

Eric

Comment 19 Andrea Bolognani 2022-03-21 12:59:26 UTC
(In reply to Eric Auger from comment #18)
> (In reply to Andrea Bolognani from comment #17)
> > Eric, can you give these a try? The switch to JSON syntax for
> > -device, which is the only significant user-visible difference
> > compared to the previous version, should not cause any issues, but
> > it'd still be nice to get confirmation of that. I can pick up your
> > Tested-by tags if you provide them. Thanks!
> 
> Hi Andrea, yes, sure, I will be happy to test. Can you provide me with a
> brew build or guide me through a libvirt compilation from source (I saw you
> provided your branch link)? What is the simplest?

I actually realized that you don't need any specific hardware to test
these, which for some reason I was convinced was the case. So I just
went ahead and tested the changes myself - everything seems to be
fine.

In case you still want to play around with the patches, and for
future reference, the easiest way to test a random libvirt branch is
to build RPMs from it, which roughly looks like

  $ mkdir build && cd build
  $ meson .. && meson dist --no-tests
  $ rpmbuild -ta meson-dist/libvirt-*.tar.xz

To figure out the build dependencies, you can look into

  ci/containers/centos-stream-*.Dockerfile

Note that a few packages listed in the file are not available in
RHEL, but none of them is a hard requirement, so you can safely skip
them. You should also be able to avoid enabling the extra
repositories.

If you have installed your custom build of QEMU under /usr/local,
then you'll need to run something like

  $ for w in own mod con; do
      sudo ch$w --reference /usr/libexec/qemu-kvm \
                /usr/local/bin/qemu-system-aarch64;
    done

or libvirt will be unable to use it. You might be aware of this
already, but I've included it here for completeness' sake :)
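
To complete the picture, a minimal sketch of installing and activating the resulting RPMs - assuming rpmbuild's default output directory and the monolithic libvirtd service; adjust the paths and package set to whatever the build actually produced:

  $ sudo dnf upgrade ~/rpmbuild/RPMS/$(uname -m)/libvirt*.rpm
  $ sudo systemctl restart libvirtd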

Comment 20 Andrea Bolognani 2022-04-01 18:10:51 UTC
v4 patches posted upstream.

  https://listman.redhat.com/archives/libvir-list/2022-April/229810.html

Comment 21 Andrea Bolognani 2022-04-04 08:50:57 UTC
Merged upstream.

  commit 19734c3050c70dc3452c48af70558f5a06152031
  Author: Andrea Bolognani <abologna>
  Date:   Fri Sep 24 19:29:37 2021 +0200

    qemu: Generate command line for virtio-iommu
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1653327
    
    Signed-off-by: Andrea Bolognani <abologna>
    Reviewed-by: Ján Tomko <jtomko>

  v8.2.0-51-g19734c3050

It will be in libvirt 8.3.0, releasing early next month.

Comment 23 Yiding Liu (Fujitsu) 2022-04-08 03:33:42 UTC
Hi Andrea and Eric. 

I used the upstream qemu and libvirt to test in advance.

libvirt: e53c02ea20 (HEAD -> master, origin/master, origin/HEAD) virportallocator: Use automatic mutex management
qemu: f53faa70bb (HEAD -> master, origin/staging, origin/master, origin/HEAD) Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging

Parts of guest xml:
```
    <iommu model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </iommu>

```

Then start guest.
a. Check qemu cmdline
cmdline has virtio-iommu element
```
-device {"driver":"virtio-iommu","bus":"pcie.0","addr":"0x3"}
```

b. Check guest dmesg
```
# dmesg | grep iommu
[    2.613310] iommu: Default domain type: Translated
[    2.614874] iommu: DMA domain TLB invalidation policy: lazy mode
[    2.949607] virtio_iommu virtio0: input address: 64 bits
[    2.951688] virtio_iommu virtio0: page mask: 0xfffffffffffff000
[    2.985120] xhci_hcd 0000:02:00.0: Adding to iommu group 0
[    2.986845] iommu: Failed to allocate default IOMMU domain of type 11 for group (null) - Falling back to IOMMU_DOMAIN_DMA                                                                 
[    4.035370] pcieport 0000:00:01.0: Adding to iommu group 1
[    4.037157] iommu: Failed to allocate default IOMMU domain of type 11 for group (null) - Falling back to IOMMU_DOMAIN_DMA                                                                 
[    4.056668] pcieport 0000:00:01.1: Adding to iommu group 1
[    4.070602] pcieport 0000:00:01.2: Adding to iommu group 1
[    4.087266] pcieport 0000:00:01.3: Adding to iommu group 1
[    4.103366] pcieport 0000:00:01.4: Adding to iommu group 1
[    4.119086] pcieport 0000:00:01.5: Adding to iommu group 1
[    4.135109] pcieport 0000:00:01.6: Adding to iommu group 1
[    4.151071] pcieport 0000:00:01.7: Adding to iommu group 1
[    4.167282] pcieport 0000:00:02.0: Adding to iommu group 2
[    4.169051] iommu: Failed to allocate default IOMMU domain of type 11 for group (null) - Falling back to IOMMU_DOMAIN_DMA                                                                 
[    4.190429] pcieport 0000:00:02.1: Adding to iommu group 2
[    4.206620] pcieport 0000:00:02.2: Adding to iommu group 2
[    4.222490] virtio-pci 0000:01:00.0: Adding to iommu group 1
[    4.232386] virtio-pci 0000:03:00.0: Adding to iommu group 1
[    4.242856] virtio-pci 0000:04:00.0: Adding to iommu group 1
[    4.253576] virtio-pci 0000:05:00.0: Adding to iommu group 1
[    4.259425] virtio-pci 0000:06:00.0: Adding to iommu group 1
[    4.268598] virtio-pci 0000:07:00.0: Adding to iommu group 1
```

I just did the most basic test; if there is anything else I need to test, please tell me.

Comment 24 Andrea Bolognani 2022-05-04 09:09:37 UTC
(In reply to Yiding Liu (Fujitsu) from comment #23)
> I just did the most basic test; if there is anything else I need to test,
> please tell me.

Hi,

apologies for the long delay in replying. Your message fell through
the cracks somehow.

What you did for testing looks reasonable enough for me - in fact,
it's pretty much exactly what I did during development :)

Eric, any additional thoughts?

Comment 25 Eric Auger 2022-05-04 09:24:36 UTC
The downstream qemu-kvm should be usable to test the feature, as the following patches are now there:

d9c96f2425  virtio-iommu: Support bypass domain (8 weeks ago) <Jean-Philippe Brucker>
448179e33e  virtio-iommu: Default to bypass during boot (8 weeks ago) <Jean-Philippe Brucker>

What did you use as a guest? The boot bypass feature is not yet supported downstream: I guess you used an upstream kernel?

I think the test should feature both a virtio block pci device and virtio net pci device protected by the virtio-iommu. A virtio-gpu may be a plus as well.
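
As a concrete sketch of devices "protected by the virtio-iommu" in libvirt terms - device types and paths here are illustrative assumptions; the <driver iommu='on'/> attribute is what the driver.iommu=on virt-install options used later in this bug translate to:

    <iommu model='virtio'/>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
      <driver iommu='on'/>
    </interface>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' iommu='on'/>
      <source file='/var/lib/libvirt/images/guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>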

Check the iommu groups on the guest
find /sys/kernel/iommu_groups/

You can also stimulate the protected NIC using dnf install <pkg> or any script like:

#!/bin/bash

for i in `seq 1 30`
do
    echo "iteration $i"
    if [ -d netperf ]; then
        rm -rf netperf
    fi
    git clone https://github.com/HewlettPackard/netperf.git netperf
    if [ -d iperf ]; then
        rm -rf iperf
    fi
    git clone https://github.com/esnet/iperf.git iperf
done

Doing a whole VM install with the virtio-iommu would be perfect as well.

Thanks

Eric

Comment 26 Yiding Liu (Fujitsu) 2022-05-06 02:49:24 UTC
(In reply to Eric Auger from comment #25)
> the downstream qemu-kvm shall be usable to test the feature as the following
> patches are now there:
> 
> d9c96f2425  virtio-iommu: Support bypass domain (8 weeks ago) <Jean-Philippe
> Brucker>
> 448179e33e  virtio-iommu: Default to bypass during boot (8 weeks ago)
> <Jean-Philippe Brucker>
> 
> What did you use as a guest? The boot bypass feature is not yet supported
> downstream: I guess you used an upstream kernel?

No, I used a RHEL9 guest to test.

> 
> I think the test should feature both a virtio block pci device and virtio
> net pci device protected by the virtio-iommu. A virtio-gpu may be a plus as
> well.
> 
> Check the iommu groups on the guest
> find /sys/kernel/iommu_groups/
> 
> You can also stimulate the protected NIC using dnf install <pkg> or any
> script like:
> 
> #!/bin/bash
> 
> for i in `seq 1 30`
> do
>     echo "iteration $i"
>     if [ -d netperf ]; then
>         rm -rf netperf
>     fi
>     git clone https://github.com/HewlettPackard/netperf.git netperf
>     if [ -d iperf ]; then
>         rm -rf iperf
>     fi
>     git clone https://github.com/esnet/iperf.git iperf
> done
> 
> Doing a whole VM install with the virtio-iommu would be perfect as well.

Ok, got it. Thanks for your advice.

> 
> Thanks
> 
> Eric

Comment 27 Hu Shuai (Fujitsu) 2022-05-18 06:34:27 UTC
Pre Verify Env:
Host:
 kernel-5.14.0-92.el9.aarch64
 libvirt-8.3.0-1.el9.aarch64
 qemu-kvm-7.0.0-3.el9.aarch64
Guest:
 kernel-5.14.0-92.el9.aarch64

Pre Verify Result:
a. Guest can successfully start with the virtio-iommu
Parts of guest xml:
```
    <iommu model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </iommu>
```
b. dmesg | grep iommu
```
# dmesg | grep iommu
[    3.316754] iommu: Default domain type: Translated 
[    3.318171] iommu: DMA domain TLB invalidation policy: lazy mode 
[    3.740987] virtio_iommu virtio0: input address: 64 bits
[    3.742534] virtio_iommu virtio0: page mask: 0xfffffffffffff000
[    3.799342] xhci_hcd 0000:02:00.0: Adding to iommu group 0
[    3.800934] iommu: Failed to allocate default IOMMU domain of type 11 for group (null) - Falling back to IOMMU_DOMAIN_DMA
[    6.045464] pcieport 0000:00:01.0: Adding to iommu group 1
[    6.047995] iommu: Failed to allocate default IOMMU domain of type 11 for group (null) - Falling back to IOMMU_DOMAIN_DMA
[    6.098540] pcieport 0000:00:01.1: Adding to iommu group 1
[    6.117988] pcieport 0000:00:01.2: Adding to iommu group 1
[    6.143471] pcieport 0000:00:01.3: Adding to iommu group 1
[    6.165886] pcieport 0000:00:01.4: Adding to iommu group 1
[    6.187972] pcieport 0000:00:01.5: Adding to iommu group 1
[    6.220452] pcieport 0000:00:01.6: Adding to iommu group 1
[    6.254336] pcieport 0000:00:01.7: Adding to iommu group 1
[    6.272365] pcieport 0000:00:02.0: Adding to iommu group 2
[    6.273923] iommu: Failed to allocate default IOMMU domain of type 11 for group (null) - Falling back to IOMMU_DOMAIN_DMA
[    6.299766] pcieport 0000:00:02.1: Adding to iommu group 2
[    6.316418] pcieport 0000:00:02.2: Adding to iommu group 2
[    6.333190] virtio-pci 0000:01:00.0: Adding to iommu group 1
[    6.344131] virtio-pci 0000:03:00.0: Adding to iommu group 1
[    6.355628] virtio-pci 0000:04:00.0: Adding to iommu group 1
[    6.367419] virtio-pci 0000:05:00.0: Adding to iommu group 1
[    6.373608] virtio-pci 0000:06:00.0: Adding to iommu group 1
[    6.383215] virtio-pci 0000:07:00.0: Adding to iommu group 1
```
c. the iommu groups on the guest
```
# ll /sys/kernel/iommu_groups/
total 0
drwxr-xr-x. 3 root root 0 May 18 14:12 0
drwxr-xr-x. 3 root root 0 May 18 14:12 1
drwxr-xr-x. 3 root root 0 May 18 14:12 2
```

Comment 28 Eric Auger 2022-05-18 06:47:32 UTC
2 things:

I see this xml snippet, suggesting that the virtio-iommu-pci is potentially pluggable at any PCIe address. An issue was reported when the latter is instantiated anywhere other than on the root bus (on a pcie root port, for instance) using the qemu interface (BZ2087155). Most probably, given Andrea's info, such a setting is forbidden by libvirt. However, it would be nice to double check that this is handled: i.e. try to instantiate it on a pcie root port instead.

    <iommu model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </iommu>


> # ll /sys/kernel/iommu_groups/
> total 0
> drwxr-xr-x. 3 root root 0 May 18 14:12 0
> drwxr-xr-x. 3 root root 0 May 18 14:12 1
> drwxr-xr-x. 3 root root 0 May 18 14:12 2
> ```
I am surprised to see no group here.

Comment 29 Eric Auger 2022-05-18 06:50:18 UTC
(In reply to Eric Auger from comment #28)

> > # ll /sys/kernel/iommu_groups/
> > total 0
> > drwxr-xr-x. 3 root root 0 May 18 14:12 0
> > drwxr-xr-x. 3 root root 0 May 18 14:12 1
> > drwxr-xr-x. 3 root root 0 May 18 14:12 2
> > ```
> I am surprised to see no group here.
Need another coffee. Forget that one ;-)

Comment 32 Hu Shuai (Fujitsu) 2022-05-19 07:00:35 UTC
(In reply to Eric Auger from comment #28)
> 2 things:
> 
> I see this xml snippet, suggesting that the virtio-iommu-pci is potentially
> pluggable at any PCIe address. An issue was reported when the latter is
> instantiated anywhere other than on the root bus (on a pcie root port, for
> instance) using the qemu interface (BZ2087155). Most probably, given
> Andrea's info, such a setting is forbidden by libvirt. However, it would be
> nice to double check that this is handled: i.e. try to instantiate it on a
> pcie root port instead.
> 
>     <iommu model='virtio'>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
> function='0x0'/>
>     </iommu>


Hi, Eric

I made some changes to the xml:
```
    <controller type='pci' index='12' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='12' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>

    <iommu model='virtio'>
      <address type='pci' domain='0x0000' bus='0x0c' slot='0x00' function='0x0'/>
    </iommu>
```

Then it failed to define:
```
# virsh define /tmp/fj-km-vm.xml 
error: Failed to define domain from /tmp/fj-km-vm.xml
error: XML error: The device at PCI address 0000:0c:00.0 needs to be an integrated device (bus=0)
```

Is it appropriate to test this way, does it fail as expected?

Thanks!
Hu Shuai

Comment 33 Eric Auger 2022-05-19 08:46:45 UTC
> Is it appropriate to test this way, does it fail as expected?

Yes, that looks like the right way to test, and it confirms the expectation. Thank you!

Eric

Comment 34 Yiding Liu (Fujitsu) 2022-05-25 06:54:45 UTC
Env:
Host kernel: 5.14.0-96.el9.aarch64
Qemu: qemu-kvm-7.0.0-4.el9.aarch64
Libvirt: libvirt-8.3.0-1.el9.aarch64
Guest kernel: 

Based on comment 25, verify this BZ starting from the install stage.

Step1.
Install a guest with virtio-iommu, with a virtio block pci device, a virtio net pci device, and a virtio-gpu all protected by the virtio-iommu.
# virt-install --name test-iommu --machine virt --memory 8192 --vcpu 8 --iommu virtio --import --disk /var/lib/libvirt/images/RHEL-9.0-aarch64-latest.qcow2,driver.iommu=on --network network=default,model=virtio,driver.iommu=on --video model=virtio,driver.iommu=on


Step2.
The guest starts successfully; check the qemu cmdline. Devices were created as expected.
```
/usr/libexec/qemu-kvm -name guest=test-iommu,debug-threads=on -S -object {"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-6-test-iommu/master-key.aes"} -blockdev {"driver":"file","filename":"/usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.raw","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"} -blockdev {"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/test-iommu_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"} -machine virt-rhel9.0.0,usb=off,dump-guest-core=off,gic-version=3,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format,memory-backend=mach-virt.ram -accel kvm -cpu host -m 8192 -object {"qom-type":"memory-backend-ram","id":"mach-virt.ram","size":8589934592} -overcommit mem-lock=off -smp 8,sockets=8,cores=1,threads=1 -uuid 17ed9a0f-75a4-4638-aeb5-7b3112337f84 -display none -no-user-config -nodefaults -chardev socket,id=charmonitor,fd=32,server=on,wait=off -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device {"driver":"virtio-iommu","bus":"pcie.0","addr":"0x2"} -device {"driver":"pcie-root-port","port":8,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x1"} -device {"driver":"pcie-root-port","port":9,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x1.0x1"} -device {"driver":"pcie-root-port","port":10,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x1.0x2"} -device {"driver":"pcie-root-port","port":11,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x1.0x3"} -device {"driver":"pcie-root-port","port":12,"chassis":5,"id":"pci.5","bus":"pcie.0","addr":"0x1.0x4"} -device {"driver":"pcie-root-port","port":13,"chassis":6,"id":"pci.6","bus":"pcie.0","addr":"0x1.0x5"} -device {"driver":"qemu-xhci","p2":15,"p3":15,"id":"usb","bus":"pci.2","addr":"0x0"} -device {"driver":"virtio-serial-pci","id":"virtio-serial0","bus":"pci.3","addr":"0x0"} -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/RHEL-9.0-aarch64-latest.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"} -blockdev {"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null} -device {"driver":"virtio-blk-pci","iommu_platform":true,"bus":"pci.4","addr":"0x0","drive":"libvirt-1-format","id":"virtio-disk0","bootindex":1} -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=35 -device {"driver":"virtio-net-pci","iommu_platform":true,"netdev":"hostnet0","id":"net0","mac":"52:54:00:40:b1:82","bus":"pci.1","addr":"0x0"} -chardev pty,id=charserial0 -serial chardev:charserial0 -chardev socket,id=charchannel0,fd=30,server=on,wait=off -device {"driver":"virtserialport","bus":"virtio-serial0.0","nr":1,"chardev":"charchannel0","id":"channel0","name":"org.qemu.guest_agent.0"} -audiodev {"id":"audio1","driver":"none"} -device {"driver":"virtio-gpu-pci","iommu_platform":true,"id":"video0","max_outputs":1,"bus":"pci.5","addr":"0x0"} -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny -msg timestamp=on
```

Step3. Log in to the guest (via console) and check device info, dmesg, and iommu_groups
```
[root@localhost devices]# lspci             
00:00.0 Host bridge: Red Hat, Inc. QEMU PCIe Host bridge
00:01.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port               
00:01.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port      
00:01.2 PCI bridge: Red Hat, Inc. QEMU PCIe Root port             
00:01.3 PCI bridge: Red Hat, Inc. QEMU PCIe Root port        
00:01.4 PCI bridge: Red Hat, Inc. QEMU PCIe Root port                                                                                                                                         
00:01.5 PCI bridge: Red Hat, Inc. QEMU PCIe Root port        
00:02.0 Unclassified device [00ff]: Red Hat, Inc. Device 1057 (rev 01)                                                                                                                        
01:00.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)
02:00.0 USB controller: Red Hat, Inc. QEMU XHCI Host Controller (rev 01)
03:00.0 Communication controller: Red Hat, Inc. Virtio console (rev 01)
04:00.0 SCSI storage controller: Red Hat, Inc. Virtio block device (rev 01)
05:00.0 Display controller: Red Hat, Inc. Virtio GPU (rev 01)
[root@localhost devices]# dmesg | grep iommu                                                   
[    1.262372] iommu: Default domain type: Translated          
[    1.264475] iommu: DMA domain TLB invalidation policy: lazy mode 
[    1.523717] virtio_iommu virtio0: input address: 64 bits    
[    1.526277] virtio_iommu virtio0: page mask: 0xfffffffffffff000
[    1.552675] xhci_hcd 0000:02:00.0: Adding to iommu group 0                                                                                                                                 
[    1.554958] iommu: Failed to allocate default IOMMU domain of type 11 for group (null) - Falling back to IOMMU_DOMAIN_DMA                                                                  
[    2.552210] pcieport 0000:00:01.0: Adding to iommu group 1                                                                                                                                 
[    2.554602] iommu: Failed to allocate default IOMMU domain of type 11 for group (null) - Falling back to IOMMU_DOMAIN_DMA                                                                  
[    2.577648] pcieport 0000:00:01.1: Adding to iommu group 1                                                                                                                                 
[    2.594593] pcieport 0000:00:01.2: Adding to iommu group 1                                                                                                                                 
[    2.614362] pcieport 0000:00:01.3: Adding to iommu group 1                                                                                                                                 
[    2.633246] pcieport 0000:00:01.4: Adding to iommu group 1                                                                                                                                 
[    2.651717] pcieport 0000:00:01.5: Adding to iommu group 1                                                                                                                                 
[    2.670112] virtio-pci 0000:01:00.0: Adding to iommu group 1                                                                                                                               
[    2.680815] virtio-pci 0000:03:00.0: Adding to iommu group 1                                                                                                                               
[    2.691762] virtio-pci 0000:04:00.0: Adding to iommu group 1                                                                                                                               
[    2.697294] virtio-pci 0000:05:00.0: Adding to iommu group 1  
[root@localhost devices]# ls /sys/kernel/iommu_groups/1/devices/
0000:00:01.0  0000:00:01.2  0000:00:01.4  0000:01:00.0  0000:04:00.0
0000:00:01.1  0000:00:01.3  0000:00:01.5  0000:03:00.0  0000:05:00.0 
```
Devices were available in the guest, and the iommu_groups info matched the iommu messages in dmesg.

Step4. Reboot the guest && log in to the guest (via ssh) && stimulate the protected NIC
```
[root@localhost ~]# ./test.sh                                                                                                                                                                 
iteration 1                                                                                                                                                                                   
Cloning into 'netperf'...                                                                                                                                                                     
remote: Enumerating objects: 5252, done.                                                                                                                                                      
remote: Counting objects: 100% (324/324), done.                                                                                                                                               
remote: Compressing objects: 100% (94/94), done.                                                                                                                                              
remote: Total 5252 (delta 234), reused 305 (delta 229), pack-reused 4928                                                                                                                      
Receiving objects: 100% (5252/5252), 16.75 MiB | 13.71 MiB/s, done.                                                                                                                           
Resolving deltas: 100% (3930/3930), done.                                                                                                                                                     
Cloning into 'iperf'...
[snip]
iteration 30
Cloning into 'netperf'...
remote: Enumerating objects: 5252, done.
remote: Counting objects: 100% (324/324), done.
remote: Compressing objects: 100% (94/94), done.
remote: Total 5252 (delta 234), reused 305 (delta 229), pack-reused 4928
Receiving objects: 100% (5252/5252), 16.75 MiB | 12.39 MiB/s, done.
Resolving deltas: 100% (3930/3930), done.
Cloning into 'iperf'...
remote: Enumerating objects: 8821, done.
remote: Counting objects: 100% (687/687), done.
remote: Compressing objects: 100% (300/300), done.
remote: Total 8821 (delta 458), reused 560 (delta 387), pack-reused 8134
Receiving objects: 100% (8821/8821), 12.55 MiB | 13.84 MiB/s, done.
Resolving deltas: 100% (6237/6237), done.                       
```

Verified.
@yalzhang. Please help set the BZ as Verified. Thanks.

Comment 42 errata-xmlrpc 2022-11-15 10:03:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Low: libvirt security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:8003

