Bug 1477099 - virtio-iommu (including ACPI, VHOST/VFIO integration, migration support)
Summary: virtio-iommu (including ACPI, VHOST/VFIO integration, migration support)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.0
Hardware: aarch64
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: beta
Target Release: 9.1
Assignee: Eric Auger
QA Contact: Yihuang Yu
URL:
Whiteboard:
Duplicates: 1736263 1836885 (view as bug list)
Depends On: 1972795 2064757
Blocks: 1543699 1653327 1683831 1727536 1802982 1811148 1924294
 
Reported: 2017-08-01 08:34 UTC by Eric Auger
Modified: 2022-11-15 10:15 UTC
CC List: 22 users

Fixed In Version: qemu-kvm-7.0.0-3.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1653327 (view as bug list)
Environment:
Last Closed: 2022-11-15 09:53:23 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:


Attachments: (none)


Links:
Gitlab redhat/centos-stream/src/qemu-kvm merge request 83: "Enable virtio-iommu-pci on aarch64" (opened), last updated 2022-05-09 14:47:23 UTC
Red Hat Product Errata RHSA-2022:7967, last updated 2022-11-15 09:54:27 UTC

Description Eric Auger 2017-08-01 08:34:43 UTC
Exposing a virtual IOMMU to a QEMU/KVM guest has been enabled on several architectures and ARM support is looming. This is required for DPDK nested device assignment, nested virtualization and virtio traffic isolation.

On ARM, two approaches are considered: QEMU SMMUv3 full emulation (covered by BZ1430408) and the virtio paravirtualized approach. Full emulation is the solution traditionally adopted by other architectures, while the second is a new approach, backed by ARM kernel maintainers.

This BZ tracks the status of virtio-iommu/ARM proof of concept.
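
For context, a minimal sketch of what exposing a virtio-iommu device to an aarch64 guest looks like on the QEMU command line (the image path, memory size, and machine options are illustrative placeholders, not taken from this BZ; the full verified invocation appears in comment 65):

# Minimal illustrative sketch only; placeholders, not a tested configuration
qemu-system-aarch64 \
    -machine virt,gic-version=host -cpu host -enable-kvm -m 4096 \
    -device virtio-iommu-pci,bus=pcie.0,addr=0x2 \
    -drive file=/path/to/guest.qcow2,if=virtio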

Comment 3 Eric Auger 2017-11-15 08:32:25 UTC
Current upstream status is:
[RFC v4 00/16] VIRTIO-IOMMU device, aligned with V0.4 specification.

Comment 4 Mark Langsdorf 2018-03-22 18:19:09 UTC
Removing the rhel-8.0.0 flag again; let's see if it will hold this time.

Comment 5 Hai Huang 2018-03-22 20:08:38 UTC
This patch series has not been merged upstream, 
and is unlikely to be merged in time for 8.0.  

Previously, the bot incorrectly set the rhel-8.0? flag.
Moving to rhel-8.1.

Comment 7 Eric Auger 2019-04-01 12:34:41 UTC
The kernel driver is not yet upstream and the virtio spec is still under review (however, I think it is close to being approved/voted on). So the QEMU device has those dependencies to be resolved.

Comment 8 Eric Auger 2019-05-28 14:04:39 UTC
This will miss 8.1, as neither the virtio spec has been voted on nor the driver upstreamed. Also, what about moving this bug to RHEL AV?

Comment 9 Luiz Capitulino 2019-05-28 20:21:02 UTC
(In reply to Eric Auger from comment #8)
> This will miss 8.1, as neither the virtio spec has been voted on nor the
> driver upstreamed. Also, what about moving this bug to RHEL AV?

It's OK to target this one for 8.2. Also, I agree this should be
moved to AV.

Comment 10 Eric Auger 2019-05-29 07:55:25 UTC
Moved to RHEL AV, like other new aarch64 features.

Comment 12 Ademar Reis 2020-02-05 22:44:15 UTC
QEMU has recently been split into sub-components, and as a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review and, if necessary, change the sub-component the next time you review this BZ. Thanks.

Comment 13 Eric Auger 2020-02-28 09:45:07 UTC
The code is now upstream (QEMU 5.0), with the restriction that it only works with the ARM virt machine and with the guest booting in DT mode.
 
Non-DT support is under development at the kernel level by Jean-Philippe Brucker from Linaro:
[1] [PATCH 0/3] virtio-iommu on non-devicetree platforms
(https://www.spinics.net/lists/linux-virtualization/msg41391.html)
The outcome is still uncertain (i.e. can we integrate without ACPI, relying only on binding info in the PCIe config space?).

If we want to be able to protect VFIO devices, we now need to respin:
[PATCH RFC v5 0/5] virtio-iommu: VFIO integration
(https://lists.gnu.org/archive/html/qemu-devel/2018-11/msg05383.html)

Bharat Bhushan, now working at Marvell, was the original contributor of this series.
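
As a sketch of the DT-mode restriction above (all paths are placeholders, not from this BZ): a direct kernel boot on the arm virt machine hands the guest a devicetree rather than ACPI tables, which is the only configuration the QEMU 5.0 code supports:

# Direct kernel boot, so the guest consumes DT rather than ACPI
qemu-system-aarch64 \
    -machine virt -cpu host -enable-kvm -m 4096 \
    -kernel /path/to/Image -initrd /path/to/initrd.img \
    -append "console=ttyAMA0 root=/dev/vda" \
    -device virtio-iommu-pci \
    -drive file=/path/to/guest.qcow2,if=virtio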

Comment 14 Eric Auger 2020-05-14 09:44:37 UTC
Given the non-DT integration issues, we cannot target 8.3 anymore. The upstream code only supports DT integration. For non-DT, the plan is to introduce a new ACPI table dedicated to virtio-iommu. This is under work by Jean-Philippe Brucker (Linaro), but it is a long process ...

Comment 16 Eric Auger 2020-11-12 07:40:46 UTC
*** Bug 1836885 has been marked as a duplicate of this bug. ***

Comment 17 Eric Auger 2020-11-12 07:42:32 UTC
*** Bug 1736263 has been marked as a duplicate of this bug. ***

Comment 19 RHEL Program Management 2021-01-15 07:40:26 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 22 Eric Auger 2021-02-04 11:03:21 UTC
Hi Andrea,

I think this is still material for RHEL. The ACPI integration, which was the missing piece to complete the job, should be voted on soon, maybe in February.
Thanks

Eric

Comment 23 Andrea Bolognani 2021-02-04 13:09:18 UTC
(In reply to Eric Auger from comment #22)
> Hi Andrea,
> 
> I think this is still material for RHEL. The ACPI integration, which was the
> missing piece to complete the job, should be voted on soon, maybe in February.

Good to know, thanks!

With this in mind, I think the bug should be reopened.

Comment 24 Luiz Capitulino 2021-02-08 03:29:50 UTC
Reopening as per comment 22 and comment 23.

Comment 26 Eric Auger 2021-06-09 13:54:24 UTC
From an upstream PoV, the ACPI integration is still not merged.

However, it seems close to being merged:

[PATCH v3 0/6] Add support for ACPI VIOT
https://lore.kernel.org/linux-iommu/20210602154444.1077006-7-jean-philippe@linaro.org/T/

Downstream, we will need to backport the driver and the ACPI integration, and enable CONFIG_VIRTIO_IOMMU.

Then the QEMU integration needs to be upstreamed, but that should go faster.
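
Once those backports land, a quick sanity check from inside a guest might look like this (a sketch; the kernel config file location can vary by distribution):

# Check that the guest kernel has the virtio-iommu driver enabled
grep CONFIG_VIRTIO_IOMMU /boot/config-$(uname -r)
# expected: CONFIG_VIRTIO_IOMMU=y

# Check that the firmware exposed a VIOT ACPI table to the guest
dmesg | grep -i VIOT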

Comment 30 RHEL Program Management 2021-08-08 07:26:56 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 31 Luiz Capitulino 2021-08-09 14:41:55 UTC
We're waiting for this series to be merged into upstream QEMU. This is taking some time, but it's expected to happen. We're going to target this work for 9.0 or 9.1.

Comment 36 Andrea Bolognani 2022-01-04 08:45:02 UTC
Hi Eric,

any updates on the current status of the virtio-iommu feature in
QEMU?

It missed 6.2.0, and the libvirt part is going to miss 8.0.0 too
unless I can get it merged this week, which at this point doesn't
seem very likely.

Can we still hope it makes it into RHEL 9.0 through backports?

Thanks!

Comment 37 Eric Auger 2022-01-04 08:57:07 UTC
The kernel dependencies have been reviewed/acked and should be merged in the 5.17 merge window. Only afterwards can the QEMU patches (not yet submitted publicly) use them. So indeed the only way now to get the feature at the QEMU and libvirt level is through backports. I will ping Jean-Philippe for the QEMU patches and do the kernel/QEMU backports if it is still relevant with regard to the schedule :-(

Comment 40 Eric Auger 2022-03-09 17:24:30 UTC
The last QEMU dependencies (boot bypass) were pulled into QEMU 7.0. Moving the BZ to POST.
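
To confirm that a given build carries the boot-bypass support, one can ask QEMU itself to list the device's properties (a sketch; the property name boot-bypass is an assumption here, go by what the build actually reports):

# List virtio-iommu-pci properties; a QEMU 7.0-based build should show
# a boot-bypass knob among them
/usr/libexec/qemu-kvm -device virtio-iommu-pci,help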

Comment 52 Eric Auger 2022-05-06 14:16:25 UTC
Hi Luiz, what info do you expect? Everything is downstream now in QEMU.

Comment 53 Luiz Capitulino 2022-05-06 14:31:49 UTC
(In reply to Eric Auger from comment #52)
> Hi Luiz, what info do you expect? Everything is downstream now in QEMU.

It's about comment 49: Yihuang is reporting that we don't have CONFIG_VIRTIO_IOMMU=y.

Comment 54 Eric Auger 2022-05-06 14:46:49 UTC
(In reply to Luiz Capitulino from comment #53)
> (In reply to Eric Auger from comment #52)
> > Hi Luiz, what info do you expect? Everything is downstream now in QEMU.
> 
> It's about comment 49: Yihuang is reporting that we don't have
> CONFIG_VIRTIO_IOMMU=y.

argh, OK

Comment 55 Luiz Capitulino 2022-05-09 12:36:11 UTC
Eric, do you plan to send an additional patch enabling CONFIG_VIRTIO_IOMMU? I believe we might need to update DTM/ITM.

Comment 56 Eric Auger 2022-05-09 12:41:41 UTC
Yes, that's what I am currently busy doing ...

Comment 58 Yihuang Yu 2022-05-13 09:35:29 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 60 Eric Auger 2022-05-17 09:48:54 UTC
default-bus-bypass-iommu is meant to bypass the SMMU on the root bus, so that's normal. If you check with smmuv3, you get the same behavior.

For the second issue, i.e. plugging the virtio-iommu-pci on a root port: logically it should be feasible and should not prevent the guest from booting, but I need to further investigate what the expected protection is then. I can reproduce it on my end. This is definitely not what I would have expected as a use case, and I don't think this is what libvirt does (I don't know if libvirt allows plugging the virtio-iommu-pci on a given root port, though). Pinging Andrea on this. I don't think this should block this BZ and the feature, especially if libvirt does not allow that kind of topology. Maybe enter another BZ to track this down?
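
To restate the two topologies under discussion as bare command-line fragments (a sketch only; neither line configures a bootable guest):

# Supported layout: virtio-iommu-pci directly on the root bus (pcie.0)
/usr/libexec/qemu-kvm -machine virt -cpu host -enable-kvm -nodefaults -display none \
    -device virtio-iommu-pci,bus=pcie.0,addr=0x2

# Problematic layout described above: the same device behind a pcie-root-port
/usr/libexec/qemu-kvm -machine virt -cpu host -enable-kvm -nodefaults -display none \
    -device pcie-root-port,id=rp0,bus=pcie.0,addr=0x1,chassis=1 \
    -device virtio-iommu-pci,bus=rp0,addr=0x0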

Comment 61 Yihuang Yu 2022-05-17 09:54:26 UTC
(In reply to Eric Auger from comment #60)
> default-bus-bypass-iommu is meant to bypass the SMMU on the root bus, so
> that's normal. If you check with smmuv3, you get the same behavior.
> 
> For the second issue, i.e. plugging the virtio-iommu-pci on a root port:
> logically it should be feasible and should not prevent the guest from
> booting, but I need to further investigate what the expected protection is
> then. I can reproduce it on my end. This is definitely not what I would have
> expected as a use case, and I don't think this is what libvirt does (I don't
> know if libvirt allows plugging the virtio-iommu-pci on a given root port,
> though). Pinging Andrea on this. I don't think this should block this BZ and
> the feature, especially if libvirt does not allow that kind of topology.
> Maybe enter another BZ to track this down?

Thanks Eric, that's clear to me now. Once this bug goes to ON_QA I will verify it, and I will also file a new bug to track the second issue.

Comment 62 Andrea Bolognani 2022-05-17 15:09:10 UTC
(In reply to Eric Auger from comment #60)
> For the second issue, i.e. plugging the virtio-iommu-pci on a root port:
> logically it should be feasible and should not prevent the guest from
> booting, but I need to further investigate what the expected protection is
> then. I can reproduce it on my end. This is definitely not what I would have
> expected as a use case, and I don't think this is what libvirt does (I don't
> know if libvirt allows plugging the virtio-iommu-pci on a given root port,
> though). Pinging Andrea on this. I don't think this should block this BZ and
> the feature, especially if libvirt does not allow that kind of topology.
> Maybe enter another BZ to track this down?

I can confirm that libvirt will always place the virtio-iommu-pci
device directly on pcie.0 and reject attempts to move it to a
different bus.

As for whether that's actually correct... I'm not entirely sure. I
based that decision on the following exchange:

> >> - Here is the sample qemu cmd line I am using
> >>
> >> -device virtio-iommu-pci,addr=0xa,disable-legacy=on
> >
> > Is the exact PCI address important, or did you just pick an arbitrary
> > slot on pcie.0? Are there any limitations that you're aware of in
> > that regard?
>
> no it isn't. It is arbitrary here. You can put it anywhere on pcie.0
> normally.

That's a snippet from an off-list thread between me and Eric dating
back to last September.

Maybe I read too much into it, and it would actually be fine if the
device was not on pcie.0? If that turns out to be the case, we can
easily lift the restriction on the libvirt side.

Eric, you said you were going to ask Jean-Philippe Brucker for more
information on this topic, right? Please update the bug once you hear
back :)
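
For reference, a quick way to see where libvirt actually placed the device (a sketch; the domain name is a placeholder):

# Show the iommu element and the PCI address libvirt assigned to it;
# per the restriction above, the address should land on bus 0x00 (pcie.0)
virsh dumpxml mydomain | grep -A 3 "<iommu"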

Comment 65 Yihuang Yu 2022-05-24 11:04:34 UTC
Verify with qemu-kvm-7.0.0-3.el9.aarch64
Guest kernel: 

QEMU command line:
MALLOC_PERTURB_=1  /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -blockdev node-name=file_aavmf_code,driver=file,filename=/usr/share/edk2/aarch64/QEMU_EFI-silent-pflash.raw,auto-read-only=on,discard=unmap \
    -blockdev node-name=drive_aavmf_code,driver=raw,read-only=on,file=file_aavmf_code \
    -blockdev node-name=file_aavmf_vars,driver=file,filename=/home/kvm_autotest_root/images/avocado-vt-vm1_rhel910-aarch64-virtio.qcow2_VARS.fd,auto-read-only=on,discard=unmap \
    -blockdev node-name=drive_aavmf_vars,driver=raw,read-only=off,file=file_aavmf_vars \
    -machine virt,gic-version=host,memory-backend=mem-machine_mem,pflash0=drive_aavmf_code,pflash1=drive_aavmf_vars \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device virtio-iommu-pci,bus=pcie.0,addr=0x2 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device virtio-gpu-pci,bus=pcie-root-port-1,addr=0x0,iommu_platform=on \
    -m 8192 \
    -object memory-backend-ram,size=8192M,id=mem-machine_mem  \
    -smp 4,maxcpus=4,cores=2,threads=1,sockets=2  \
    -cpu 'host' \
    -serial unix:'/tmp/serial-serial0',server=on,wait=off \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-2,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/rhel910-aarch64-virtio.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,write-cache=on,bus=pcie-root-port-3,addr=0x0,iommu_platform=on \
    -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x1.0x4,bus=pcie.0,chassis=5 \
    -device virtio-net-pci,mac=9a:67:ed:03:aa:3c,rombar=0,id=idtzGRNX,netdev=idzIjEeK,bus=pcie-root-port-4,addr=0x0,iommu_platform=on  \
    -netdev tap,id=idzIjEeK,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew \
    -enable-kvm

Check dmesg inside the guest:
# dmesg | grep -i iommu
[    1.044297] iommu: Default domain type: Translated
[    1.045534] iommu: DMA domain TLB invalidation policy: lazy mode
[    1.209360] virtio_iommu virtio0: input address: 64 bits
[    1.210709] virtio_iommu virtio0: page mask: 0xfffffffffffff000
[    1.226013] xhci_hcd 0000:04:00.0: Adding to iommu group 0
[    1.227516] iommu: Failed to allocate default IOMMU domain of type 11 for group (null) - Falling back to IOMMU_DOMAIN_DMA
[    2.196732] pcieport 0000:00:01.0: Adding to iommu group 1
[    2.198186] iommu: Failed to allocate default IOMMU domain of type 11 for group (null) - Falling back to IOMMU_DOMAIN_DMA
[    2.214842] pcieport 0000:00:01.1: Adding to iommu group 1
[    2.228951] pcieport 0000:00:01.2: Adding to iommu group 1
[    2.239385] pcieport 0000:00:01.3: Adding to iommu group 1
[    2.252172] pcieport 0000:00:01.4: Adding to iommu group 1
[    2.264326] pcieport 0000:01:00.0: Adding to iommu group 1
[    2.267173] virtio-pci 0000:03:00.0: Adding to iommu group 1
[    2.270554] virtio-pci 0000:05:00.0: Adding to iommu group 1
[    2.273847] virtio-pci 0000:06:00.0: Adding to iommu group 1

Check VIOT ACPI table:
# dmesg | grep -i VIOT
[    0.000000] ACPI: VIOT 0x000000023C04E498 000058 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
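
Beyond the dmesg line, the table contents can be dumped and disassembled inside the guest for a closer look (a sketch assuming the acpica-tools package is installed):

# Dump just the VIOT table, extract the binary, and disassemble it
acpidump -n VIOT -o viot.out
acpixtract -a viot.out        # produces viot.dat
iasl -d viot.dat              # produces viot.dsl for inspection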

Comment 67 Eric Auger 2022-05-24 12:52:33 UTC
I am not too concerned about those devices. However, it would be nice to compare with x86 and intel-iommu. Do we see the same kind of failures in the guest?

Comment 68 Yihuang Yu 2022-05-25 03:26:11 UTC
In my testing, intel-iommu doesn't print those failure messages.

CPU: Intel(R) Xeon(R) CPU E3-1260L v5 @ 2.90GHz

MALLOC_PERTURB_=1  /usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -blockdev node-name=file_ovmf_code,driver=file,filename=/usr/share/OVMF/OVMF_CODE.secboot.fd,auto-read-only=on,discard=unmap \
    -blockdev node-name=drive_ovmf_code,driver=raw,read-only=on,file=file_ovmf_code \
    -blockdev node-name=file_ovmf_vars,driver=file,filename=/home/kvm_autotest_root/images/avocado-vt-vm1_rhel910-64-virtio-scsi.qcow2_VARS.fd,auto-read-only=on,discard=unmap \
    -blockdev node-name=drive_ovmf_vars,driver=raw,read-only=off,file=file_ovmf_vars \
    -machine q35,kernel-irqchip=split,memory-backend=mem-machine_mem,pflash0=drive_ovmf_code,pflash1=drive_ovmf_vars \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -device intel-iommu,intremap=on,device-iotlb=on,caching-mode=on \
    -device VGA,bus=pcie.0,addr=0x2 \
    -m 7168 \
    -object memory-backend-ram,size=7168M,id=mem-machine_mem  \
    -smp 4,maxcpus=4,cores=2,threads=1,dies=1,sockets=2  \
    -cpu 'Skylake-Client-IBRS',ss=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,clflushopt=on,umip=on,md-clear=on,stibp=on,arch-capabilities=on,ssbd=on,xsaves=on,pdpe1gb=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rsba=on,skip-l1dfl-vmentry=on,pschange-mc-no=on,hle=off,rtm=off,kvm_pv_unhalt=on \
    -device pvpanic,ioport=0x505,id=idsuheBo \
    -chardev socket,server=on,wait=off,path=/tmp/serial-serial0,id=chardev_serial0 \
    -device isa-serial,id=serial0,chardev=chardev_serial0  \
    -chardev socket,id=seabioslog_id_20220524-111702-V2HVzima,path=/tmp/seabios0,server=on,wait=off \
    -device isa-debugcon,chardev=seabioslog_id_20220524-111702-V2HVzima,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-1,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-2,addr=0x0 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/images/rhel910-64-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-net-pci,mac=9a:32:f8:d1:2b:62,id=idgxpMuw,netdev=idHMW2n6,bus=pcie-root-port-3,addr=0x0  \
    -netdev tap,id=idHMW2n6,vhost=on  \
    -vnc :0  \
    -rtc base=utc,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm \
    -monitor stdio

# dmesg | grep iommu
[    0.000000] Command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.14.0-96.el9.x86_64 root=/dev/mapper/rhel_vm--179--240-root ro console=tty0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel_vm--179--240-swap rd.lvm.lv=rhel_vm-179-240/root rd.lvm.lv=rhel_vm-179-240/swap net.ifnames=0 console=ttyS0,115200 intel_iommu=on iommu=pt
[    0.023110] Kernel command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.14.0-96.el9.x86_64 root=/dev/mapper/rhel_vm--179--240-root ro console=tty0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel_vm--179--240-swap rd.lvm.lv=rhel_vm-179-240/root rd.lvm.lv=rhel_vm-179-240/swap net.ifnames=0 console=ttyS0,115200 intel_iommu=on iommu=pt
[    0.023208] Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.14.0-96.el9.x86_64 intel_iommu=on", will be passed to user space.
[    0.358009] iommu: Default domain type: Passthrough (set via kernel command line)
[    0.504582] pci 0000:00:00.0: Adding to iommu group 0
[    0.505088] pci 0000:00:01.0: Adding to iommu group 1
[    0.505581] pci 0000:00:01.1: Adding to iommu group 2
[    0.506118] pci 0000:00:01.2: Adding to iommu group 3
[    0.506609] pci 0000:00:01.3: Adding to iommu group 4
[    0.507117] pci 0000:00:02.0: Adding to iommu group 5
[    0.507604] pci 0000:00:1f.0: Adding to iommu group 6
[    0.508087] pci 0000:00:1f.2: Adding to iommu group 6
[    0.508568] pci 0000:00:1f.3: Adding to iommu group 6
[    0.509064] pci 0000:01:00.0: Adding to iommu group 7
[    0.509557] pci 0000:03:00.0: Adding to iommu group 8
[    0.510052] pci 0000:04:00.0: Adding to iommu group 9
[    0.510543] pci 0000:05:00.0: Adding to iommu group 10
[    1.250254]     intel_iommu=on

Comment 76 errata-xmlrpc 2022-11-15 09:53:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: qemu-kvm security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7967

