Bug 2100106 - Fix virtio-iommu/vfio bypass
Summary: Fix virtio-iommu/vfio bypass
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 9.1
Assignee: Eric Auger
QA Contact: Yanghang Liu
URL:
Whiteboard:
Duplicates: 2102195
Depends On:
Blocks:
 
Reported: 2022-06-22 12:55 UTC by Eric Auger
Modified: 2022-11-30 08:29 UTC
CC List: 12 users

Fixed In Version: qemu-kvm-7.0.0-9.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-15 09:54:42 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Gitlab redhat/centos-stream/src qemu-kvm merge_requests 105 0 None opened virtio-iommu: Fix bypass mode for assigned devices 2022-07-02 10:15:37 UTC
Red Hat Issue Tracker RHELPLAN-125981 0 None None None 2022-06-22 13:04:30 UTC
Red Hat Product Errata RHSA-2022:7967 0 None None None 2022-11-15 09:55:16 UTC

Description Eric Auger 2022-06-22 12:55:19 UTC
virtio-iommu's logic to support bypass mode only works for
emulated devices, not for assigned devices, as no GPA/HPA mapping is programmed into the physical IOMMU.

The upstream series "[PATCH 0/3] Add bypass mode support to assigned device" fixes that and needs to be backported.

While at it, let's also backport
"[PATCH v3 0/6] hw/acpi/viot: generate stable VIOT ACPI tables".
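For illustration, here is a minimal QEMU invocation that exercises this path. This is my own sketch, not taken from this BZ; the disk image and host BDF are placeholders, and the boot-bypass property (default on since QEMU 7.0) is spelled out only for clarity:

```shell
# Sketch: guest with a virtio-iommu and a VFIO-assigned device.
# Even while bypass is active, the assigned device needs GPA->HPA mappings
# programmed into the physical IOMMU -- the case this series fixes.
qemu-system-x86_64 \
  -machine q35,accel=kvm -m 4G \
  -device virtio-iommu-pci,boot-bypass=on \
  -device vfio-pci,host=3b:00.0 \
  -drive file=guest.qcow2,if=virtio
```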

Comment 8 Eric Auger 2022-07-04 08:15:40 UTC
Sorry, in comment 7 I mixed up the MR955 name; I thought it was the qemu MR (i.e. MR 105). Can you clarify which qemu version was used for testing in comment 6?

Comment 9 Guo, Zhiyi 2022-07-04 08:29:26 UTC
*** Bug 2102195 has been marked as a duplicate of this bug. ***

Comment 10 Yanghang Liu 2022-07-04 08:36:37 UTC
Hi Eric,

[1] 

> Can you clarify which qemu version was used for testing in comment 6.

The qemu-kvm version I used in comment 6 is 7.0.50 (v7.0.0-2113-g29f6db7566):

# ./qemu-system-x86_64 --version
QEMU emulator version 7.0.50 (v7.0.0-2113-g29f6db7566)
Copyright (c) 2003-2022 Fabrice Bellard and the QEMU Project developers

> Test env:
> host package version:
> 5.14.0-123.el9.x86_64
> upstream 7.0.50v7.0.0-2113-g29f6db7566

> guest kernel version:
> 5.14.0-104.mr955_220602_1540.el9.x86_64


[2] 

> By the way did you retest the vGPU/virtio-iommu with MR955 to confirm this is the same pt issue?

Zhiyi has re-tested that and closed bug 2102195 as a duplicate of this bug.

Comment 11 Yanghang Liu 2022-07-04 10:32:01 UTC

> Please can you try to raise the dma_entry_limit module option on the vfio_iommu_type1 module to see if it fixes your issue. 
> Maybe that's just that setting that needs to be tuned.

It seems that raising the dma_entry_limit value can be a workaround.

After I changed the dma_entry_limit value from 65535 to 655350, the domain with a virtio-iommu device and two PFs started successfully without any errors.


The cmd I used to update the dma_entry_limit value:
# echo 655350 > /sys/module/vfio_iommu_type1/parameters/dma_entry_limit

# cat /sys/module/vfio_iommu_type1/parameters/dma_entry_limit 
655350
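As a rough back-of-the-envelope check (my own estimate, not from this BZ) of why the default limit of 65535 DMA entries can be exceeded, consider a guest whose RAM must be pinned for assigned devices, assuming worst-case 4 KiB mapping granularity:

```python
# Back-of-the-envelope: DMA entries needed to map all guest RAM with 4 KiB
# pages (assumed worst-case granularity) for an assigned device.
GIB = 1024 ** 3
guest_ram = 4 * GIB      # --memory=4096 in the virt-install command
page_size = 4096         # assumed worst-case mapping granularity
entries = guest_ram // page_size
print(entries)           # 1048576 -- far above the default limit of 65535
```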


The virt-install command for the domain:
virt-install --machine=q35 --noreboot --name=rhel91 --boot=uefi --boot nvram.template=/usr/share/edk2/ovmf/OVMF_VARS.fd --memory=4096 --vcpus=4 --graphics type=vnc,port=5991,listen=0.0.0.0 --import --noautoconsole --network bridge=switch,model=virtio,mac=52:54:00:00:91:91 --disk path=/home/images/2089765.qcow2,bus=virtio,cache=none,format=qcow2,io=threads,size=20 --memtune hard_limit=12582912 --iommu model=virtio --hostdev pci_0000_3b_00_0 --hostdev pci_0000_3b_00_1


Test package version:
host:
5.14.0-123.el9.x86_64
qemu-kvm-7.0.0-7.el9.x86_64
guest:
5.14.0-121.el9.x86_64

Comment 12 Eric Auger 2022-07-04 10:42:04 UTC
(In reply to Yanghang Liu from comment #11)
> 
> > Please can you try to raise the dma_entry_limit module option on the vfio_iommu_type1 module to see if it fixes your issue. 
> > Maybe that's just that setting that needs to be tuned.
> 
> It seems that raising the dma_entry_limit value can be a workaround.
> 
> After I change the dma_entry_limit value from the 65535 to 655350, the
> domain which has a virtio-iommu device and two PFs can be started
> successfully without any error info.
> 
> 
> The cmd I use to update the  dma_entry_limit value:
> # echo 655350 > /sys/module/vfio_iommu_type1/parameters/dma_entry_limit
> 
> # cat /sys/module/vfio_iommu_type1/parameters/dma_entry_limit 
> 655350
> 
> 
> The domain xml:
> virt-install --machine=q35 --noreboot --name=rhel91 --boot=uefi --boot
> nvram.template=/usr/share/edk2/ovmf/OVMF_VARS.fd --memory=4096 --vcpus=4
> --graphics type=vnc,port=5991,listen=0.0.0.0 --import --noautoconsole
> --network bridge=switch,model=virtio,mac=52:54:00:00:91:91 --disk
> path=/home/images/2089765.qcow2,bus=virtio,cache=none,format=qcow2,
> io=threads,size=20 --memtune hard_limit=12582912 --iommu model=virtio
> --hostdev pci_0000_3b_00_0 --hostdev pci_0000_3b_00_1
> 
> 
> Test package version:
> host:
> 5.14.0-123.el9.x86_64
> qemu-kvm-7.0.0-7.el9.x86_64
> guest:
> 5.14.0-121.el9.x86_64

OK, thank you for the confirmation. Sorry, I don't remember whether you encountered the ENOSPC error with intel_iommu in caching mode and the same 2 PFs. At the moment I don't see why this wouldn't fail with intel_iommu in the exact same test setup too. We should exercise VFIO the same way with both virtio-iommu and intel_iommu/caching_mode. If you don't encounter the error with the exact same setup, it's worth submitting a separate BZ to understand why.

Comment 13 Yanghang Liu 2022-07-04 11:22:17 UTC
(In reply to Eric Auger from comment #12)

> OK, thank you for the confirmation. Sorry, I don't remember whether you
> encountered the ENOSPC error with intel_iommu in caching mode and the same
> 2 PFs. At the moment I don't see why this wouldn't fail with intel_iommu in
> the exact same test setup too. We should exercise VFIO the same way with
> both virtio-iommu and intel_iommu/caching_mode. If you don't encounter the
> error with the exact same setup, it's worth submitting a separate BZ to
> understand why.



Hi Eric,

Thanks a lot for the reminder!

As I mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=2089765#c22 before, I have *never* encountered this "ENOSPC" problem after replacing the virtio-iommu device with an intel-iommu device.

I have submitted a separate BZ to track that:
- Bug 2103649 - [virtio-iommu][PFs] The qemu-kvm keep throwing "VFIO_MAP_DMA failed: No space left on device" after start a domain which has a virtio-iommu device and two PFs

Comment 14 Eric Auger 2022-07-04 13:02:46 UTC
(In reply to Yanghang Liu from comment #13)
> (In reply to Eric Auger from comment #12)
> 
> > OK, thank you for the confirmation. Sorry, I don't remember whether you
> > encountered the ENOSPC error with intel_iommu in caching mode and the same
> > 2 PFs. At the moment I don't see why this wouldn't fail with intel_iommu
> > in the exact same test setup too. We should exercise VFIO the same way
> > with both virtio-iommu and intel_iommu/caching_mode. If you don't
> > encounter the error with the exact same setup, it's worth submitting a
> > separate BZ to understand why.
> 
> 
> 
> Hi Eric,
> 
> Thanks a lot for the reminder!
> 
> As I mentioned in
> https://bugzilla.redhat.com/show_bug.cgi?id=2089765#c22 before, I have
> *never* encountered this "ENOSPC" problem after replacing the virtio-iommu
> device with an intel-iommu device.

OK thanks!
> 
> I have submitted a separate BZ for tracking that. 
> - Bug 2103649 - [virtio-iommu][PFs] The qemu-kvm keep throwing "VFIO_MAP_DMA
> failed: No space left on device" after start a domain which has a
> virtio-iommu device and two PFs
I will track this ENOSPC issue in this new BZ then!

Comment 15 Eric Auger 2022-07-15 15:23:51 UTC
Hi Mirek, do you need a respin to merge this series? (I see in the MR "Merge blocked: the source branch must be rebased onto the target branch.")

Comment 17 Miroslav Rezanina 2022-07-19 10:14:18 UTC
(In reply to Eric Auger from comment #15)
> Hi Mirek, do you need a respin to merge this series? (I see in the MR "Merge
> blocked: the source branch must be rebased onto the target branch.")

No respin needed. That label means a newer version is available than the base of the MR, so you should check whether a respin is needed (due to a conflict or behavior change). If it is not needed (the MR applies cleanly and there is no hidden change in behavior), you can remove the label.

Comment 18 Yanghang Liu 2022-07-20 10:11:12 UTC
This bug can be reproduced in the qemu-kvm-7.0.0-8.el9.x86_64



Test env:
host:
qemu-kvm-7.0.0-8.el9.x86_64
5.14.0-130.el9.x86_64
libvirt-8.5.0-1.el9.x86_64


Test device:
3b:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)



Test steps:

(1) make sure the vm kernel cmdline includes the iommu options
# cat /proc/cmdline 
...
 iommu=pt intel_iommu=on

(2) start a vm with a virtio iommu device and a XXV710 PF

# virt-install --machine=q35 --noreboot --name=bug2100106 --boot=uefi --boot nvram.template=/usr/share/edk2/ovmf/OVMF_VARS.fd --memory=4096 --vcpus=4 --graphics type=vnc,port=5991,listen=0.0.0.0 --import --noautoconsole --network bridge=switch,model=virtio,mac=52:54:00:00:91:91 --disk path=/home/images/2089765.qcow2,bus=virtio,cache=none,format=qcow2,io=threads,size=20 --memtune hard_limit=12582912 --iommu model=virtio --hostdev pci_0000_3b_00_0 --osinfo detect=on,require=off --check mac_in_use=off

# virsh start bug2100106

(3) check the host dmesg

[85126.274438] i40e 0000:3b:00.0: i40e_ptp_stop: removed PHC on enp59s0f0
[85126.906938] switch: port 2(vnet1) entered blocking state
[85126.912255] switch: port 2(vnet1) entered disabled state
[85126.917639] device vnet1 entered promiscuous mode
[85126.922535] switch: port 2(vnet1) entered blocking state
[85126.927856] switch: port 2(vnet1) entered forwarding state
[85127.164181] vfio-pci 0000:3b:00.0: Masking broken INTx support
[85127.170112] vfio-pci 0000:3b:00.0: vfio_ecap_init: hiding ecap 0x19@0x1d0
[85149.267915] DMAR: DRHD: handling fault status reg 602
[85149.272970] DMAR: [DMA Read NO_PASID] Request device [3b:00.0] fault addr 0x1036ba000 [fault reason 0x06] PTE Read access is not set

(4) check the vm dmesg

# dmesg | grep -i iommu
[    0.000000] Command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.14.0-130.el9.x86_64 root=/dev/mapper/rhel-root ro console=tty0 console=ttyS0,115200 reboot=pci biosdevname=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap iommu=pt intel_iommu=on
[    0.049427] Kernel command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.14.0-130.el9.x86_64 root=/dev/mapper/rhel-root ro console=tty0 console=ttyS0,115200 reboot=pci biosdevname=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap iommu=pt intel_iommu=on
[    0.049582] DMAR: IOMMU enabled
[    0.541309] iommu: Default domain type: Passthrough (set via kernel command line)
[    0.851308] virtio_iommu virtio0: input address: 64 bits
[    0.851851] virtio_iommu virtio0: page mask: 0x40201000
[    0.856914] ehci-pci 0000:00:1d.7: Adding to iommu group 0
[    0.880139] uhci_hcd 0000:00:1d.0: Adding to iommu group 0
[    0.896505] uhci_hcd 0000:00:1d.1: Adding to iommu group 0
[    0.910749] uhci_hcd 0000:00:1d.2: Adding to iommu group 0
[    1.533914] pcieport 0000:00:02.0: Adding to iommu group 1
[    1.540784] pcieport 0000:00:02.1: Adding to iommu group 1
[    1.546101] pcieport 0000:00:02.2: Adding to iommu group 1
[    1.551635] pcieport 0000:00:02.3: Adding to iommu group 1
[    1.556526] pcieport 0000:00:02.4: Adding to iommu group 1
[    1.561558] pcieport 0000:00:02.5: Adding to iommu group 1
[    1.566328] pcieport 0000:00:02.6: Adding to iommu group 1
[    1.571237] pcieport 0000:00:02.7: Adding to iommu group 1
[    1.575876] pcieport 0000:00:03.0: Adding to iommu group 2
[    1.581495] pcieport 0000:00:03.1: Adding to iommu group 2
[    1.585809] pcieport 0000:00:03.2: Adding to iommu group 2
[    1.589989] pcieport 0000:00:03.3: Adding to iommu group 2
[    1.594097] pcieport 0000:00:03.4: Adding to iommu group 2
[    1.598239] pcieport 0000:00:03.5: Adding to iommu group 2
[    1.602768] virtio-pci 0000:01:00.0: Adding to iommu group 1
[    1.604658] virtio-pci 0000:02:00.0: Adding to iommu group 1
[    1.606459] virtio-pci 0000:04:00.0: Adding to iommu group 1
[    2.161056] ahci 0000:00:1f.2: Adding to iommu group 3
[    5.390042] lpc_ich 0000:00:1f.0: Adding to iommu group 3
[    5.489199] i40e 0000:03:00.0: Adding to iommu group 1
[    5.500729] i801_smbus 0000:00:1f.3: Adding to iommu group 3
[    5.510029] bochs-drm 0000:00:01.0: Adding to iommu group 4

# dmesg | grep -i i40e
[    5.486068] i40e: Intel(R) Ethernet Connection XL710 Network Driver
[    5.487385] i40e: Copyright (c) 2013 - 2019 Intel Corporation.
[    5.489199] i40e 0000:03:00.0: Adding to iommu group 1
[    5.887147] i40e 0000:03:00.0: The driver for the device stopped because the device firmware failed to init. Try updating your NVM image.
[    5.889698] i40e: probe of 0000:03:00.0 failed with error -66
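The DMAR fault in step (3) is the failure signature on the host side. Here is a small helper (my own sketch, not part of the BZ) to pull the DMA direction, device BDF, and fault address out of such a line:

```python
import re

# Sample host dmesg line from the reproduction above.
line = ("[85149.272970] DMAR: [DMA Read NO_PASID] Request device [3b:00.0] "
        "fault addr 0x1036ba000 [fault reason 0x06] PTE Read access is not set")

m = re.search(
    r"DMAR: \[DMA (\w+).*?\] Request device \[([0-9a-f:.]+)\] "
    r"fault addr (0x[0-9a-f]+)", line)
if m:
    direction, bdf, addr = m.groups()
    print(direction, bdf, addr)   # Read 3b:00.0 0x1036ba000
```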

Comment 19 Yanghang Liu 2022-07-20 10:21:37 UTC
This bug is fixed in the qemu-kvm-7.0.0-9.el9.x86_64


Test env:
host:
qemu-kvm-7.0.0-9.el9.x86_64
5.14.0-130.el9.x86_64
libvirt-8.5.0-1.el9.x86_64


Test device:
3b:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)



Test steps:

(1) make sure the vm kernel cmdline includes the iommu options
# cat /proc/cmdline 
...
 iommu=pt intel_iommu=on

(2) start a vm with a virtio iommu device and a XXV710 PF

# virt-install --machine=q35 --noreboot --name=bug2100106 --boot=uefi --boot nvram.template=/usr/share/edk2/ovmf/OVMF_VARS.fd --memory=4096 --vcpus=4 --graphics type=vnc,port=5991,listen=0.0.0.0 --import --noautoconsole --network bridge=switch,model=virtio,mac=52:54:00:00:91:91 --disk path=/home/images/2089765.qcow2,bus=virtio,cache=none,format=qcow2,io=threads,size=20 --memtune hard_limit=12582912 --iommu model=virtio --hostdev pci_0000_3b_00_0 --osinfo detect=on,require=off --check mac_in_use=off

# virsh start bug2100106

(3) check the host dmesg

# dmesg
[86229.790871] i40e 0000:3b:00.0: i40e_ptp_stop: removed PHC on enp59s0f0
[86230.390463] switch: port 2(vnet0) entered blocking state
[86230.395783] switch: port 2(vnet0) entered disabled state
[86230.401155] device vnet0 entered promiscuous mode
[86230.406153] switch: port 2(vnet0) entered blocking state
[86230.411474] switch: port 2(vnet0) entered forwarding state
[86231.422624] vfio-pci 0000:3b:00.0: Masking broken INTx support
[86231.428556] vfio-pci 0000:3b:00.0: vfio_ecap_init: hiding ecap 0x19@0x1d0


(4) check the vm dmesg

# dmesg | grep -i iommu
[    0.000000] Command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.14.0-130.el9.x86_64 root=/dev/mapper/rhel-root ro console=tty0 console=ttyS0,115200 reboot=pci biosdevname=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap iommu=pt intel_iommu=on
[    0.041247] Kernel command line: BOOT_IMAGE=(hd0,gpt2)/vmlinuz-5.14.0-130.el9.x86_64 root=/dev/mapper/rhel-root ro console=tty0 console=ttyS0,115200 reboot=pci biosdevname=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap iommu=pt intel_iommu=on
[    0.041400] DMAR: IOMMU enabled
[    0.509730] iommu: Default domain type: Passthrough (set via kernel command line)
[    0.800870] virtio_iommu virtio0: input address: 64 bits
[    0.801415] virtio_iommu virtio0: page mask: 0xfffffffffffff000
[    0.806122] ehci-pci 0000:00:1d.7: Adding to iommu group 0
[    0.828670] uhci_hcd 0000:00:1d.0: Adding to iommu group 0
[    0.842833] uhci_hcd 0000:00:1d.1: Adding to iommu group 0
[    0.856758] uhci_hcd 0000:00:1d.2: Adding to iommu group 0
[    1.482827] pcieport 0000:00:02.0: Adding to iommu group 1
[    1.490016] pcieport 0000:00:02.1: Adding to iommu group 1
[    1.495202] pcieport 0000:00:02.2: Adding to iommu group 1
[    1.500235] pcieport 0000:00:02.3: Adding to iommu group 1
[    1.505185] pcieport 0000:00:02.4: Adding to iommu group 1
[    1.510089] pcieport 0000:00:02.5: Adding to iommu group 1
[    1.514942] pcieport 0000:00:02.6: Adding to iommu group 1
[    1.519639] pcieport 0000:00:02.7: Adding to iommu group 1
[    1.524378] pcieport 0000:00:03.0: Adding to iommu group 2
[    1.530210] pcieport 0000:00:03.1: Adding to iommu group 2
[    1.534375] pcieport 0000:00:03.2: Adding to iommu group 2
[    1.538413] pcieport 0000:00:03.3: Adding to iommu group 2
[    1.542401] pcieport 0000:00:03.4: Adding to iommu group 2
[    1.546395] pcieport 0000:00:03.5: Adding to iommu group 2
[    1.550612] virtio-pci 0000:01:00.0: Adding to iommu group 1
[    1.552440] virtio-pci 0000:02:00.0: Adding to iommu group 1
[    1.554237] virtio-pci 0000:04:00.0: Adding to iommu group 1
[    2.129858] ahci 0000:00:1f.2: Adding to iommu group 3
[    5.411704] lpc_ich 0000:00:1f.0: Adding to iommu group 3
[    5.540582] i801_smbus 0000:00:1f.3: Adding to iommu group 3
[    5.562873] bochs-drm 0000:00:01.0: Adding to iommu group 4
[    5.710124] i40e 0000:03:00.0: Adding to iommu group 1


# dmesg | grep -i i40e
[    5.708491] i40e: Intel(R) Ethernet Connection XL710 Network Driver
[    5.709142] i40e: Copyright (c) 2013 - 2019 Intel Corporation.
[    5.710124] i40e 0000:03:00.0: Adding to iommu group 1
[    5.731870] i40e 0000:03:00.0: fw 6.80.48603 api 1.7 nvm 6.00 0x80003546 18.3.6 [8086:158b] [8086:0009]
[    5.796309] i40e 0000:03:00.0: MAC address: 3c:fd:fe:b5:eb:50
[    5.798312] i40e 0000:03:00.0: FW LLDP is enabled
[    5.811502] i40e 0000:03:00.0 eth0: NIC Link is Up, 10 Gbps Full Duplex, Flow Control: RX/TX
[    5.814751] i40e 0000:03:00.0: PCI-Express: Speed 8.0GT/s Width x8
[    5.824215] i40e 0000:03:00.0: Features: PF-id[0] VFs: 64 VSIs: 66 QP: 4 RSS FD_ATR FD_SB NTUPLE DCB VxLAN Geneve PTP VEPA
[    5.847443] i40e 0000:03:00.0 enp3s0: renamed from eth0
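A side observation: the virtio_iommu page mask differs between the failing run (0x40201000) and the fixed run (0xfffffffffffff000). Assuming the usual pgsize_bitmap semantics, where each set bit k advertises 2**k-byte pages (my interpretation, not stated in the BZ), the two masks decode as:

```python
def page_sizes(mask):
    # Each set bit k in the page-size bitmap advertises 2**k-byte pages.
    return [1 << b for b in range(64) if (mask >> b) & 1]

broken = 0x40201000           # failing run: only 4K/2M/1G page sizes
fixed = 0xfffffffffffff000    # fixed run: any size >= 4K
print([hex(s) for s in page_sizes(broken)])   # ['0x1000', '0x200000', '0x40000000']
print(min(page_sizes(fixed)))                 # 4096
```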

Comment 20 Yanghang Liu 2022-07-20 10:25:17 UTC
Hi Eric,

Could you please help check comment 18 and comment 19?

Is it enough to verify this bug?

Please let me know if you have any concerns or want me to do more tests to cover this bug.

Comment 22 Yanghang Liu 2022-07-21 09:07:31 UTC
Thanks Luiz for the info.

Move the ITM to 23.

Comment 23 Yanan Fu 2022-07-25 12:58:42 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 27 Eric Auger 2022-07-27 15:43:42 UTC
(In reply to Yanghang Liu from comment #20)
> Hi Eric,
> 
> Could you please help check the comment 18 and comment 19 ?
> 
> Is it enough to verify this bug ?
> 
> Please let me know, if you have any concerns and want me to do more tests
> for covering this bug.

Yep, this looks good to me (besides, you don't need intel_iommu=on on the guest side, since you run the virtio-iommu driver and not the intel iommu driver).

Comment 28 Yanghang Liu 2022-07-27 16:05:59 UTC
(In reply to Eric Auger from comment #27)
> (In reply to Yanghang Liu from comment #20)
> > Hi Eric,
> > 
> > Could you please help check the comment 18 and comment 19 ?
> > 
> > Is it enough to verify this bug ?
> > 
> > Please let me know, if you have any concerns and want me to do more tests
> > for covering this bug.
> 
> Yep, this looks good to me (besides, you don't need intel_iommu=on on the guest side, since you run the virtio-iommu driver and not the intel iommu driver).


Thanks Eric for the confirmation.

Repeating the same test steps but without intel_iommu=on in the vm kernel options, the test result is still PASS.

Comment 29 Yanghang Liu 2022-07-27 16:07:06 UTC
Moving bug status to VERIFIED based on comment 19 and comment 28.

Comment 32 errata-xmlrpc 2022-11-15 09:54:42 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: qemu-kvm security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7967

