Bug 2055123 - [Q35] Failed to hot-plug a device whose membar > 2M into the vm
Summary: [Q35] Failed to hot-plug a device whose membar > 2M into the vm
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: edk2
Version: 9.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 9.3
Assignee: Gerd Hoffmann
QA Contact: Yanghang Liu
URL:
Whiteboard:
Duplicates: 2152130 (view as bug list)
Depends On: 2174749
Blocks:
 
Reported: 2022-02-16 10:35 UTC by Yanghang Liu
Modified: 2024-03-07 04:25 UTC (History)
CC List: 21 users

Fixed In Version: edk2-20230524-2.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-07 08:24:29 UTC
Type: ---
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-112508 0 None None None 2022-02-16 10:47:51 UTC
Red Hat Product Errata RHSA-2023:6330 0 None None None 2023-11-07 08:25:07 UTC

Description Yanghang Liu 2022-02-16 10:35:03 UTC
Description of problem:
Failed to hot-plug a device whose membar > 2M into the vm

Version-Release number of selected component (if applicable):
host:
5.14.0-58.el9.x86_64
qemu-kvm-6.2.0-7.el9.x86_64
guest:
5.14.0-55.el9.x86_64


How reproducible:
100%

Steps to Reproduce:
(1) start a Q35 + OVMF domain
# virt-install --machine=q35 --noreboot --name=rhel90 --memory=4096 --vcpus=4 --graphics type=vnc,port=5990,listen=0.0.0.0 --import --noautoconsole  --network bridge=switch,model=virtio,mac=52:54:00:00:90:90 --disk path=/home/images/RHEL90.qcow2,bus=virtio,cache=none,format=qcow2,io=threads,size=20  --boot=uefi --boot nvram.template=/usr/share/edk2/ovmf/OVMF_VARS.fd
# virsh start rhel90

(2) hot-plug a PF into the domain
# virsh attach-device rhel90 0000\:3b\:00.0.xml

(3) check the PF info in the domain
# ifconfig <-- cannot get any PF info here

# lspci
...
04:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]

# dmesg
[  279.937708] pci 0000:04:00.0: [15b3:101d] type 00 class 0x020000
[  279.941495] pci 0000:04:00.0: reg 0x10: [mem 0x00000000-0x01ffffff 64bit pref]
[  279.945293] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x000fffff pref]
[  279.947345] pci 0000:04:00.0: Max Payload Size set to 128 (was 256, max 512)
[  279.950637] pci 0000:04:00.0: PME# supported from D3cold
[  279.954359] pci 0000:04:00.0: 126.016 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x16 link at 0000:00:02.3 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
[  279.964555] pci 0000:04:00.0: BAR 0: no space for [mem size 0x02000000 64bit pref]
[  279.968099] pci 0000:04:00.0: BAR 0: failed to assign [mem size 0x02000000 64bit pref]
[  279.971755] pci 0000:04:00.0: BAR 6: assigned [mem 0xc1000000-0xc10fffff pref]
[  280.062655] mlx5_core 0000:04:00.0: Missing registers BAR, aborting
[  280.063989] mlx5_core 0000:04:00.0: mlx5_pci_init:768:(pid 1231): error requesting BARs, aborting
[  280.066176] mlx5_core 0000:04:00.0: probe_one:1480:(pid 1231): mlx5_pci_init failed with error code -19
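The dmesg lines above make the size conflict concrete. A throwaway helper (plain bash arithmetic; the function name is made up for illustration) converts the printed start/end addresses of a BAR range into a size, confirming the MT2892's BAR 0 is 32 MiB, well above the 2M threshold in the bug title:

```shell
# bar_size_mib START END: size in MiB of an inclusive [mem START-END] range,
# e.g. the "reg 0x10: [mem 0x00000000-0x01ffffff 64bit pref]" line above.
bar_size_mib() {
    echo $(( ($2 - $1 + 1) / 1024 / 1024 ))
}

bar_size_mib 0x00000000 0x01ffffff   # MT2892 BAR 0 -> prints 32
```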

Actual results:
The hot-plugged device does not work in the vm


Expected results:
The device works in the vm



Additional info:
(1) Not only this PF: any pci device with a memory bar larger than 2M reproduces this problem

(2)
This problem can be worked around by adding the following setting to the vm cfg:

-global pcie-root-port.pref64-reserve=64M  
or
  <qemu:commandline>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.pref64-reserve=64M'/>
  </qemu:commandline>
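One gotcha with the libvirt form: <qemu:commandline> is silently ignored unless the QEMU XML namespace is declared on the <domain> element. A minimal sketch (the namespace URI is libvirt's standard one; the rest of the domain XML is elided):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:commandline>
    <qemu:arg value='-global'/>
    <qemu:arg value='pcie-root-port.pref64-reserve=64M'/>
  </qemu:commandline>
</domain>
```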


(3) Using an XL710 to repeat the above steps, the related domain error is as follows:

# lspci -v -s 87:00.0
87:00.0 Ethernet controller: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 02)
	Subsystem: Intel Corporation Ethernet Converged Network Adapter XL710-Q2
	Flags: bus master, fast devsel, latency 0, IRQ 112, NUMA node 1, IOMMU group 119
	Memory at d5000000 (64-bit, prefetchable) [size=16M]
	Memory at d6808000 (64-bit, prefetchable) [size=32K]
	Expansion ROM at d3800000 [disabled] [size=512K]
	Capabilities: [40] Power Management version 3
	Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
	Capabilities: [70] MSI-X: Enable+ Count=129 Masked-
	Capabilities: [a0] Express Endpoint, MSI 00
	Capabilities: [e0] Vital Product Data
	Capabilities: [100] Advanced Error Reporting
	Capabilities: [140] Device Serial Number a8-90-15-ff-ff-fe-fd-3c
	Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
	Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
	Capabilities: [1a0] Transaction Processing Hints
	Capabilities: [1b0] Access Control Services
	Capabilities: [1d0] Secondary PCI Express
	Kernel driver in use: i40e
	Kernel modules: i40e


In the domain:
# dmesg
[  302.137403] pci 0000:04:00.0: [8086:1583] type 00 class 0x020000
[  302.139889] pci 0000:04:00.0: reg 0x10: [mem 0x00000000-0x00ffffff 64bit pref]
[  302.142861] pci 0000:04:00.0: reg 0x1c: [mem 0x00000000-0x00007fff 64bit pref]
[  302.144817] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
[  302.146366] pci 0000:04:00.0: Max Payload Size set to 128 (was 256, max 2048)
[  302.151852] pci 0000:04:00.0: BAR 0: no space for [mem size 0x01000000 64bit pref]
[  302.154193] pci 0000:04:00.0: BAR 0: failed to assign [mem size 0x01000000 64bit pref]
[  302.156556] pci 0000:04:00.0: BAR 6: assigned [mem 0x80200000-0x8027ffff pref]
[  302.158731] pci 0000:04:00.0: BAR 3: assigned [mem 0x800300000-0x800307fff 64bit pref]
[  302.202698] i40e: Intel(R) Ethernet Connection XL710 Network Driver
[  302.203365] i40e: Copyright (c) 2013 - 2019 Intel Corporation.
[  302.204273] i40e 0000:04:00.0: enabling device (0000 -> 0002)
[  302.206608] i40e 0000:04:00.0: Cannot map registers, bar size 0x0 too small, aborting
[  302.208038] i40e: probe of 0000:04:00.0 failed with error -12

Comment 1 Yanghang Liu 2022-02-16 10:52:05 UTC
> Additional info:
> (1) Not only PF, but all pci devices with a memory bar larger than 2M can reproduce this problem
> 
> (2)
> This problem can be fixed after adding the following setup into the vm cfg:
> 
> -global pcie-root-port.pref64-reserve=64M  
> or
>   <qemu:commandline>
>     <qemu:arg value='-global'/>
>     <qemu:arg value='pcie-root-port.pref64-reserve=64M'/>
>   </qemu:commandline>


1. Test the "Hot-plug the pci-testdev whose membar=4M into the vm" scenario without "-global pcie-root-port.pref64-reserve=64M" in the vm cfg:
  
  # virsh qemu-monitor-command rhel90 --hmp "device_add pci-testdev,membar=4M,bus=pci.4"

    The related dmesg in the vm:

        # dmesg
        [   91.926757] pci 0000:04:00.0: [1b36:0005] type 00 class 0x00ff00
        [   91.930049] pci 0000:04:00.0: reg 0x10: [mem 0x00000000-0x00000fff]
        [   91.933174] pci 0000:04:00.0: reg 0x14: [io  0x0000-0x00ff]
        [   91.936015] pci 0000:04:00.0: reg 0x18: [mem 0x00000000-0x003fffff 64bit pref]
        [   91.941166] pci 0000:04:00.0: BAR 2: no space for [mem size 0x00400000 64bit pref]
        [   91.944529] pci 0000:04:00.0: BAR 2: failed to assign [mem size 0x00400000 64bit pref]
        [   91.947251] pci 0000:04:00.0: BAR 0: assigned [mem 0xc1000000-0xc1000fff]
        [   91.949654] pci 0000:04:00.0: BAR 1: assigned [io  0x6000-0x60ff]


2. Test "Hot-plug the pci-testdev whose membar=4M into the vm" scenario with "-global pcie-root-port.pref64-reserve=64M" in the vm cfg:

  # virsh qemu-monitor-command rhel90 --hmp "device_add pci-testdev,membar=4M,bus=pci.4"

   The related dmesg in the vm:

        # dmesg
        [   45.943065] pci 0000:04:00.0: [1b36:0005] type 00 class 0x00ff00
        [   45.946711] pci 0000:04:00.0: reg 0x10: [mem 0x00000000-0x00000fff]
        [   45.950183] pci 0000:04:00.0: reg 0x14: [io  0x0000-0x00ff]
        [   45.952725] pci 0000:04:00.0: reg 0x18: [mem 0x00000000-0x003fffff 64bit pref]
        [   45.958099] pci 0000:04:00.0: BAR 2: assigned [mem 0x80c000000-0x80c3fffff 64bit pref]
        [   45.962333] pci 0000:04:00.0: BAR 0: assigned [mem 0xc1000000-0xc1000fff]
        [   45.965884] pci 0000:04:00.0: BAR 1: assigned [io  0x6000-0x60ff]

Comment 2 Yanghang Liu 2022-07-19 03:30:25 UTC
Hi Michael,

May I ask if there is any chance that we can fix this issue in the current 9.1? If so, could you please help set the ITR?

The reason I ask is that QE considers "hotplug a device whose membar > 2M into a vm" a very basic scenario and possibly a blocker for other related tests, so it would be better if this bug were fixed with priority.

Comment 3 Igor Mammedov 2022-08-09 07:52:08 UTC
(In reply to Yanghang Liu from comment #2)
> Hi Michael,
> 
> May I ask if there is any chance that we can fix this issue on current 9.1 ?
> If so, could you please help set the ITR ?
> 
> The reason I ask this is that QE thinks "hotplug a device whose membar > 2M
> into a vm" is a very basic scenario and maybe a blocker for other related
> tests, so it is better if this bug can be fixed with priority.

There is no proven way to fix it so far (so 9.1 is out of the question).
At the moment you can either specify reservation hints explicitly or switch to
native PCI-E hotplug to get around the issue.

PS:
(I'm looking at allowing IO resource reallocation, which might fix the problem
with default ACPI hotplug, but it's still work in progress and there is no
guarantee that it will work out in the end)
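The second workaround mentioned here, switching Q35 from ACPI-based to native PCIe hotplug, can also be expressed as a -global. A sketch (the property name below is the QEMU 6.1+ knob for this, to the best of my knowledge; verify it against your QEMU version):

```shell
# Disable ACPI hotplug on the Q35 machine so the guest kernel handles
# hot-plug via native PCIe (pciehp) instead.
qemu-kvm -machine q35 \
    -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off \
    ...
```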

Comment 4 Gerd Hoffmann 2022-12-12 07:07:39 UTC
> There is no proven way to fix it in so far (so 9.1 is out of question).
> At the moment you can either specify reservation hints explicitly or switch
> to
> native PCI-E hotplug to get around the issue.

FYI: Latest edk2 (2022-11+) scales the default 64-bit pci io window
size and the default 64-bit pci bridge window sizes with the available
physical address space.

Right now only rawhide has rpms.  f37, f36, c9s should get the edk2
rebase soon.
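Since the new edk2 default sizes the 64-bit window from the guest's physical address space, the address width advertised by the vCPU model becomes relevant. One way to pass the host's width through (a sketch; host-phys-bits is a standard x86 -cpu property, verify it on your QEMU version):

```shell
# Expose the host's physical address width to the guest so the firmware
# can scale the 64-bit MMIO window accordingly.
qemu-kvm -machine q35 -cpu host,host-phys-bits=on ...
```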

Comment 5 Dr. David Alan Gilbert 2022-12-12 09:46:59 UTC
*** Bug 2152130 has been marked as a duplicate of this bug. ***

Comment 8 Yanghang Liu 2023-04-06 09:36:58 UTC
(In reply to Gerd Hoffmann from comment #4)
 
> FYI: Latest edk2 (2022-11+) scales the default 64-bit pci io window
> size and the default 64-bit pci bridge window sizes with the available
> physical address space.
> 
> Right now only rawhide has rpms.  f37, f36, c9s should get the edk2
> rebase soon.

Hi Igor,

May I ask if we have a conclusion about which package this bug will eventually be fixed on ? 

Is it qemu-kvm or edk2-ovmf?

We may need to change this bug's component based on your feedback.

Comment 9 Gerd Hoffmann 2023-04-06 13:45:09 UTC
(In reply to Yanghang Liu from comment #8)
> (In reply to Gerd Hoffmann from comment #4)
>  
> > FYI: Latest edk2 (2022-11+) scales the default 64-bit pci io window
> > size and the default 64-bit pci bridge window sizes with the available
> > physical address space.
> > 
> > Right now only rawhide has rpms.  f37, f36, c9s should get the edk2
> > rebase soon.
> 
> Hi Igor,
> 
> May I ask if we have a conclusion about which package this bug will
> eventually be fixed on ? 

edk2-ovmf.  We tried for 9.2, but ran into live migration problems
(see bug 2171860), so we had to back it out.  Will come for real
in 9.3, with luck in 9.2.z too.

Comment 10 Yanghang Liu 2023-04-07 10:14:15 UTC
This bug can still be reproduced with edk2-ovmf-20221207gitfff6d81270b5-8.el9_2.noarch.

The main check points:
[1] start a Q35 + OVMF domain

[2] hot-plug a MT2892 PF into the domain

[3] check the PF status in the domain
# ifconfig <-- cannot get any PF info here
# dmesg 
...
[   47.599819] pci 0000:04:00.0: [15b3:101d] type 00 class 0x020000
[   47.600128] pci 0000:04:00.0: reg 0x10: [mem 0x00000000-0x01ffffff 64bit pref]
[   47.600597] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x000fffff pref]
[   47.600778] pci 0000:04:00.0: Max Payload Size set to 128 (was 256, max 512)
[   47.602917] pci 0000:04:00.0: 63.008 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x8 link at 0000:00:02.3 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
[   47.604415] pci 0000:04:00.0: BAR 0: no space for [mem size 0x02000000 64bit pref]
[   47.604419] pci 0000:04:00.0: BAR 0: failed to assign [mem size 0x02000000 64bit pref]
[   47.604425] pci 0000:04:00.0: BAR 6: assigned [mem 0x80200000-0x802fffff pref]
[   47.936797] mlx5_core 0000:04:00.0: Missing registers BAR, aborting
[   47.936801] mlx5_core 0000:04:00.0: mlx5_pci_init:839:(pid 4556): error requesting BARs, aborting
[   47.937043] mlx5_core 0000:04:00.0: probe_one:1679:(pid 4556): mlx5_pci_init failed with error code -19

Comment 11 Yanghang Liu 2023-04-07 10:15:09 UTC
(In reply to Gerd Hoffmann from comment #9)

> > 
> > May I ask if we have a conclusion about which package this bug will
> > eventually be fixed on ? 
> 
> edk2-ovmf.  We tried for 9.2, but ran into live migration problems
> (see bug 2171860), so we had do back out.  Will come for real
> in 9.3, with luck in 9.2.z too.

Hi  Gerd,

Thanks for the info.

I have moved the bug's component to edk2 and set the ITR to 9.3.0.

Feel free to correct me if anything changes.

Comment 13 Yanan Fu 2023-07-04 01:39:24 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 16 Yanghang Liu 2023-07-05 02:18:35 UTC
Hi Gerd,  

My test result shows edk2-20230524-1.el9 cannot fix my issue; could you please help check it?

Test env:
5.14.0-334.el9.x86_64
qemu-kvm-8.0.0-6.el9.x86_64
edk2-ovmf-20230524-1.el9.noarch
libvirt-9.5.0-0rc1.1.el9.x86_64


Test result: FAILED

Test step:
(1) start a domain
# virt-install --machine=q35 --noreboot --name=rhel93 --memory=4096 --vcpus=4 --graphics type=vnc,port=5993,listen=0.0.0.0 --boot=uefi --network bridge=switch,model=virtio,mac=52:54:00:00:93:93 --import --noautoconsole --disk path=/home/images/RHEL93.qcow2,bus=virtio,cache=none,format=qcow2,io=threads,size=20 --osinfo detect=on,require=off

(2) hot-plug a MT2892 PF into domain
# lspci -s 60:00.0
60:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]

# /bin/virsh attach-device rhel93 /tmp/device/0000:87:00.0.xml
Device attached successfully

(3) check the PF status in the domain
# dmesg 
[  111.800926] pci 0000:04:00.0: [15b3:101d] type 00 class 0x020000
[  111.801232] pci 0000:04:00.0: reg 0x10: [mem 0x00000000-0x01ffffff 64bit pref]
[  111.801511] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x000fffff pref]
[  111.801636] pci 0000:04:00.0: Max Payload Size set to 128 (was 256, max 512)
[  111.803490] pci 0000:04:00.0: 63.008 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x8 link at 0000:00:02.3 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
[  111.807277] pci 0000:04:00.0: BAR 0: no space for [mem size 0x02000000 64bit pref]
[  111.807282] pci 0000:04:00.0: BAR 0: failed to assign [mem size 0x02000000 64bit pref]
[  111.807288] pci 0000:04:00.0: BAR 6: assigned [mem 0x82400000-0x824fffff pref]
[  112.171492] mlx5_core 0000:04:00.0: Missing registers BAR, aborting
[  112.171495] mlx5_core 0000:04:00.0: mlx5_pci_init:839:(pid 1397): error requesting BARs, aborting
[  112.171706] mlx5_core 0000:04:00.0: probe_one:1687:(pid 1397): mlx5_pci_init failed with error code -19

(4) Repeating the above test with a QL41112 PF, the QL41112 PF failed to be hot-plugged into the domain as well.

The related guest dmesg: 
# dmesg
[   40.045152] pci 0000:04:00.0: [1077:8070] type 00 class 0x020000
[   40.045467] pci 0000:04:00.0: reg 0x10: [mem 0x00000000-0x0001ffff 64bit pref]
[   40.045568] pci 0000:04:00.0: reg 0x18: [mem 0x00000000-0x007fffff 64bit pref]
[   40.045666] pci 0000:04:00.0: reg 0x20: [mem 0x00000000-0x0000ffff 64bit pref]
[   40.045728] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
[   40.045843] pci 0000:04:00.0: Max Payload Size set to 128 (was 256, max 512)
[   40.051771] pci 0000:04:00.0: BAR 2: no space for [mem size 0x00800000 64bit pref]
[   40.051776] pci 0000:04:00.0: BAR 2: failed to assign [mem size 0x00800000 64bit pref]
[   40.051781] pci 0000:04:00.0: BAR 6: assigned [mem 0x82400000-0x8247ffff pref]
[   40.051784] pci 0000:04:00.0: BAR 0: assigned [mem 0x800300000-0x80031ffff 64bit pref]
[   40.051979] pci 0000:04:00.0: BAR 4: assigned [mem 0x800320000-0x80032ffff 64bit pref]
[   40.175261] QLogic FastLinQ 4xxxx Core Module qed
[   40.201075] qede init: QLogic FastLinQ 4xxxx Ethernet Driver qede
[   40.201223] qede 0000:04:00.0: enabling device (0000 -> 0002)
[   40.203280] [qed_init_pci:299()]No memory region found in bar #2
[   40.203510] [qed_probe:516()]init pci failed


Additional info:

I tried this issue with edk2-ovmf-20230301gitf80f052277c8-3.el9.bz2174749.20230515.1346.noarch before, and the MT2892/QL41112 PF could be hot-plugged into the domain at that time.

The details are in https://bugzilla.redhat.com/show_bug.cgi?id=2174749#c16

The test log at that time : 
http://10.73.72.41/log/bug/Bug2174749/2023_05_15_08:58:13_MT2892
http://10.73.72.41/log/bug/Bug2174749/2023_05_15_09:01:27_QL41112

Comment 17 Yanghang Liu 2023-07-05 10:20:02 UTC
Move the status to ASSIGNED

Comment 23 Gerd Hoffmann 2023-07-06 08:42:28 UTC
(In reply to Yanghang Liu from comment #16)
> Hi Gerd,  
> 
> My test result shows edk2-20230524-1.el9 can not fix my issue, could you
> please help check it ?

edk2-20230524-1.el9 does not yet have the patches to re-enable the
dynamic mmio window.

Latest dynamic mmio window scratch build is here:
https://bugzilla.redhat.com/show_bug.cgi?id=2174749#c42

Alternatively wait for the next edk2 build which should
arrive next week when mirek is back from PTO.

Comment 25 Gerd Hoffmann 2023-07-11 08:06:54 UTC
(In reply to Gerd Hoffmann from comment #23)
> Alternatively wait for the next edk2 build which should
> arrive next week when mirek is back from PTO.

edk2-20230524-2.el9 is available now.

Comment 26 Yanghang Liu 2023-07-17 03:49:13 UTC
Test env: edk2-ovmf-20230524-2.el9.noarch

Test device:  MT2892, QL41112,82599ES,E810, XXV710

Test device details:

60:00.0 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
        Subsystem: Mellanox Technologies Device 0083
        Flags: bus master, fast devsel, latency 0, IRQ 65, NUMA node 0, IOMMU group 3
        Memory at bc000000 (64-bit, prefetchable) [size=32M]
        Expansion ROM at b8900000 [disabled] [size=1M]
        Capabilities: [60] Express Endpoint, MSI 00
        Capabilities: [48] Vital Product Data
        Capabilities: [9c] MSI-X: Enable+ Count=64 Masked-
        Capabilities: [c0] Vendor Specific Information: Len=18 <?>
        Capabilities: [40] Power Management version 3
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [180] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [1c0] Secondary PCI Express
        Capabilities: [230] Access Control Services
        Capabilities: [320] Lane Margining at the Receiver <?>
        Capabilities: [370] Physical Layer 16.0 GT/s <?>
        Capabilities: [420] Data Link Feature <?>
        Kernel driver in use: mlx5_core
        Kernel modules: mlx5_core


60:00.2 Ethernet controller: Mellanox Technologies ConnectX Family mlx5Gen Virtual Function
        Subsystem: Mellanox Technologies Device 0083
        Flags: bus master, fast devsel, latency 0, NUMA node 0, IOMMU group 132
        Memory at be800000 (64-bit, prefetchable) [virtual] [size=1M]
        Capabilities: [60] Express Endpoint, MSI 00
        Capabilities: [9c] MSI-X: Enable+ Count=12 Masked-
        Capabilities: [100] Vendor Specific Information: ID=0000 Rev=0 Len=00c <?>
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Kernel driver in use: mlx5_core
        Kernel modules: mlx5_core


3b:00.0 Ethernet controller: QLogic Corp. FastLinQ QL41000 Series 10/25/40/50GbE Controller (rev 02)
        Subsystem: QLogic Corp. 10GE 2P QL41112HxCU-DE Adapter
        Flags: bus master, fast devsel, latency 0, IRQ 117, NUMA node 0, IOMMU group 6
        Memory at ac020000 (64-bit, prefetchable) [size=128K]
        Memory at ab800000 (64-bit, prefetchable) [size=8M]
        Memory at ac050000 (64-bit, prefetchable) [size=64K]
        Expansion ROM at aca00000 [disabled] [size=512K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/8 Maskable+ 64bit+
        Capabilities: [70] Express Endpoint, MSI 00
        Capabilities: [b0] MSI-X: Enable+ Count=129 Masked-
        Capabilities: [d0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [148] Virtual Channel
        Capabilities: [168] Device Serial Number 00-00-00-00-00-00-00-00
        Capabilities: [178] Power Budgeting <?>
        Capabilities: [188] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [198] Secondary PCI Express
        Capabilities: [1b8] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [1f8] Transaction Processing Hints
        Capabilities: [284] Latency Tolerance Reporting
        Capabilities: [28c] Vendor Specific Information: ID=0002 Rev=3 Len=100 <?>
        Capabilities: [38c] Vendor Specific Information: ID=0001 Rev=1 Len=038 <?>
        Capabilities: [3c4] Precision Time Measurement
        Capabilities: [3d0] Vendor Specific Information: ID=0003 Rev=1 Len=054 <?>
        Capabilities: [424] Physical Resizable BAR
        Kernel driver in use: qede
        Kernel modules: qede


3b:02.0 Ethernet controller: QLogic Corp. FastLinQ QL41000 Series Gigabit Ethernet Controller (SR-IOV VF) (rev 02)
        Subsystem: QLogic Corp. 10GE 2P QL41112HxCU-DE Adapter
        Flags: bus master, fast devsel, latency 0, NUMA node 0, IOMMU group 133
        Memory at ac360000 (64-bit, prefetchable) [virtual] [size=32K]
        Memory at ac840000 (64-bit, prefetchable) [virtual] [size=4K]
        Memory at ac720000 (64-bit, prefetchable) [virtual] [size=8K]
        Capabilities: [70] Express Endpoint, MSI 00
        Capabilities: [b0] MSI-X: Enable+ Count=16 Masked-
        Capabilities: [100] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [110] Transaction Processing Hints
        Kernel driver in use: qede
        Kernel modules: qede


d8:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
        Subsystem: Intel Corporation Ethernet Server Adapter X520-2
        Flags: bus master, fast devsel, latency 0, IRQ 115, NUMA node 1, IOMMU group 185
        Memory at ee880000 (64-bit, non-prefetchable) [size=512K]
        I/O ports at e020 [size=32]
        Memory at eeb04000 (64-bit, non-prefetchable) [size=16K]
        Expansion ROM at eed80000 [disabled] [size=512K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=64 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [e0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 00-1b-21-ff-ff-c3-d0-3c
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Kernel driver in use: ixgbe
        Kernel modules: ixgbe


d8:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function (rev 01)
        Subsystem: Intel Corporation Device 7a11
        Flags: bus master, fast devsel, latency 0, NUMA node 1, IOMMU group 187
        Memory at eeb08000 (64-bit, non-prefetchable) [virtual] [size=16K]
        Memory at eec08000 (64-bit, non-prefetchable) [virtual] [size=16K]
        Capabilities: [70] MSI-X: Enable+ Count=3 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Kernel driver in use: ixgbevf
        Kernel modules: ixgbevf

3b:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
        Subsystem: Intel Corporation Ethernet Network Adapter E810-C-Q2
        Flags: bus master, fast devsel, latency 0, IRQ 114, NUMA node 0, IOMMU group 65
        Memory at ae000000 (64-bit, prefetchable) [size=32M]
        Memory at b2010000 (64-bit, prefetchable) [size=64K]
        Expansion ROM at ab000000 [disabled] [size=1M]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=1024 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [e0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [148] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [150] Device Serial Number 6c-fe-54-ff-ff-47-62-20
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [1a0] Transaction Processing Hints
        Capabilities: [1b0] Access Control Services
        Capabilities: [1d0] Secondary PCI Express
        Capabilities: [200] Data Link Feature <?>
        Capabilities: [210] Physical Layer 16.0 GT/s <?>
        Capabilities: [250] Lane Margining at the Receiver <?>
        Kernel driver in use: ice
        Kernel modules: ice

3b:01.0 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
        Subsystem: Intel Corporation Device 0000
        Flags: bus master, fast devsel, latency 0, NUMA node 0, IOMMU group 188
        Memory at b1000000 (64-bit, prefetchable) [virtual] [size=128K]
        Memory at b2220000 (64-bit, prefetchable) [virtual] [size=16K]
        Capabilities: [70] MSI-X: Enable+ Count=17 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [148] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [1a0] Transaction Processing Hints
        Capabilities: [1d0] Access Control Services
        Kernel driver in use: iavf
        Kernel modules: iavf


87:02.0 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 02)
        Subsystem: Intel Corporation Device 0000
        Flags: bus master, fast devsel, latency 0, NUMA node 1, IOMMU group 189
        Memory at d6400000 (64-bit, prefetchable) [virtual] [size=64K]
        Memory at d6910000 (64-bit, prefetchable) [virtual] [size=16K]
        Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [1a0] Transaction Processing Hints
        Capabilities: [1d0] Access Control Services
        Kernel driver in use: iavf
        Kernel modules: iavf


41:00.0 Ethernet controller: Intel Corporation Ethernet Controller XXV710 for 25GbE SFP28 (rev 02)
        Subsystem: Intel Corporation Ethernet 25G 2P XXV710 Adapter
        Flags: bus master, fast devsel, latency 0, IRQ 109, IOMMU group 40
        Memory at 91000000 (64-bit, prefetchable) [size=16M]
        Memory at 92808000 (64-bit, prefetchable) [size=32K]
        Expansion ROM at 92b00000 [disabled] [size=512K]
        Capabilities: [40] Power Management version 3
        Capabilities: [50] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [70] MSI-X: Enable+ Count=129 Masked-
        Capabilities: [a0] Express Endpoint, MSI 00
        Capabilities: [e0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [140] Device Serial Number 40-eb-b5-ff-ff-fe-fd-3c
        Capabilities: [150] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [160] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [1a0] Transaction Processing Hints
        Capabilities: [1b0] Access Control Services
        Capabilities: [1d0] Secondary PCI Express
        Kernel driver in use: i40e
        Kernel modules: i40e



Test result:
2023-07-14 18:14:55 | PASS - hot plug 1 MT2892 pf into rhel93 domain
2023-07-14 18:17:06 | PASS - hot plug 2 MT2892 pf into rhel93 domain
2023-07-14 18:09:48 | PASS - hot plug 1 MT2892 vf into rhel93 domain
2023-07-14 18:12:50 | PASS - hot plug 7 MT2892 vf into rhel93 domain

2023-07-14 18:22:19 | PASS - hot plug 1 QL41112 pf into rhel93 domain
2023-07-14 18:27:46 | PASS - hot plug 2 QL41112 pf into rhel93 domain
2023-07-17 11:08:12 | PASS - hot plug 1 QL41112 vf into rhel93 domain
2023-07-17 11:14:37 | PASS - hot plug 7 QL41112 vf into rhel93 domain

2023-07-14 18:20:39 | PASS - hot plug 1 82599ES vf into rhel93 domain
2023-07-14 18:23:32 | PASS - hot plug 7 82599ES vf into rhel93 domain
2023-07-14 18:16:27 | PASS - hot plug 1 82599ES pf into rhel93 domain
2023-07-14 18:18:39 | PASS - hot plug 2 82599ES pf into rhel93 domain

2023-07-14 18:11:45 | PASS - hot plug 1 E810 vf into rhel93 domain
2023-07-14 18:14:36 | PASS - hot plug 7 E810 vf into rhel93 domain
2023-07-14 18:07:52 | PASS - hot plug 1 E810 pf into rhel93 domain
2023-07-14 18:09:52 | PASS - hot plug 2 E810 pf into rhel93 domain

2023-07-17 10:57:10 | PASS - hot plug 1 XXV710 pf into rhel93 domain
2023-07-17 11:02:17 | PASS - hot plug 2 XXV710 pf into rhel93 domain
2023-07-17 11:07:24 | PASS - hot plug 1 XXV710 vf into rhel93 domain
2023-07-17 11:13:18 | PASS - hot plug 7 XXV710 vf into rhel93 domain

Comment 28 Yanghang Liu 2023-07-17 09:04:08 UTC
Test env: edk2-ovmf-20230524-2.el9.noarch

Test device:  SFC9220 

1a:00.0 Ethernet controller: Solarflare Communications SFC9220 10/40G Ethernet Controller (rev 02)
        Subsystem: Solarflare Communications SFN8522-R2 8000 Series 10G Adapter
        Flags: bus master, fast devsel, latency 0, IRQ 117, NUMA node 0, IOMMU group 33
        I/O ports at 4100 [size=256]
        Memory at 9e000000 (64-bit, non-prefetchable) [size=8M]
        Memory at a6904000 (64-bit, non-prefetchable) [size=16K]
        Expansion ROM at a6a40000 [disabled] [size=256K]
        Capabilities: [40] Power Management version 3
        Capabilities: [70] Express Endpoint, MSI 00
        Capabilities: [b0] MSI-X: Enable+ Count=32 Masked-
        Capabilities: [d0] Vital Product Data
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [148] Device Serial Number 00-0f-53-ff-ff-4d-8c-30
        Capabilities: [158] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [168] Secondary PCI Express
        Capabilities: [198] Single Root I/O Virtualization (SR-IOV)
        Capabilities: [1d8] Transaction Processing Hints
        Capabilities: [26c] L1 PM Substates
        Kernel driver in use: sfc
        Kernel modules: sfc

1a:00.2 Ethernet controller: Solarflare Communications SFC9220 10/40G Ethernet Controller (Virtual Function) (rev 02)
        Subsystem: Solarflare Communications Device 8017
        Flags: bus master, fast devsel, latency 0, NUMA node 0, IOMMU group 190
        Memory at a2800000 (64-bit, non-prefetchable) [virtual] [size=1M]
        Memory at a6908000 (64-bit, non-prefetchable) [virtual] [size=16K]
        Capabilities: [70] Express Endpoint, MSI 00
        Capabilities: [b0] MSI-X: Enable+ Count=4 Masked-
        Capabilities: [100] Alternative Routing-ID Interpretation (ARI)
        Capabilities: [110] Transaction Processing Hints
        Kernel driver in use: sfc
        Kernel modules: sfc



Test result:
2023-07-14 18:43:02 | FAIL - hot plug 1 SFC9220 pf to rhel93 domain
2023-07-14 18:48:56 | FAIL - hot plug 2 SFC9220 pf into rhel93
2023-07-17 11:04:35 | PASS - hot plug 1 SFC9220 vf to rhel93 domain
2023-07-17 11:10:41 | PASS - hot plug 7 SFC9220 vf into rhel93

Comment 29 Gerd Hoffmann 2023-07-18 13:59:16 UTC
> 1a:00.0 Ethernet controller: Solarflare Communications SFC9220 10/40G
> Ethernet Controller (rev 02)
>         Subsystem: Solarflare Communications SFN8522-R2 8000 Series 10G
> Adapter

>         Memory at 9e000000 (64-bit, non-prefetchable) [size=8M]
>         Memory at a6904000 (64-bit, non-prefetchable) [size=16K]

> Test result:
> 2023-07-14 18:43:02 | FAIL - hot plug 1 SFC9220 pf to rhel93 domain
> 2023-07-14 18:48:56 | FAIL - hot plug 2 SFC9220 pf into rhel93

As expected.  The big (8M) non-prefetchable bar continues to need manual
configuration of the pcie root port (mem-reserve property).
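The bug title's 2M threshold can be turned into a quick check. A minimal sketch (needs_mem_reserve is a hypothetical helper; the sizes are the [size=...] strings lspci prints, and 2M is the problem threshold named in this bug):

```shell
# Hypothetical helper: decide whether a non-prefetchable BAR of the given
# lspci-reported size (e.g. 8M, 16K) exceeds 2M and therefore needs a
# manual mem-reserve on its pcie-root-port to hot-plug successfully.
needs_mem_reserve() {
    size=$1
    num=${size%?}                  # strip the unit suffix
    unit=${size#"$num"}
    case $unit in
        K) bytes=$((num * 1024)) ;;
        M) bytes=$((num * 1024 * 1024)) ;;
        G) bytes=$((num * 1024 * 1024 * 1024)) ;;
        *) bytes=$size ;;          # plain byte count, no suffix
    esac
    if [ "$bytes" -gt $((2 * 1024 * 1024)) ]; then
        echo yes
    else
        echo no
    fi
}

needs_mem_reserve 8M    # SFC9220 PF BAR 2 -> yes
needs_mem_reserve 16K   # SFC9220 PF BAR 4 -> no
```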

> 2023-07-17 11:04:35 | PASS - hot plug 1 SFC9220 vf to rhel93 domain
> 2023-07-17 11:10:41 | PASS - hot plug 7 SFC9220 vf into rhel93

Good.

Comment 30 Yanghang Liu 2023-07-18 15:36:42 UTC
(In reply to Gerd Hoffmann from comment #29)
> > 1a:00.0 Ethernet controller: Solarflare Communications SFC9220 10/40G
> > Ethernet Controller (rev 02)
> >         Subsystem: Solarflare Communications SFN8522-R2 8000 Series 10G
> > Adapter
> 
> >         Memory at 9e000000 (64-bit, non-prefetchable) [size=8M]
> >         Memory at a6904000 (64-bit, non-prefetchable) [size=16K]
> 
> > Test result:
> > 2023-07-14 18:43:02 | FAIL - hot plug 1 SFC9220 pf to rhel93 domain
> > 2023-07-14 18:48:56 | FAIL - hot plug 2 SFC9220 pf into rhel93
> 
> As expected.  The big (8M) non-prefetchable bar continues to need manual
> configuration of the pcie root port (mem-reserve property).

Hi Gerd :) 

Thanks for the confirmation.

In my tests, I found that the mem-reserve property of the pcie-root-port is not exposed via libvirt.

Would you suggest that we open a libvirt bug to request exposing it?

I am now testing whether the SFC9220 PF can be hot-plugged via the qemu-kvm command line into a VM whose pcie-root-port has the mem-reserve property set.

The relevant qemu-kvm option looks like:
-device '{"driver":"pcie-root-port","port":20,"chassis":5,"id":"pci.5","mem-reserve":16777216,"bus":"pcie.0","addr":"0x2.0x4"}' \
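Note that mem-reserve takes a byte count. A small sketch of that arithmetic and of composing the -device JSON (the port/chassis/id values simply mirror the line above):

```shell
# mem-reserve is specified in bytes: 16M = 16777216, comfortably above
# the SFC9220 PF's 8M non-prefetchable BAR.
mem_reserve=$((16 * 1024 * 1024))

# Compose the -device JSON argument shown above.
device=$(printf '{"driver":"pcie-root-port","port":20,"chassis":5,"id":"pci.5","mem-reserve":%d,"bus":"pcie.0","addr":"0x2.0x4"}' "$mem_reserve")
echo "$device"
```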

I will update my test result in the comment soon.

Comment 31 Yanghang Liu 2023-07-19 02:42:01 UTC
(In reply to Yanghang Liu from comment #28)
> Test env: edk2-ovmf-20230524-2.el9.noarch
> 
> Test device:  SFC9220 
> 
> 1a:00.0 Ethernet controller: Solarflare Communications SFC9220 10/40G
> Ethernet Controller (rev 02)
>         Subsystem: Solarflare Communications SFN8522-R2 8000 Series 10G
> Adapter
>         Flags: bus master, fast devsel, latency 0, IRQ 117, NUMA node 0,
> IOMMU group 33
>         I/O ports at 4100 [size=256]
>         Memory at 9e000000 (64-bit, non-prefetchable) [size=8M]
>         Memory at a6904000 (64-bit, non-prefetchable) [size=16K]
>         Expansion ROM at a6a40000 [disabled] [size=256K]
>         Capabilities: [40] Power Management version 3
>         Capabilities: [70] Express Endpoint, MSI 00
>         Capabilities: [b0] MSI-X: Enable+ Count=32 Masked-
>         Capabilities: [d0] Vital Product Data
>         Capabilities: [100] Advanced Error Reporting
>         Capabilities: [148] Device Serial Number 00-0f-53-ff-ff-4d-8c-30
>         Capabilities: [158] Alternative Routing-ID Interpretation (ARI)
>         Capabilities: [168] Secondary PCI Express
>         Capabilities: [198] Single Root I/O Virtualization (SR-IOV)
>         Capabilities: [1d8] Transaction Processing Hints
>         Capabilities: [26c] L1 PM Substates
>         Kernel driver in use: sfc
>         Kernel modules: sfc
> 
> 1a:00.2 Ethernet controller: Solarflare Communications SFC9220 10/40G
> Ethernet Controller (Virtual Function) (rev 02)
>         Subsystem: Solarflare Communications Device 8017
>         Flags: bus master, fast devsel, latency 0, NUMA node 0, IOMMU group
> 190
>         Memory at a2800000 (64-bit, non-prefetchable) [virtual] [size=1M]
>         Memory at a6908000 (64-bit, non-prefetchable) [virtual] [size=16K]
>         Capabilities: [70] Express Endpoint, MSI 00
>         Capabilities: [b0] MSI-X: Enable+ Count=4 Masked-
>         Capabilities: [100] Alternative Routing-ID Interpretation (ARI)
>         Capabilities: [110] Transaction Processing Hints
>         Kernel driver in use: sfc
>         Kernel modules: sfc
> 
> 
> 
> Test result:
> 2023-07-14 18:43:02 | FAIL - hot plug 1 SFC9220 pf to rhel93 domain
> 2023-07-14 18:48:56 | FAIL - hot plug 2 SFC9220 pf into rhel93
> 2023-07-17 11:04:35 | PASS - hot plug 1 SFC9220 vf to rhel93 domain
> 2023-07-17 11:10:41 | PASS - hot plug 7 SFC9220 vf into rhel93


Test with SFC9220 PF + 16M pcie-root-port.

Test result: PASS

Test step:
(1) bind the two PFs to the vfio-pci driver

(2) start a Q35 + OVMF VM 

/usr/libexec/qemu-kvm \
-name guest=rhel93,debug-threads=on \
-blockdev '{"driver":"file","filename":"/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/rhel93_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-q35-rhel9.2.0,usb=off,smm=on,dump-guest-core=off,memory-backend=pc.ram,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
-accel kvm \
-cpu host,migratable=on \
-m 8192 \
-object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":8589934592}' \
-overcommit mem-lock=off \
-smp 4,sockets=4,dies=1,cores=1,threads=1 \
-uuid ce70e79f-8854-490a-8b0b-f5261a9b8bad \
-no-user-config \
-nodefaults \
-rtc base=utc,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-shutdown \
-global ICH9-LPC.disable_s3=1 \
-global ICH9-LPC.disable_s4=1 \
-boot strict=on \
-device '{"driver":"pcie-root-port","port":16,"chassis":1,"id":"pci.1","bus":"pcie.0","multifunction":true,"addr":"0x2"}' \
-device '{"driver":"pcie-root-port","port":17,"chassis":2,"id":"pci.2","bus":"pcie.0","addr":"0x2.0x1"}' \
-device '{"driver":"pcie-root-port","port":18,"chassis":3,"id":"pci.3","bus":"pcie.0","addr":"0x2.0x2"}' \
-device '{"driver":"pcie-root-port","port":19,"chassis":4,"id":"pci.4","bus":"pcie.0","addr":"0x2.0x3"}' \
-device '{"driver":"pcie-root-port","port":20,"chassis":5,"id":"pci.5","mem-reserve":16777216,"bus":"pcie.0","addr":"0x2.0x4"}' \
-device '{"driver":"pcie-root-port","port":21,"chassis":6,"id":"pci.6","mem-reserve":16777216,"bus":"pcie.0","addr":"0x2.0x5"}' \
-device '{"driver":"pcie-root-port","port":22,"chassis":7,"id":"pci.7","bus":"pcie.0","addr":"0x2.0x6"}' \
-device '{"driver":"pcie-root-port","port":23,"chassis":8,"id":"pci.8","bus":"pcie.0","addr":"0x2.0x7"}' \
-device '{"driver":"pcie-root-port","port":24,"chassis":9,"id":"pci.9","bus":"pcie.0","multifunction":true,"addr":"0x3"}' \
-device '{"driver":"pcie-root-port","port":25,"chassis":10,"id":"pci.10","bus":"pcie.0","addr":"0x3.0x1"}' \
-device '{"driver":"pcie-root-port","port":26,"chassis":11,"id":"pci.11","bus":"pcie.0","addr":"0x3.0x2"}' \
-device '{"driver":"pcie-root-port","port":27,"chassis":12,"id":"pci.12","bus":"pcie.0","addr":"0x3.0x3"}' \
-device '{"driver":"pcie-root-port","port":28,"chassis":13,"id":"pci.13","bus":"pcie.0","addr":"0x3.0x4"}' \
-device '{"driver":"pcie-root-port","port":29,"chassis":14,"id":"pci.14","bus":"pcie.0","addr":"0x3.0x5"}' \
-blockdev '{"node-name": "file_image1", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "threads", "filename": "/home/images/RHEL93.qcow2", "cache": {"direct": true, "no-flush": false}}' \
-blockdev '{"node-name": "drive_image1", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_image1"}' \
-device '{"driver": "virtio-blk-pci", "id": "image1", "drive": "drive_image1", "bootindex": 1, "write-cache": "on", "bus": "pci.2", "addr": "0x0"}' \
-netdev '{"type":"tap","vhost":true,"id":"hostnet0"}' \
-device '{"driver":"virtio-net-pci","netdev":"hostnet0","id":"net0","mac":"52:54:00:41:5b:56","bus":"pci.1","addr":"0x0"}' \
-vnc 0.0.0.0:93 \
-device '{"driver":"virtio-vga","id":"video0","max_outputs":1,"bus":"pcie.0","addr":"0x1"}' \
-device '{"driver":"virtio-balloon-pci","id":"balloon0","bus":"pci.3","addr":"0x0"}' \
-object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"}' \
-device '{"driver":"virtio-rng-pci","rng":"objrng0","id":"rng0","bus":"pci.4","addr":"0x0"}' \
-monitor stdio \
-qmp tcp:0:5555,server,nowait \

(3) hot-plug two SFC9220 PFs into the VM (whose pcie-root-ports have the mem-reserve property set to 16M)

(qemu) device_add vfio-pci,host=0000:1a:00.0,id=hostdev0,bus=pci.5
(qemu) device_add vfio-pci,host=0000:1a:00.1,id=hostdev1,bus=pci.6
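The same hot-plugs can also be driven over the QMP socket opened with -qmp tcp:0:5555. A sketch (qmp_send is a hypothetical wrapper that in a live session would pipe each payload to the socket; QMP requires the qmp_capabilities negotiation before any other command):

```shell
# Hypothetical wrapper: in a live session the payload would be piped to
# the QMP socket, e.g.  printf '%s\n' "$1" | nc localhost 5555
qmp_send() {
    printf '%s\n' "$1"
}

# Capability negotiation must come first.
qmp_send '{"execute":"qmp_capabilities"}'
qmp_send '{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"0000:1a:00.0","id":"hostdev0","bus":"pci.5"}}'
qmp_send '{"execute":"device_add","arguments":{"driver":"vfio-pci","host":"0000:1a:00.1","id":"hostdev1","bus":"pci.6"}}'
```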

(4) check if the two PFs have been hot-plugged into the VM 

# ifconfig
enp5s0np0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.200.103  netmask 255.255.255.0  broadcast 192.168.200.255
        inet6 2001::6e74:5d13:9b7c:d348  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::2d03:dfb1:ac5e:4d3c  prefixlen 64  scopeid 0x20<link>
        ether 00:0f:53:4d:8c:30  txqueuelen 1000  (Ethernet)
        RX packets 7  bytes 1110 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23  bytes 2674 (2.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 22  

enp6s0np1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.200.171  netmask 255.255.255.0  broadcast 192.168.200.255
        inet6 fe80::158a:d2dd:a6c2:714e  prefixlen 64  scopeid 0x20<link>
        inet6 2001::b8b5:3c3b:ab95:8698  prefixlen 64  scopeid 0x0<global>
        ether 00:0f:53:4d:8c:31  txqueuelen 1000  (Ethernet)
        RX packets 2  bytes 468 (468.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 2314 (2.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# lspci
05:00.0 Ethernet controller: Solarflare Communications SFC9220 10/40G Ethernet Controller (rev 02)
06:00.0 Ethernet controller: Solarflare Communications SFC9220 10/40G Ethernet Controller (rev 02)


# dmesg
[   48.567503] pci 0000:05:00.0: [1924:0a03] type 00 class 0x020000  <--- The first PF is hot-plugged into VM at this time
[   48.568112] pci 0000:05:00.0: reg 0x10: [io  0x0000-0x00ff]
[   48.568511] pci 0000:05:00.0: reg 0x18: [mem 0x00000000-0x007fffff 64bit]
[   48.569045] pci 0000:05:00.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit]
[   48.569212] pci 0000:05:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref]
[   48.569459] pci 0000:05:00.0: Max Payload Size set to 128 (was 256, max 1024)
[   48.571327] pci 0000:05:00.0: supports D1 D2
[   48.576328] pci 0000:05:00.0: BAR 2: assigned [mem 0x81000000-0x817fffff 64bit]
[   48.576669] pci 0000:05:00.0: BAR 6: assigned [mem 0x81800000-0x8183ffff pref]
[   48.576674] pci 0000:05:00.0: BAR 4: assigned [mem 0x81840000-0x81843fff 64bit]
[   48.577034] pci 0000:05:00.0: BAR 0: assigned [io  0x5000-0x50ff]
[   48.639886] Solarflare NET driver
[   48.640922] sfc 0000:05:00.0: Solarflare NIC detected
[   48.648894] sfc 0000:05:00.0: Part Number : SFN8522
[   48.648946] sfc 0000:05:00.0: enabling device (0000 -> 0003)
[   48.661751] sfc 0000:05:00.0: no PTP support
[   48.734037] sfc 0000:05:00.0 enp5s0np0: renamed from eth0
[   48.762177] sfc 0000:05:00.0 enp5s0np0: link up at 10000Mbps full-duplex (MTU 1500)
[   48.838016] IPv6: ADDRCONF(NETDEV_CHANGE): enp5s0np0: link becomes ready
[  152.924900] pci 0000:06:00.0: [1924:0a03] type 00 class 0x020000  <--- The second PF is hot-plugged into VM at this time
[  152.925306] pci 0000:06:00.0: reg 0x10: [io  0x0000-0x00ff]
[  152.925646] pci 0000:06:00.0: reg 0x18: [mem 0x00000000-0x007fffff 64bit]
[  152.925907] pci 0000:06:00.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit]
[  152.926030] pci 0000:06:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref]
[  152.926189] pci 0000:06:00.0: Max Payload Size set to 128 (was 256, max 1024)
[  152.927515] pci 0000:06:00.0: supports D1 D2
[  152.944474] pci 0000:06:00.0: BAR 2: assigned [mem 0x80000000-0x807fffff 64bit]
[  152.944603] pci 0000:06:00.0: BAR 6: assigned [mem 0x80800000-0x8083ffff pref]
[  152.944605] pci 0000:06:00.0: BAR 4: assigned [mem 0x80840000-0x80843fff 64bit]
[  152.944733] pci 0000:06:00.0: BAR 0: assigned [io  0x7000-0x70ff]
[  152.946150] sfc 0000:06:00.0: Solarflare NIC detected
[  152.959655] sfc 0000:06:00.0: Part Number : SFN8522
[  152.962646] sfc 0000:06:00.0: enabling device (0000 -> 0003)
[  152.970826] sfc 0000:06:00.0: no PTP support
[  152.998568] sfc 0000:06:00.0 enp6s0np1: renamed from eth0
[  153.014551] sfc 0000:06:00.0 enp6s0np1: link up at 10000Mbps full-duplex (MTU 1500)
[  153.098966] IPv6: ADDRCONF(NETDEV_CHANGE): enp6s0np1: link becomes ready
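The BAR 2 assignment in the dmesg above can be cross-checked from inside the guest: each row of /sys/bus/pci/devices/&lt;BDF&gt;/resource is "start end flags", and the BAR 2 window should span 8M. A sketch (the row is reconstructed from the "BAR 2: assigned" line above; the flags field is illustrative only):

```shell
# Row reconstructed from "BAR 2: assigned [mem 0x81000000-0x817fffff 64bit]";
# the third (flags) field is illustrative only.
row='0x0000000081000000 0x00000000817fffff 0x0000000000140204'
set -- $row
span=$(( $2 - $1 + 1 ))
echo "$span"    # 8388608 bytes = 8M
```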

(5) reboot the VM and check the PF info again

# ifconfig
enp5s0np0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.200.103  netmask 255.255.255.0  broadcast 192.168.200.255
        inet6 fe80::2d03:dfb1:ac5e:4d3c  prefixlen 64  scopeid 0x20<link>
        inet6 2001::6e74:5d13:9b7c:d348  prefixlen 64  scopeid 0x0<global>
        ether 00:0f:53:4d:8c:30  txqueuelen 1000  (Ethernet)
        RX packets 7  bytes 1110 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 26  bytes 2866 (2.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 22  

enp6s0np1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.200.171  netmask 255.255.255.0  broadcast 192.168.200.255
        inet6 2001::b8b5:3c3b:ab95:8698  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::158a:d2dd:a6c2:714e  prefixlen 64  scopeid 0x20<link>
        ether 00:0f:53:4d:8c:31  txqueuelen 1000  (Ethernet)
        RX packets 10  bytes 1302 (1.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 22  bytes 2520 (2.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 23  

# dmesg
[    2.303562] virtio_net virtio1 enp1s0: renamed from eth0
[    4.848376] sfc 0000:05:00.0: Solarflare NIC detected
[    4.854592] sfc 0000:05:00.0: Part Number : SFN8522
[    4.863097] sfc 0000:05:00.0: no PTP support
[    4.963889] sfc 0000:06:00.0: Solarflare NIC detected
[    4.968948] sfc 0000:06:00.0: Part Number : SFN8522
[    4.974428] sfc 0000:06:00.0: no PTP support
[    5.013352] sfc 0000:05:00.0 enp5s0np0: renamed from eth0
[    5.114354] sfc 0000:06:00.0 enp6s0np1: renamed from eth1
[    7.054082] sfc 0000:05:00.0 enp5s0np0: link up at 10000Mbps full-duplex (MTU 1500)
[    7.127095] IPv6: ADDRCONF(NETDEV_CHANGE): enp5s0np0: link becomes ready
[    7.135970] sfc 0000:06:00.0 enp6s0np1: link up at 10000Mbps full-duplex (MTU 1500)
[    8.137395] IPv6: ADDRCONF(NETDEV_CHANGE): enp6s0np1: link becomes ready

Comment 32 Gerd Hoffmann 2023-07-19 13:23:37 UTC
> Would you suggest that we open a libvirt bug to request exposing it?

Yes, makes sense.  Even with the dynamic mmio window removing the need
for manual configuration in most cases, there are still corner cases like
this one which will need it.

Comment 33 Yanghang Liu 2023-07-21 03:57:36 UTC
(In reply to Gerd Hoffmann from comment #32)
> > Would you suggest that we open a libvirt bug to request exposing it?
> 
> Yes, makes sense.  Even with the dynamic mmio window removing the need
> for manual configuration in most cases, there are still corner cases like
> this one which will need it.

Thanks Gerd.

I have opened a libvirt bug to track this support:  Bug 2224472 - [RFE] Request to expose pcie-root-port's mem-reserve option

Comment 34 Yanghang Liu 2023-07-25 02:35:24 UTC
Verified this bug based on the test results in Comment 26, Comment 28, Comment 29 and Comment 31.

Comment 36 errata-xmlrpc 2023-11-07 08:24:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: edk2 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6330

Comment 37 Red Hat Bugzilla 2024-03-07 04:25:10 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

