Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 2186783

Summary: [seabios] dynamic mmio window
Product: Red Hat Enterprise Linux 9
Reporter: Gerd Hoffmann <kraxel>
Component: seabios
Assignee: Gerd Hoffmann <kraxel>
Status: CLOSED MIGRATED
QA Contact: Xueqiang Wei <xuwei>
Severity: medium
Docs Contact:
Priority: medium
Version: 9.3
CC: coli, jinzhao, juzhang, nanliu, virt-bugs, virt-maint, xiaohli, xuwei, yanghliu, ymankad, zhguo
Target Milestone: rc
Keywords: MigratedToJIRA, RFE, Triaged
Target Release: ---
Flags: pm-rhel: mirror+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-09-22 13:46:57 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Gerd Hoffmann 2023-04-14 13:35:18 UTC
Description of problem:
Bring seabios behavior in sync with edk2:
 * detect the available physical address space
 * use a larger mmio window and larger pcie root port windows if possible.
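
For illustration only, a minimal C sketch of the kind of detection this involves; it is not the actual SeaBIOS code, and the helper names and the top_of_ram parameter are made up for the example. CPUID leaf 0x80000008 reports the CPU's physical address width, and the 64-bit MMIO window can then be placed above the end of RAM and below that limit.

#include <stdint.h>
#include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */

/* Physical address width from CPUID leaf 0x80000008 (EAX bits 7:0),
 * falling back to a conservative 36 bits if the leaf is unavailable. */
static unsigned int detect_phys_bits(void)
{
    unsigned int eax, ebx, ecx, edx;
    if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
        return eax & 0xff;
    return 36;
}

/* Pick a start address for a 64-bit PCI MMIO window: above the end of
 * guest RAM (top_of_ram would come from the e820 map), 1 GiB aligned,
 * and still addressable with the detected number of physical bits. */
static uint64_t pick_mmio64_start(uint64_t top_of_ram)
{
    unsigned int bits = detect_phys_bits();
    uint64_t phys_limit = 1ULL << bits;
    uint64_t start = (top_of_ram + (1ULL << 30) - 1) & ~((1ULL << 30) - 1);
    return (start < phys_limit) ? start : 0;   /* 0: fall back to the 32-bit window */
}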

Comment 1 Gerd Hoffmann 2023-04-14 13:45:29 UTC
test build: https://kojihub.stream.centos.org/koji/taskinfo?taskID=2126704
QE, can you run this through regression testing please?

Comment 2 Gerd Hoffmann 2023-04-14 13:48:47 UTC
Expected changes: In case seabios can figure out what the physical address space is and the guest has RAM above 4G, seabios should map 64-bit PCI BARs high, similar to edk2 with the dynamic mmio window enabled.
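
A rough sketch of that placement decision, assuming a mmio64_base computed as in the earlier snippet; this is not the actual SeaBIOS allocator, and the structure and helper names are hypothetical.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical descriptor for one memory BAR found during probing. */
struct pci_bar {
    uint64_t size;          /* power of two */
    bool     is_64bit;      /* BAR type field says 64-bit memory BAR */
    bool     is_prefetch;   /* prefetchable BARs are candidates for >4G */
};

/* Map a BAR above 4G when a usable 64-bit window exists, the guest has RAM
 * above 4G, and the BAR is a 64-bit prefetchable memory BAR; otherwise keep
 * it in the traditional 32-bit window, as before. */
static uint64_t place_bar(const struct pci_bar *bar, uint64_t mmio64_base,
                          bool ram_above_4g, uint64_t *next32, uint64_t *next64)
{
    uint64_t addr;
    if (mmio64_base && ram_above_4g && bar->is_64bit && bar->is_prefetch) {
        addr = (*next64 + bar->size - 1) & ~(bar->size - 1);
        *next64 = addr + bar->size;
        return addr;                        /* mapped high, like edk2 */
    }
    addr = (*next32 + bar->size - 1) & ~(bar->size - 1);
    *next32 = addr + bar->size;
    return addr;                            /* stays below 4G */
}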

Comment 3 Xueqiang Wei 2023-04-21 09:24:25 UTC
(In reply to Gerd Hoffmann from comment #1)
> test build: https://kojihub.stream.centos.org/koji/taskinfo?taskID=2126704
> QE, can you run this through regression testing please?

Tested the seabios test loop on an AMD host and an Intel host; no new issues were found, and the automation bug encountered has been fixed. I will ask the other feature owners to do their testing.

Versions:
kernel-5.14.0-299.el9.x86_64
qemu-kvm-8.0.0-0.rc1.el9.candidate
seabios-bin-1.16.1-1.el9.bz2186783.20230414.1540.noarch
guest: rhel9.3/win2019


Job link:
Amd host: http://virtqetools.lab.eng.pek2.redhat.com/kvm_autotest_job_log/?jobid=7755124
Intel host: http://virtqetools.lab.eng.pek2.redhat.com/kvm_autotest_job_log/?jobid=7760918

Comment 6 Yanghang Liu 2023-05-15 09:03:49 UTC
Regression Test Result: PASS

Test feature: vfio-vf/vfio-pf

Test Device: SFC9220

Test version: seabios-bin-1.16.1-1.el9.bz2186783.20230414.1540.noarch

Test Details:
2023-05-11 00:34:28 | PASS - boot a domain rhel93 with SFC9220 vf
2023-05-11 00:36:57 | PASS - boot 2 domains, each with 2 SFC9220 vf
2023-05-11 00:42:21 | PASS - reboot a rhel93 domain with 1 SFC9220 vf
2023-05-11 00:56:43 | PASS - shutdown a rhel93 with 1 SFC9220 vf
2023-05-11 01:19:55 | PASS - repeated SFC9220 vf device use test in rhel93 domain
2023-05-11 01:21:41 | PASS - hot plug 1 SFC9220 vf to rhel93 domain
2023-05-11 01:23:22 | PASS - hot unplug 1 SFC9220 vf from rhel93 domain
2023-05-11 01:24:41 | PASS - boot a rhel93 domain with 1 pf and 2 vf of SFC9220
2023-05-11 01:28:22 | PASS - hot unplug max SFC9220 vf from rhel93
2023-05-11 01:31:40 | PASS - hot plug max SFC9220 vf into rhel93
2023-05-11 01:34:12 | PASS - boot a rhel93 domain with 8 SFC9220 vfs
2023-05-11 01:35:21 | PASS - boot a rhel93 with specified address SFC9220 vf
2023-05-11 01:37:26 | PASS - boot a rhel93 domain with 7 SFC9220 vf
2023-05-11 01:38:39 | PASS - boot a rhel93 domain with multifunction=on SFC9220 vf
2023-05-11 01:40:03 | PASS - boot a domain rhel93 with SFC9220 vf and virtual nics
2023-05-11 01:43:13 | PASS - boot a rhel93 domain with 7 SFC9220 vf
2023-05-11 01:45:18 | PASS - hot unplug 4 SFC9220 vf from rhel93 domain
2023-05-11 01:48:07 | PASS - hot plug 7 SFC9220 vf into rhel93
2023-05-11 01:49:43 | PASS - hotplug and hotunplug SFC9220 vf after hotunpluging virtio net device test in rhel93 domain
2023-05-11 01:52:06 | PASS - hotplug and hotunplug SFC9220 vf 1 round in rhel93 domain
2023-05-11 01:53:15 | PASS - boot a rhel93 vm with a vlan SFC9220 vf
2023-05-11 02:22:05 | PASS - SFC9220 vf memory leaks check test in rhel93
2023-05-11 02:29:06 | PASS - boot a rhel93 vm with a vhost=on macvtap whose source device is SFC9220 vf
2023-05-11 02:36:34 | PASS - boot a rhel93 vm with a vhost=off macvtap whose source device is SFC9220 vf
2023-05-11 02:37:49 | PASS - boot a rhel93 vm with a vlan + vhost=on macvtap whose source device is SFC9220 vf
2023-05-11 02:38:28 | PASS - change SFC9220 vf number while other pf is unbound
2023-05-11 02:39:11 | PASS - change SFC9220 vf number while the vf of other pf is unbound
2023-05-11 02:40:59 | PASS - change SFC9220 vf number while the vf of other pf is used in rhel93 domain
2023-05-11 02:42:44 | PASS - change SFC9220 vf mac address test in rhel93 domain
2023-05-11 02:43:54 | PASS - numa test with SFC9220 vf in rhel93 domain
2023-05-11 02:45:14 | PASS - balloon test with SFC9220 vf in rhel93 domain
2023-05-11 02:46:53 | PASS - change SFC9220 vf number while vf is used in rhel93 domain
2023-05-11 02:53:10 | PASS - check the irq distribution of SFC9220 vf when irqbalance is inactive
2023-05-11 05:09:58 | PASS - SFC9220 vf hugepage 1G basic test in rhel93 domain
2023-05-15 03:20:33 | PASS - boot a domain rhel93 with SFC9220 pf
2023-05-15 03:27:42 | PASS - reboot a rhel93 domain with 1 SFC9220 pf
2023-05-15 03:42:01 | PASS - shutdown a rhel93 with 1 SFC9220 pf
2023-05-15 04:03:00 | PASS - repeated SFC9220 pf device use test in rhel93 domain
2023-05-15 04:04:41 | PASS - hot plug 1 SFC9220 pf to rhel93 domain
2023-05-15 04:06:10 | PASS - hot unplug 1 SFC9220 pf from rhel93 domain
2023-05-15 04:07:21 | PASS - boot a rhel93 with specified address SFC9220 pf
2023-05-15 04:08:38 | PASS - boot a rhel93 domain with multifunction=on SFC9220 pf
2023-05-15 04:10:04 | PASS - boot a domain rhel93 with SFC9220 pf and virtual nics
2023-05-15 04:11:07 | PASS - boot a rhel93 domain with 2 SFC9220 pf
2023-05-15 04:12:48 | PASS - hot unplug 2 SFC9220 pf from rhel93 domain
2023-05-15 04:14:52 | PASS - hot plug 2 SFC9220 pf into rhel93
2023-05-15 04:17:11 | PASS - hotplug and hotunplug SFC9220 pf 1 round in rhel93 domain
2023-05-15 04:25:10 | PASS - SFC9220 pf memory leaks check test in rhel93
2023-05-15 04:26:31 | PASS - change SFC9220 pf mac address test in rhel93 domain
2023-05-15 04:27:39 | PASS - balloon test with SFC9220 pf in rhel93 domain
2023-05-15 04:28:35 | PASS - numa test with SFC9220 pf in rhel93 domain
2023-05-15 04:34:41 | PASS - check the irq distribution of SFC9220 pf when irqbalance is inactive
2023-05-15 04:58:53 | PASS - SFC9220 pf hugepage 1G basic test in rhel93 domain

Comment 7 Gerd Hoffmann 2023-05-17 11:55:56 UTC
Updated test package
https://kojihub.stream.centos.org/koji/taskinfo?taskID=2230762

Comment 8 Xueqiang Wei 2023-05-23 09:43:57 UTC
(In reply to Gerd Hoffmann from comment #7)
> Updated test package
> https://kojihub.stream.centos.org/koji/taskinfo?taskID=2230762

Tested the scratch build; the guest cannot boot up, and VNC shows: "Guest has not initialized the display (yet)".
Gerd, could you please help check it? For details, please refer to the following debug log.

Versions:
kernel-5.14.0-306.el9.x86_64
qemu-kvm-8.0.0-2.el9
seabios-bin-1.16.1-1.el9.bz2186783.20230517.0736.noarch


1. qemu command lines
/usr/libexec/qemu-kvm \
     -S  \
     -name 'avocado-vt-vm1'  \
     -sandbox on  \
     -machine q35,memory-backend=mem-machine_mem \
     -device '{"id": "pcie-root-port-0", "driver": "pcie-root-port", "multifunction": true, "bus": "pcie.0", "addr": "0x1", "chassis": 1}' \
     -device '{"id": "pcie-pci-bridge-0", "driver": "pcie-pci-bridge", "addr": "0x0", "bus": "pcie-root-port-0"}'  \
     -nodefaults \
     -device '{"driver": "VGA", "bus": "pcie-pci-bridge-0", "addr": "0x1"}' \
     -m 62464 \
     -object '{"size": 65498251264, "id": "mem-machine_mem", "qom-type": "memory-backend-ram"}'  \
     -smp 32,maxcpus=32,cores=16,threads=1,dies=1,sockets=2  \
     -cpu 'Icelake-Server',ds=on,ss=on,dtes64=on,vmx=on,pdcm=on,hypervisor=on,tsc-adjust=on,avx512ifma=on,sha-ni=on,rdpid=on,fsrm=on,md-clear=on,stibp=on,arch-capabilities=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,rdctl-no=on,ibrs-all=on,skip-l1dfl-vmentry=on,mds-no=on,pschange-mc-no=on,tsx-ctrl=on,hle=off,rtm=off,mpx=off,intel-pt=off,kvm_pv_unhalt=on \
     -chardev socket,id=qmp_id_qmpmonitor1,server=on,wait=off,path=/var/tmp/avocado_vdawwe6j/monitor-qmpmonitor1-20230523-013140-qjNmdBxM  \
     -mon chardev=qmp_id_qmpmonitor1,mode=control \
     -chardev socket,id=qmp_id_catch_monitor,server=on,wait=off,path=/var/tmp/avocado_vdawwe6j/monitor-catch_monitor-20230523-013140-qjNmdBxM  \
     -mon chardev=qmp_id_catch_monitor,mode=control \
     -device '{"ioport": 1285, "driver": "pvpanic", "id": "idmeixgr"}' \
     -chardev socket,id=chardev_serial0,server=on,wait=off,path=/var/tmp/avocado_vdawwe6j/serial-serial0-20230523-013140-qjNmdBxM \
     -device '{"id": "serial0", "driver": "isa-serial", "chardev": "chardev_serial0"}'  \
     -chardev socket,id=seabioslog_id_20230523-013140-qjNmdBxM,path=/var/tmp/avocado_vdawwe6j/seabios-20230523-013140-qjNmdBxM,server=on,wait=off \
     -device isa-debugcon,chardev=seabioslog_id_20230523-013140-qjNmdBxM,iobase=0x402 \
     -device '{"id": "pcie-root-port-1", "port": 1, "driver": "pcie-root-port", "addr": "0x1.0x1", "bus": "pcie.0", "chassis": 2}' \
     -device '{"driver": "qemu-xhci", "id": "usb1", "bus": "pcie-root-port-1", "addr": "0x0"}' \
     -device '{"driver": "usb-tablet", "id": "usb-tablet1", "bus": "usb1.0", "port": "1"}' \
     -blockdev '{"node-name": "file_image1", "driver": "file", "auto-read-only": true, "discard": "unmap", "aio": "threads", "filename": "/home/kvm_autotest_root/images/rhel930-64-virtio.qcow2", "cache": {"direct": true, "no-flush": false}}' \
     -blockdev '{"node-name": "drive_image1", "driver": "qcow2", "read-only": false, "cache": {"direct": true, "no-flush": false}, "file": "file_image1"}' \
     -device '{"id": "pcie-root-port-2", "port": 2, "driver": "pcie-root-port", "addr": "0x1.0x2", "bus": "pcie.0", "chassis": 3}' \
     -device '{"driver": "virtio-blk-pci", "id": "image1", "drive": "drive_image1", "bootindex": 0, "write-cache": "on", "bus": "pcie-root-port-2", "addr": "0x0"}' \
     -device '{"id": "pcie-root-port-3", "port": 3, "driver": "pcie-root-port", "addr": "0x1.0x3", "bus": "pcie.0", "chassis": 4}' \
     -device '{"driver": "virtio-net-pci", "mac": "9a:79:de:5c:b7:f1", "id": "iddWMJFP", "netdev": "id0BlfQz", "bus": "pcie-root-port-3", "addr": "0x0"}'  \
     -netdev tap,id=id0BlfQz,vhost=on  \
     -vnc :0  \
     -rtc base=utc,clock=host,driftfix=slew  \
     -boot menu=off,order=cdn,once=c,strict=off \
     -enable-kvm \
     -monitor stdio \
     -chardev file,id=firmware,path=/tmp/seabios_debug.log \
     -device isa-debugcon,iobase=0x402,chardev=firmware


2. debug log info
SeaBIOS (version 1.16.1-1.el9.bz2186783.20230517.0736)
BUILD: gcc: (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4) binutils: version 2.35.2-39.el9
No Xen hypervisor found.
Running on QEMU (q35)
physbits: signature="GenuineIntel", pae=yes, lm=yes, phys-bits=46, valid=yes
cpuid 0x40000000: eax 40000001, signature 'KVMKVMKVM'
Running on KVM
Found QEMU fw_cfg
QEMU fw_cfg DMA interface supported
qemu/e820: addr 0x00000000feffc000 len 0x0000000000004000 [reserved]
qemu/e820: addr 0x0000000000000000 len 0x0000000080000000 [RAM]
qemu/e820: addr 0x0000000100000000 len 0x0000000ec0000000 [RAM]
Relocating init from 0x000d80c0 to 0x7efec100 (size 81504)
Moving pm_base to 0x600
boot order:
1: /pci@i0cf8/pci-bridge@1,2/scsi@0/disk@0,0
kvmclock: at 0xeb660 (msr 0x4b564d01)
kvmclock: stable tsc, 2394 MHz
CPU Mhz=2394 (kvmclock)
=== PCI bus & bridge init ===
PCI: pci_bios_init_bus_rec bus = 0x0
PCI: pci_bios_init_bus_rec bdf = 0x8
PCI: primary bus = 0x0
PCI: secondary bus = 0xff -> 0x1
PCI: pci_bios_init_bus_rec bus = 0x1
PCI: pci_bios_init_bus_rec bdf = 0x100
PCI: primary bus = 0x0 -> 0x1
PCI: secondary bus = 0xff -> 0x2
PCI: pci_bios_init_bus_rec bus = 0x2
PCI: QEMU resource reserve cap device ID doesn't match.
PCI: subordinate bus = 0x0 -> 0x2
PCI: QEMU resource reserve cap not found
PCI: subordinate bus = 0x0 -> 0x2
PCI: pci_bios_init_bus_rec bdf = 0x9
PCI: primary bus = 0x0
PCI: secondary bus = 0xff -> 0x3
PCI: pci_bios_init_bus_rec bus = 0x3
PCI: QEMU resource reserve cap not found
PCI: subordinate bus = 0x0 -> 0x3
PCI: pci_bios_init_bus_rec bdf = 0xa
PCI: primary bus = 0x0
PCI: secondary bus = 0xff -> 0x4
PCI: pci_bios_init_bus_rec bus = 0x4
PCI: QEMU resource reserve cap not found
PCI: subordinate bus = 0x0 -> 0x4
PCI: pci_bios_init_bus_rec bdf = 0xb
PCI: primary bus = 0x0
PCI: secondary bus = 0xff -> 0x5
PCI: pci_bios_init_bus_rec bus = 0x5
PCI: QEMU resource reserve cap not found
PCI: subordinate bus = 0x0 -> 0x5
=== PCI device probing ===
Found 13 PCI devices (max PCI bus is 05)
PCIe: using q35 mmconfig at 0xb0000000
=== PCI new allocation pass #1 ===
PCI: check devices
PCI: QEMU resource reserve cap not found
PCI: secondary bus 5 size 00000000 type io
PCI: secondary bus 5 size 00200000 type mem
PCI: secondary bus 5 size 800000000 type prefmem
PCI: QEMU resource reserve cap not found
PCI: secondary bus 4 size 00000000 type io
PCI: secondary bus 4 size 00200000 type mem
PCI: secondary bus 4 size 800000000 type prefmem
PCI: QEMU resource reserve cap not found
PCI: secondary bus 3 size 00000000 type io
PCI: secondary bus 3 size 00200000 type mem
PCI: secondary bus 3 size 800000000 type prefmem
PCI: QEMU resource reserve cap device ID doesn't match.
PCI: secondary bus 2 size 00001000 type io
PCI: secondary bus 2 size 00200000 type mem
PCI: secondary bus 2 size 800000000 type prefmem
PCI: QEMU resource reserve cap not found
PCI: secondary bus 1 size 00001000 type io
PCI: secondary bus 1 size 00400000 type mem
PCI: secondary bus 1 size 800000000 type prefmem
=== PCI new allocation pass #2 ===
PCI: IO: c000 - d05f
PCI: 32: 00000000c0000000 - 00000000fec00000
PCI: out of 32bit address space
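
My reading of the log above, offered as interpretation rather than an authoritative diagnosis: each of the five root ports requests a 0x800000000-byte (32 GiB) prefetchable window, while the 32-bit MMIO region only spans 0xc0000000-0xfec00000 (roughly 1 GiB), so if those windows are not moved above 4G the allocator necessarily runs out of 32-bit address space. A quick arithmetic check:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Values taken from the debug log above. */
    uint64_t prefmem_per_port = 0x800000000ULL;             /* 32 GiB per root port */
    uint64_t ports = 5;
    uint64_t mmio32_window = 0xfec00000ULL - 0xc0000000ULL; /* ~1004 MiB */

    uint64_t needed = prefmem_per_port * ports;             /* 160 GiB */
    printf("prefmem needed: %llu GiB, 32-bit window: %llu MiB\n",
           (unsigned long long)(needed >> 30),
           (unsigned long long)(mmio32_window >> 20));
    /* 160 GiB cannot fit below 4G, hence "PCI: out of 32bit address space"
     * unless the prefetchable windows land in the 64-bit range. */
    return 0;
}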

Comment 9 Gerd Hoffmann 2023-05-24 06:41:41 UTC
Updated scratch build
https://kojihub.stream.centos.org/koji/taskinfo?taskID=2258268

Comment 10 liunana 2023-05-24 08:07:46 UTC
I can boot up one seabios vm with seabios-1.16.1-1.el9.bz2186783.20230517.0736.x86_64.
Test Env:
    seabios-1.16.1-1.el9.bz2186783.20230517.0736.x86_64
    5.14.0-316.el9.x86_64
    qemu-kvm-8.0.0-2.el9.x86_64
    intel-eaglestream-spr-07.khw1.lab.eng.bos.redhat.com
Guest: seabios RHEL9.3


I will update test results with the latest build later.
Thanks.


Best regards
Nana

Comment 11 Xueqiang Wei 2023-05-29 07:55:27 UTC
(In reply to Gerd Hoffmann from comment #9)
> updated scratch build
> https://kojihub.stream.centos.org/koji/taskinfo?taskID=2258268

Tested the seabios test loop with the new scratch build; no new bugs were found.

Versions:
kernel-5.14.0-316.el9.x86_64
qemu-kvm-8.0.0-4.el9
seabios-bin-1.16.1-1.el9.bz2186783.20230524.0826.noarch

Job link:
http://fileshare.hosts.qa.psi.pek2.redhat.com/pub/logs/seabios_test_loop_with_seabios-bin-1.16.1-1.el9.bz2186783.20230524.0826/results.html
http://virtqetools.lab.eng.pek2.redhat.com/kvm_autotest_job_log/?jobid=7900098

Comment 15 RHEL Program Management 2023-09-22 13:24:23 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.