Bug 1344299
| Field | Value |
|---|---|
| Summary | PCIe: Add an option to PCIe ports to disable IO port space support |
| Product | Red Hat Enterprise Linux 7 |
| Component | qemu-kvm-rhev |
| Version | 7.3 |
| Target Release | 7.4 |
| Status | CLOSED ERRATA |
| Severity | low |
| Priority | low |
| Reporter | Marcel Apfelbaum <marcel> |
| Assignee | Marcel Apfelbaum <marcel> |
| QA Contact | jingzhao <jinzhao> |
| CC | ailan, chayang, ehabkost, hhan, jinzhao, jsuchane, juzhang, laine, lersek, marcel, michen, mtessun, virt-maint, xfu |
| Target Milestone | rc |
| Keywords | FutureFeature |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | qemu-kvm-rhev-2.10.0-7.el7 |
| Type | Bug |
| Last Closed | 2018-04-11 00:09:32 UTC |
| Clones | 1408810, 1434740 (view as bug list) |
| Bug Depends On | 1437113 |
| Bug Blocks | 1408810, 1410577, 1410578, 1434740 |
Description
Marcel Apfelbaum
2016-06-09 11:19:23 UTC

As I mentioned during a meeting last week, since the default setting should be that ioport space is *disabled*, the option should be one that *enables* IO port space support. This would eliminate the double negative in the description - "if you need ioport space for a device, you should disable the disable-ioport-space option". Much easier to say "if you need ioport space for a device, you should enable the ioport-space option".

Created attachment 1250779 [details]
seabios log

Created attachment 1250781 [details]
dmesg of guest
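For context, the fix eventually landed as an `io-reserve` property on `pcie-root-port` (see the "Fixed In Version" field above and the command lines later in this report). A minimal sketch of the relevant device arguments, with made-up `id`/`netdev` names, built in shell:

```shell
# Sketch only: device arguments for a PCIe Root Port that hints the firmware
# not to reserve IO space (io-reserve=0), with a modern virtio NIC behind it.
# The id/netdev names here are illustrative, not from the bug report.
i=0
PORT="-device pcie-root-port,bus=pcie.0,id=root.$i,slot=$i,io-reserve=0"
NIC="-device virtio-net-pci,netdev=net$i,bus=root.$i -netdev user,id=net$i"
printf '%s %s\n' "$PORT" "$NIC"
```

A modern virtio device needs no IO BAR, so it keeps working behind such a port; a device that requires IO space (e.g. e1000) would not.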
(In reply to jingzhao from comment #4)
> Hi Marcel
>
> According to the above comments and
> https://bugzilla.redhat.com/show_bug.cgi?id=1410578#c3, could you help to
> confirm the following questions?
>
> 1. Where can QE find the "disable-ioport-space" parameter?

The feature is not yet implemented upstream.

> 2. Following are the steps tested on qemu-kvm-rhev-2.8.0-4.el7.x86_64; I
> did not hit the panic and halt issue:
>
> 1) Boot the guest with the qemu command line (20 root ports) [1] and try
> to add a legacy PCI device to a root port, say e1000.
> 2) Check the seabios log and the guest dmesg:
>
> seabios log: WARNING - Unable to allocate resource at vp_find_vq:301!
> dmesg: [ 6.432787] pci 0000:00:10.0: BAR 13: failed to assign [io size 0x1000]
>
> (for details, please check the attachments)
>
> Is that right?

This is a related bug, however not this one. We have a BZ for it; if you have problems finding it, let me know. Use non-virtio devices to hit the problem, e.g. e1000.

> 3. How can QE verify the bz? The guest works correctly without using IO
> ports?

The first step is to recreate the problem, then wait for the implementation of the new parameter and disable the IO ports. The devices will not work correctly if they require IO ports, however SeaBIOS will not panic. Another way to check this BZ is to use virtio devices connected to Root Ports, run lspci on a Linux guest (or Device Manager in Windows guests), and see that no IO port ranges are used by the Ports.
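The lspci check suggested above can be scripted. A sketch, grepping a canned `lspci -v` excerpt (the sample text is mine; real use would run lspci inside the guest):

```shell
# Sketch: check whether a device's "lspci -v" output lists an IO port BAR.
# "sample" is canned example output standing in for a real guest's lspci.
sample='01:00.0 Ethernet controller: Red Hat, Inc. Virtio network device
	Memory at fe000000 (64-bit, prefetchable) [size=16K]'
if printf '%s\n' "$sample" | grep -q 'I/O ports at'; then
  io=yes   # an IO port range was assigned to the device
else
  io=no    # no IO port range in use, as expected with io-reserve=0
fi
echo "$io"
```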
Thanks,
Marcel

> [1]
> sh mulit-root-off.sh 20
> [root@localhost test]# cat mulit-root-off.sh
>
> #!/bin/sh
>
> MACHINE=q35
> SMP=4,cores=2,threads=2,sockets=1
> MEM=2G
> GUEST_IMG=/home/test/q35-seabios.qcow2
> IMG_FORMAT=qcow2
>
> CLI="/usr/libexec/qemu-kvm -enable-kvm -M $MACHINE -nodefaults -smp $SMP -m $MEM -name vm1 -serial unix:/tmp/console,server,nowait -drive file=$GUEST_IMG,if=none,id=guest-img,format=$IMG_FORMAT,werror=stop,rerror=stop -device ide-hd,drive=guest-img,bus=ide.0,unit=0,id=os-disk,bootindex=1 -spice port=5931,disable-ticketing -monitor stdio -qmp tcp:0:6666,server,nowait -boot menu=on,reboot-timeout=8,strict=on"
>
> while [ ${i:=0} -lt ${1:-0} ]
> do
>     blkDiskID=$((i))
>     CLI="$CLI -device ioh3420,bus=pcie.0,id=root.$i,slot=$i"
>     qemu-img create -f qcow2 /home/test/disk/disk$blkDiskID 100M
>     # CLI="$CLI -device xio3130-downstream,bus=upstream$i,id=downstream$i,chassis=$((i+1))"
>     CLI="$CLI -drive file=/home/test/disk/disk$blkDiskID,if=none,id=disk$i,format=qcow2"
>     CLI="$CLI -device virtio-blk-pci,scsi=off,drive=disk$i,id=virtio-blk$i,bus=root.$i"
>     i=$((i+1))
> done
>
> $CLI
>
> Thanks
> Jing Zhao

Hi Marcel,

Thanks for your help. According to your comments, there are two reproduction methods. Could you help to check them?

1) Boot the guest with more than 10 e1000 devices.
   a. Result on qemu-kvm-rhev-2.8.0-4.el7.x86_64: the guest boots up without any error.
   b. Result with the new parameter: the guest boots up correctly without IO ports; the guest will panic with IO ports.

2) Boot the guest with virtio-net-pci.
   a. Result with the new parameter: "Region 2: I/O ports at 3080 [size=32]".

Thanks,
Jing Zhao

(In reply to jingzhao from comment #8)
> 1) Boot the guest with more than 10 e1000 devices.
> a. Result on qemu-kvm-rhev-2.8.0-4.el7.x86_64: the guest boots up without
> any error.
> b. Result with the new parameter: the guest boots up correctly without IO
> ports; the guest will panic with IO ports.

Without the new parameter (today) SeaBIOS (the guest) panics. With the new parameter SeaBIOS will not panic, but the e1000 devices will not work, since they require IO space.

> 2) Boot the guest with virtio-net-pci.
> a. Result with the new parameter: "Region 2: I/O ports at 3080 [size=32]".

virtio-net-pci devices connected to Root Ports are modern devices and do not require IO ports. However, if configured as legacy devices (disable-modern=on,disable-legacy=off) they will behave like e1000.

Thanks,
Marcel

(In reply to Marcel Apfelbaum from comment #7)
> > 1. Where can QE find the "disable-ioport-space" parameter?
>
> The feature is not yet implemented upstream.

What is the status please?

Thanks.

(In reply to Jaroslav Suchanek from comment #10)
> What is the status please?

Hi,

Work in progress, not yet submitted upstream.

Thanks,
Marcel

Related: [Qemu-devel] [PATCH 0/2] hw/pcie: disable IO port fwd by default for pcie-root-port.
http://mid.mail-archive.com/20170906142658.58298-1-marcel@redhat.com

(See also the discussion, with questions relevant for guest firmware, i.e.
blocked bug 1434740.)

Hm, to my knowledge, this feature has been added by now to upstream QEMU, in the following commits:

a35fe226558a hw/pci: introduce pcie-pci-bridge device
70e1ee59bb94 hw/pci: introduce bridge-only vendor-specific capability to provide some hints to firmware
226263fb5cda hw/pci: add QEMU-specific PCI capability to the Generic PCI Express Root Port
c1800a162765 docs: update documentation considering PCIE-PCI bridge
8e36c336d943 hw/gen_pcie_root_port: make IO RO 0 on IO disabled

I think they'll have to be backported to downstream -- possibly more patches than just the ones listed above.

Linux guests should run with pci=hpiosize=0[,hpmemsize=0 for MEM] and the PCIe Root Port with:

-device pcie-root-port,io-reserve=0

Fix included in qemu-kvm-rhev-2.10.0-7.el7.

Hi Marcel,

According to comment 7, I am not sure the bz is fixed. I hit an issue when booting 15 e1000 NICs on SeaBIOS:

```sh
#!/bin/sh
MACHINE=q35
SMP=4,cores=2,threads=2,sockets=1
MEM=2G
GUEST_IMG=/home/test/rhel75-seabios.qcow2
IMG_FORMAT=qcow2
size=32

CLI="/usr/libexec/qemu-kvm -enable-kvm -M $MACHINE -nodefaults -smp $SMP -m $MEM -name vm1 -drive file=$GUEST_IMG,if=none,id=guest-img,format=$IMG_FORMAT,werror=stop,rerror=stop -device ide-hd,drive=guest-img,bus=ide.0,unit=0,id=os-disk,bootindex=1 -spice port=5931,disable-ticketing -vga qxl -monitor stdio -qmp tcp:0:6666,server,nowait -boot menu=on,reboot-timeout=8,strict=on -chardev file,path=/home/seabios.log,id=seabios -device isa-debugcon,chardev=seabios,iobase=0x402"

while [ ${i:=0} -lt ${1:-0} ]
do
    CLI="$CLI -device pcie-root-port,bus=pcie.0,id=root.$i,slot=$i,io-reserve=0"
    n=$((50-$i))
    CLI="$CLI -device e1000,netdev=tap$i,mac=9a:6a:6b:6c:6d:$n,bus=root.$i -netdev tap,id=tap$i"
    i=$((i+1))
done

$CLI
```

The seabios log shows:

=== PCI new allocation pass #2 ===
PCI: out of I/O address space

and the guest didn't boot up.

Thanks,
Jing

(In reply to jingzhao from comment #22)
> [script and SeaBIOS log quoted above snipped]

Hi,

The e1000 device requires IO in order to work, so giving the firmware a hint not to reserve IO will not help. Please use the virtio-net-pci device to check it. Also, we do not support plugging PCI devices into PCIe Root Ports, only PCI Express devices. You could try the e1000e device; I am not sure it works without IO, but it is worth a try.

Thanks,
Marcel

Verified bug with 3.10.0-807.el7.x86_64 & qemu-kvm-rhev-2.10.0-10.el7.x86_64 & OVMF-20171011-3.git92d07e48907f.el7.noarch.
qemu command line:

```sh
/usr/libexec/qemu-kvm -enable-kvm -M q35 -nodefaults \
    -smp 4,cores=2,threads=2,sockets=1 -m 4G -name vm1 \
    -drive file=/usr/share/OVMF/OVMF_CODE.secboot.fd,if=pflash,format=raw,unit=0,readonly=on \
    -drive file=/usr/share/OVMF/OVMF_VARS.fd,if=pflash,format=raw,unit=1 \
    -debugcon file:/home/test/ovmf.log -global isa-debugcon.iobase=0x402 \
    -drive file=/usr/share/OVMF/UefiShell.iso,if=none,cache=none,snapshot=off,aio=native,media=cdrom,id=cdrom1 \
    -device ahci,id=ahci0 -device ide-cd,drive=cdrom1,id=ide-cd1,bus=ahci0.1 \
    -drive file=/home/verify-bugs/rhel7.5-secureboot.qcow2,if=none,id=guest-img,format=qcow2,werror=stop,rerror=stop \
    -device ide-hd,drive=guest-img,bus=ide.0,unit=0,id=os-disk,bootindex=1 \
    -spice port=5931,disable-ticketing -vga qxl -monitor stdio \
    -qmp tcp:0:6666,server,nowait -boot menu=on,reboot-timeout=8,strict=on \
    -vnc :1 \
    -device pcie-root-port,bus=pcie.0,id=root.0,slot=0,io-reserve=0 \
    -device e1000,netdev=tap0,mac=9a:6a:6b:6c:6d:50,bus=root.0 -netdev tap,id=tap0
    # ... the pcie-root-port / e1000 / -netdev triple repeats identically for
    # root.1 through root.27, with slot=$i and the last MAC octet decreasing
    # from 50 (root.0) to 23 (root.27)
```

As this bug works with 28 pcie-root-port devices, it is fixed. If my steps are wrong, please let me know.

If the guest is booted with 29 pcie-root-port devices, qemu-kvm core dumps. I think it is a separate problem, so I filed a new bug, 1520858, to track it.

I agree we can treat it as a different BZ.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:1104
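For reference, the repetitive pcie-root-port/e1000/-netdev device list in the verification command line follows a fixed pattern and can be generated with a short loop. A sketch (the variable names `N` and `CLI` here are mine):

```shell
# Sketch: generate the 28 root-port/e1000 argument triples used in the
# verification command. Slot i gets MAC 9a:6a:6b:6c:6d:$((50-i)), matching
# the pattern in the command line above.
N=28
CLI=""
i=0
while [ "$i" -lt "$N" ]; do
  CLI="$CLI -device pcie-root-port,bus=pcie.0,id=root.$i,slot=$i,io-reserve=0"
  CLI="$CLI -device e1000,netdev=tap$i,mac=9a:6a:6b:6c:6d:$((50-i)),bus=root.$i"
  CLI="$CLI -netdev tap,id=tap$i"
  i=$((i+1))
done
printf '%s\n' "$CLI"
```

Appending `$CLI` to the base qemu-kvm invocation reproduces the verified configuration; bumping `N` to 29 would reproduce the core dump tracked in bug 1520858.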