Description of problem:
If a user starts a guest with 296 interfaces, libvirt automatically generates 10 PCI bridges, all of which are plugged into the PCI root bus. This causes SeaBIOS to run out of memory: the guest can still start, but no SeaBIOS output is displayed. https://bugzilla.redhat.com/show_bug.cgi?id=1271457#c3 explains why.

Version-Release number of selected component (if applicable):
libvirt-1.3.2-1.el7.x86_64
qemu-kvm-rhev-2.5.0-2.el7.x86_64
seabios-bin-1.7.5-11.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Add 296 interfaces to the domain XML:

<interface type='network'>
  <source network='default' bridge='virbr0'/>
  <model type='rtl8139'/>
</interface>
<interface type='network'>
  <source network='default' bridge='virbr0'/>
  <model type='rtl8139'/>
</interface>
..... repeat .....
<controller type='pci' index='0' model='pci-root'/>

# virsh start vm1
Domain vm1 started

# virsh dumpxml vm1 | grep "<interface" | wc -l
296

2. Check the auto-generated PCI bridges:

# virsh dumpxml vm1 | grep pci-bridge -a6
<controller type='pci' index='0' model='pci-root'/>
<controller type='pci' index='1' model='pci-bridge'>
  <model name='pci-bridge'/>
  <target chassisNr='1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</controller>
<controller type='pci' index='2' model='pci-bridge'>
  <model name='pci-bridge'/>
  <target chassisNr='2'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
</controller>
<controller type='pci' index='3' model='pci-bridge'>
  <model name='pci-bridge'/>
  <target chassisNr='3'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
</controller>
<controller type='pci' index='4' model='pci-bridge'>
  <model name='pci-bridge'/>
  <target chassisNr='4'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
</controller>
<controller type='pci' index='5' model='pci-bridge'>
  <model name='pci-bridge'/>
  <target chassisNr='5'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
</controller>
<controller type='pci' index='6' model='pci-bridge'>
  <model name='pci-bridge'/>
  <target chassisNr='6'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0e' function='0x0'/>
</controller>
<controller type='pci' index='7' model='pci-bridge'>
  <model name='pci-bridge'/>
  <target chassisNr='7'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x0f' function='0x0'/>
</controller>
<controller type='pci' index='8' model='pci-bridge'>
  <model name='pci-bridge'/>
  <target chassisNr='8'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x10' function='0x0'/>
</controller>
<controller type='pci' index='9' model='pci-bridge'>
  <model name='pci-bridge'/>
  <target chassisNr='9'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x11' function='0x0'/>
</controller>
<controller type='pci' index='10' model='pci-bridge'>
  <model name='pci-bridge'/>
  <target chassisNr='10'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x12' function='0x0'/>
</controller>

Actual results:
The VM can start, but SeaBIOS output is not displayed.

Expected results:
libvirt could limit the maximum number of PCI bridges plugged into a single pci-root / pci-bridge to 8, or it could generate nested PCI bridges.

Additional info:
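For anyone reproducing this, the 296 interface stanzas from step 1 can be generated rather than hand-pasted. A small helper (my own, not part of the original report; the stanza matches the XML above):

```python
def interfaces_xml(count, model="rtl8139"):
    """Return `count` copies of the <interface> stanza from step 1,
    ready to paste into the <devices> section of the domain XML
    (e.g. via `virsh edit vm1`)."""
    stanza = (
        "    <interface type='network'>\n"
        "      <source network='default' bridge='virbr0'/>\n"
        f"      <model type='{model}'/>\n"
        "    </interface>\n"
    )
    return stanza * count

# e.g. print(interfaces_xml(296))
```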
A short discussion with Alex Williamson on IRC led to the following facts:

1) There is no way for libvirt to programmatically determine the maximum number of bridges that can safely be attached to the root bus.

2) Because qemu "flattens out" the bus hierarchy, there is no performance penalty to having a "long chain of bridges" vs. having many bridges all attached to the root bus.

Based on this, I think we all agree that the best solution is to put a single pci-bridge on the root bus, then a single pci-bridge on the 1st pci-bridge, and so on. This could actually simplify the code that auto-adds the bridges: it currently does a "dry run" pass to determine how many bridges are needed (without actually adding any or assigning any addresses), then creates that many bridges, and finally assigns addresses (taking care of PCI controllers first). A revised algorithm might do a single pass, adding and addressing new PCI controllers exactly when needed; since the other slots on the existing bridge would already be filled with other devices, the pci-bridge controllers would end up being daisy-chained. This could also work better with our need to auto-add PCIe controllers on demand (see the discussion in Bug 1330024).
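The single-pass idea above can be sketched roughly as follows. This is a simplified illustration, not libvirt's actual code: it ignores the slots the root bus reserves for chipset devices and treats every bus as having 31 usable slots, with the last slot of a full bus taken by the next bridge, which is what produces the daisy chain.

```python
SLOTS_PER_BUS = 31  # slots 0x01..0x1f; real pci-root reserves some low slots

def assign_addresses(n_devices):
    """Single-pass address assignment: give each device the next free
    slot, and when a bus fills up, plug a new pci-bridge into that
    bus's last slot. Because every earlier slot is already occupied,
    each new bridge hangs off the previous one (a daisy chain).
    Returns (device_addrs, bridge_addrs) as lists of (bus, slot)."""
    device_addrs, bridge_addrs = [], []
    bus, slot = 0, 1
    for _ in range(n_devices):
        if slot == SLOTS_PER_BUS:
            # bus is full: its last slot becomes the next bridge
            bridge_addrs.append((bus, slot))
            bus = len(bridge_addrs)  # devices continue on the new bus
            slot = 1
        device_addrs.append((bus, slot))
        slot += 1
    return device_addrs, bridge_addrs
```

With 296 devices this yields 9 chained bridges, bridge N sitting on bus N — contrast with the flat topology in the report, where all 10 bridges sit on bus 0.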
Actually, I need to retract my idea from Comment 1: it turns out there is no difference in the amount of IO port space used whether we daisy-chain the bridges or create a flatter topology. And since we have no method of determining the maximum number of usable bridges (other than experimentation), I'm closing this as CANTFIX (some would argue that it's NOTABUG).
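Why topology makes no difference: a PCI-to-PCI bridge's IO window is sized and aligned in 4 KiB units, and x86 exposes only 64 KiB of IO port space, so every bridge that forwards IO consumes at least 4 KiB of the same global budget wherever it sits. A back-of-the-envelope check (my own arithmetic, not from the comment):

```python
# x86 I/O port space is 64 KiB total; per the PCI-to-PCI bridge spec,
# a bridge's I/O Base/Limit registers have 4 KiB granularity, so each
# bridge forwarding I/O claims at least one 4 KiB window -- whether it
# sits on the root bus or at the end of a daisy chain.
IO_PORT_SPACE = 64 * 1024
MIN_BRIDGE_IO_WINDOW = 4 * 1024

max_io_bridges = IO_PORT_SPACE // MIN_BRIDGE_IO_WINDOW
print(max_io_bridges)  # upper bound on bridges with I/O windows, any topology
```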