Description of problem:
I have 2 pxb controllers: the 1st pxb has busNr=8, the 2nd pxb has busNr=10. I plug 2 pci-bridges into the 1st pxb, then plug 1 NIC into pci-bridge #0 and 1 NIC into pci-bridge #1. Both pci-bridges are visible in the guest, but only 1 NIC is visible.

AFAIK, for every pxb device, the bus number specified with bus_nr=XXX is assigned as the extra root bus's own bus number. All bus numbers between that value and the next pxb's bus_nr (255 if there is no other bus_nr) are available only for assignment to PCI controllers plugged into the hierarchy starting at this expander bus.

In this case, busNr(0)=8 and busNr(1)=10, so I guess bus 8 is used by the 1st pxb controller itself and bus 9 by its integrated pci-bridge. That leaves no bus numbers in the 1st pxb's range for other PCI controllers, so any PCI controller manually plugged into the 1st pxb cannot be usable. Libvirt should therefore disallow this configuration, and limit the number of PCI controllers plugged into pxb(n) to busNr(n+1) - busNr(n) - 2.

Version-Release number of selected component (if applicable):
libvirt-2.0.0-3.el7.x86_64
qemu-kvm-rhev-2.6.0-15.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Start a VM with 2 pxb controllers (1st pxb with busNr=8, 2nd pxb with busNr=10) and 2 pci-bridges attached to the 1st pxb:

  <controller type='pci' index='0' model='pci-root'>
    <alias name='pci.0'/>
  </controller>
  <controller type='pci' index='1' model='pci-expander-bus'>
    <model name='pxb'/>
    <target busNr='8'/>
    <alias name='pci.1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
  </controller>
  <controller type='pci' index='2' model='pci-expander-bus'>
    <model name='pxb'/>
    <target busNr='10'/>
    <alias name='pci.2'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
  </controller>
  <controller type='pci' index='3' model='pci-bridge'>
    <model name='pci-bridge'/>
    <target chassisNr='3'/>
    <alias name='pci.3'/>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </controller>
  <controller type='pci' index='4' model='pci-bridge'>
    <model name='pci-bridge'/>
    <target chassisNr='4'/>
    <alias name='pci.4'/>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
  </controller>
  <interface type='network'>
    <mac address='52:54:00:b6:22:c5'/>
    <source network='default' bridge='virbr0'/>
    <target dev='vnet0'/>
    <model type='rtl8139'/>
    <alias name='net0'/>
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </interface>
  <interface type='network'>
    <mac address='52:54:00:10:b9:e4'/>
    <source network='default' bridge='virbr0'/>
    <target dev='vnet1'/>
    <model type='rtl8139'/>
    <alias name='net1'/>
    <address type='pci' domain='0x0000' bus='0x04' slot='0x11' function='0x0'/>
  </interface>

# virsh start vm1-pxb
# ps -ef | grep qemu
-device pxb,bus_nr=8,id=pci.1,bus=pci.0,addr=0x8 \
-device pxb,bus_nr=10,id=pci.2,bus=pci.0,addr=0x9 \
-device pci-bridge,chassis_nr=3,id=pci.3,bus=pci.1,addr=0x0 \
-device pci-bridge,chassis_nr=4,id=pci.4,bus=pci.1,addr=0x1 \
-netdev tap,fd=26,id=hostnet0 \
-device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:b6:22:c5,bus=pci.3,addr=0x10 \
-netdev tap,fd=28,id=hostnet1 \
-device rtl8139,netdev=hostnet1,id=net1,mac=52:54:00:10:b9:e4,bus=pci.4,addr=0x11

2. Check in the guest:

# lspci -v
00:08.0 Host bridge: Red Hat, Inc. Device 0009
        Subsystem: Red Hat, Inc Device 1100
        Physical Slot: 8
        Flags: 66MHz, fast devsel

00:09.0 Host bridge: Red Hat, Inc. Device 0009
        Subsystem: Red Hat, Inc Device 1100
        Physical Slot: 9
        Flags: 66MHz, fast devsel

08:00.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge (prog-if 00 [Normal decode])
        Flags: bus master, 66MHz, fast devsel, latency 0
        Bus: primary=08, secondary=09, subordinate=0c, sec-latency=0
        I/O behind bridge: 0000c000-0000dfff
        Memory behind bridge: fc000000-fc5fffff
        Prefetchable memory behind bridge: 00000000fe800000-00000000febfffff
        Capabilities: [40] Slot ID: 0 slots, First+, chassis 08

09:00.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge (prog-if 00 [Normal decode])
        Flags: 66MHz, fast devsel
        Memory at fc400000 (64-bit, non-prefetchable) [size=256]
        Bus: primary=09, secondary=0a, subordinate=0b, sec-latency=0
        I/O behind bridge: 0000d000-0000dfff
        Memory behind bridge: fc200000-fc3fffff
        Prefetchable memory behind bridge: 00000000fea00000-00000000febfffff
        Capabilities: [4c] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [48] Slot ID: 0 slots, First+, chassis 03
        Capabilities: [40] Hot-plug capable

09:01.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge (prog-if 00 [Normal decode])
        Flags: bus master, 66MHz, fast devsel, latency 0
        Memory at fc401000 (64-bit, non-prefetchable) [size=256]
        Bus: primary=09, secondary=0c, subordinate=0c, sec-latency=0
        I/O behind bridge: 0000c000-0000cfff
        Memory behind bridge: fc000000-fc1fffff
        Prefetchable memory behind bridge: 00000000fe800000-00000000fe9fffff
        Capabilities: [4c] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [48] Slot ID: 0 slots, First+, chassis 04
        Capabilities: [40] Hot-plug capable

0a:00.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge (prog-if 00 [Normal decode])
        Flags: 66MHz, fast devsel
        Bus: primary=0a, secondary=0b, subordinate=0b, sec-latency=0
        Capabilities: [40] Slot ID: 0 slots, First+, chassis 0a

0c:11.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8100/8101L/8139 PCI Fast Ethernet Adapter (rev 20)
        Subsystem: Red Hat, Inc QEMU Virtual Machine
        Flags: bus master, fast devsel, latency 0, IRQ 11
        I/O ports at c000 [size=256]
        Memory at fc040000 (32-bit, non-prefetchable) [size=256]
        Expansion ROM at fc000000 [disabled] [size=256K]
        Kernel driver in use: 8139cp

3. Actual results:
The 2 pci-bridges attached to the pxb are visible, and the 2 internal (integrated) pci-bridges are visible as well, but only 1 NIC is visible.

Expected results:
Libvirt disallows this configuration. In this case, it should refuse to plug pci-bridges into the 1st pxb controller.

Additional info:
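The capacity rule proposed above can be sketched as follows. This is a hypothetical illustration, not actual libvirt code; it assumes, as described in the report, that each pxb consumes one bus number for its own root bus and one for its integrated pci-bridge:

```python
# Hypothetical sketch of the proposed validation (not libvirt code).
# bus_nrs: busNr values of all pxb controllers, in ascending order.
# Each pxb's window runs up to the next pxb's busNr (or past 255 if it
# is the last one); two buses are consumed by the pxb root bus and its
# integrated pci-bridge, and only the remainder can be handed out to
# PCI controllers plugged into that pxb.
def pxb_capacity(bus_nrs, last_bus=255):
    caps = []
    for i, b in enumerate(bus_nrs):
        nxt = bus_nrs[i + 1] if i + 1 < len(bus_nrs) else last_bus + 1
        caps.append(nxt - b - 2)  # busNr(n+1) - busNr(n) - 2
    return caps

print(pxb_capacity([8, 10]))  # [0, 244]
```

With busNr=8 and busNr=10 the 1st pxb's capacity is 0, so the two pci-bridges plugged into it in this report can never be assigned bus numbers, which matches the missing NIC observed in the guest.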
Another similar issue is that libvirt should limit the number of PCI controllers plugged into the pci root bus to lowest busNr(n) - 1 when a pxb is present.

I have 1 pxb controller with busNr=1 in the domain XML, and 1 pci-bridge attached to the pci root bus. I plug 1 NIC into the pxb and 1 NIC into the pci-bridge, then start the VM. Checking lspci in the guest, only the NIC plugged into the pci-bridge is visible; both the pci-bridge integrated into the pxb and the NIC plugged into the pxb are invisible.

  <controller type='pci' index='0' model='pci-root'>
    <alias name='pci.0'/>
  </controller>
  <controller type='pci' index='1' model='pci-expander-bus'>
    <model name='pxb'/>
    <target busNr='1'/>
    <alias name='pci.1'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
  </controller>
  <controller type='pci' index='2' model='pci-bridge'>
    <model name='pci-bridge'/>
    <target chassisNr='2'/>
    <alias name='pci.2'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
  </controller>

Check in the guest:

# lspci -v
00:08.0 Host bridge: Red Hat, Inc. Device 0009
        Subsystem: Red Hat, Inc Device 1100
        Physical Slot: 8
        Flags: 66MHz, fast devsel

00:09.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge (prog-if 00 [Normal decode])
        Flags: bus master, 66MHz, fast devsel, latency 0
        Memory at fc219000 (64-bit, non-prefetchable) [size=256]
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=0
        I/O behind bridge: 0000c000-0000cfff
        Memory behind bridge: fc000000-fc1fffff
        Prefetchable memory behind bridge: 00000000fea00000-00000000febfffff
        Capabilities: [4c] MSI: Enable- Count=1/1 Maskable+ 64bit+
        Capabilities: [48] Slot ID: 0 slots, First+, chassis 02
        Capabilities: [40] Hot-plug capable

01:0d.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8100/8101L/8139 PCI Fast Ethernet Adapter (rev 20)
        Subsystem: Red Hat, Inc QEMU Virtual Machine
        Physical Slot: 13-2
        Flags: bus master, fast devsel, latency 0, IRQ 11
        I/O ports at c000 [size=256]
        Memory at fc040000 (32-bit, non-prefetchable) [size=256]
        Expansion ROM at fc000000 [disabled] [size=256K]
        Kernel driver in use: 8139cp

In this case, bus numbers between 1 and 255 should be available only for PCI controllers plugged into the pxb, so libvirt should limit the number of PCI controllers plugged into the pci root bus to lowest busNr(n) - 1 when a pxb is present.
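The root-bus limit described above can be sketched the same way (again a hypothetical illustration of the proposed check, not libvirt code):

```python
# Hypothetical sketch (not libvirt code): when a pxb is present, only bus
# numbers 1 .. min(busNr) - 1 remain for bridges plugged into the pci root
# bus, so at most min(busNr) - 1 such bridges can be addressed; any bridge
# beyond that collides with the bus range owned by the lowest-numbered pxb.
def root_bus_bridge_limit(pxb_bus_nrs):
    return min(pxb_bus_nrs) - 1

print(root_bus_bridge_limit([1]))  # 0
```

With busNr=1 the limit is 0: the pci-bridge on pci.0 took bus 1, shadowing the pxb's range, which matches the invisible pxb hierarchy seen in the guest.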
pxb-pcie has a similar problem.
Another similar issue is that libvirt always auto-assigns busNr=254 to the 1st pxb controller, without taking into account the number of PCI controllers already attached to that pxb. As a result, the guest cannot boot up. pxb-pcie has a similar problem.

e.g.
1. Define a VM with 1 pxb and 3 pci-bridges attached to the pxb:

  <controller type='pci' index='0' model='pci-root'/>
  <controller type='pci' index='1' model='pci-expander-bus'>
  </controller>
  <controller type='pci' index='2' model='pci-bridge'>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x0' function='0x0'/>
  </controller>
  <controller type='pci' index='3' model='pci-bridge'>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x1' function='0x0'/>
  </controller>
  <controller type='pci' index='4' model='pci-bridge'>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x2' function='0x0'/>
  </controller>

2. Check the domain XML:

  <controller type='pci' index='0' model='pci-root'/>
  <controller type='pci' index='1' model='pci-expander-bus'>
    <model name='pxb'/>
    <target busNr='254'/>    ----> always defaults to 254
    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
  </controller>
  <controller type='pci' index='2' model='pci-bridge'>
    <model name='pci-bridge'/>
    <target chassisNr='2'/>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </controller>
  <controller type='pci' index='3' model='pci-bridge'>
    <model name='pci-bridge'/>
    <target chassisNr='3'/>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
  </controller>
  <controller type='pci' index='4' model='pci-bridge'>
    <model name='pci-bridge'/>
    <target chassisNr='4'/>
    <address type='pci' domain='0x0000' bus='0x01' slot='0x02' function='0x0'/>
  </controller>

3. Check guest status:
The guest cannot boot up.
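One possible fix would be to auto-assign a default busNr that reserves enough bus numbers below 255 for the controllers already attached. The sketch below is only my assumption about how such a default could be computed, not what libvirt actually does; it reuses the report's accounting of one bus for the pxb root and one for its integrated pci-bridge:

```python
# Hypothetical sketch: pick a default busNr that leaves room for the pxb's
# root bus, its integrated pci-bridge, and one bus per attached controller,
# instead of always using the fixed value 254.
def default_bus_nr(n_attached_controllers, last_bus=255):
    return last_bus - 1 - n_attached_controllers

print(default_bus_nr(0))  # 254: matches the current default for an empty pxb
print(default_bus_nr(3))  # 251: buses 252-255 cover the integrated bridge
                          # plus the 3 attached pci-bridges
```

For an empty pxb this formula reproduces the current default of 254, while for the 3-bridge configuration above it would choose 251 and the guest's bridge hierarchy would fit.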
This bug was closed as deferred as a result of bug triage. Please reopen if you disagree, and provide justification for why this bug should get higher priority. Most important would be information about the impact on a customer or layered product. Please also indicate the requested target release.