Bug 1345738 - [Q35] guest does not boot with OVMF when there are 2 pxb-pcies with 33 switches
Summary: [Q35] guest does not boot with OVMF when there are 2 pxb-pcies with 33 switches
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Marcel Apfelbaum
QA Contact: Virtualization Bugs
URL:
Whiteboard:
: 1345719 (view as bug list)
Depends On:
Blocks:
 
Reported: 2016-06-13 05:36 UTC by jingzhao
Modified: 2016-08-14 11:54 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-20 09:38:46 UTC
Target Upstream Version:
Embargoed:


Attachments
ovmf log (87.46 KB, text/plain), 2016-06-13 05:36 UTC, jingzhao
ovmf log of 32 downstream config (90.16 KB, text/plain), 2016-06-17 07:30 UTC, jingzhao

Description jingzhao 2016-06-13 05:36:04 UTC
Created attachment 1167234 [details]
ovmf log

Description of problem:
The guest does not boot successfully when using OVMF and booting with 2 pxb-pcie devices and 33 switches.

Version-Release number of selected component (if applicable):
kernel-3.10.0-433.el7.x86_64
qemu-kvm-rhev-2.6.0-5.el7.x86_64
OVMF-20160608-1.git988715a.el7.noarch

How reproducible:
3/3

Steps to Reproduce:
1. Boot up vm with following cli (2 pxb-pcies with 33 switches):
/usr/libexec/qemu-kvm \
-M q35 \
-cpu SandyBridge \
-monitor stdio \
-m 4G \
-vga qxl \
-drive file=/usr/share/OVMF/OVMF_CODE.secboot.fd,if=pflash,format=raw,unit=0,readonly=on \
-drive file=/usr/share/OVMF/OVMF_VARS.fd,if=pflash,format=raw,unit=1 \
-debugcon file:/home/q35.ovmf.log \
-global isa-debugcon.iobase=0x402 \
-spice port=5932,disable-ticketing \
-smp 4,sockets=4,cores=1,threads=1 \
-object memory-backend-ram,size=1024M,id=ram-node0 \
-numa node,nodeid=0,cpus=0,memdev=ram-node0 \
-object memory-backend-ram,size=1024M,id=ram-node1 \
-numa node,nodeid=1,cpus=1,memdev=ram-node1 \
-object memory-backend-ram,size=1024M,id=ram-node2 \
-numa node,nodeid=2,cpus=2,memdev=ram-node2 \
-object memory-backend-ram,size=1024M,id=ram-node3 \
-numa node,nodeid=3,cpus=3,memdev=ram-node3 \
-device pxb-pcie,id=bridge1,bus=pcie.0,numa_node=0,bus_nr=8 \
-device ioh3420,bus=bridge1,id=root1.0,slot=1 \
-device x3130-upstream,bus=root1.0,id=upstream1.1 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.1,chassis=2 \
-device virtio-net-pci,bus=downstream1.1,netdev=tap10,mac=9a:6a:6b:6c:6d:6e -netdev tap,id=tap10 \
-device ioh3420,bus=bridge1,id=root1.1,slot=2 \
-device x3130-upstream,bus=root1.1,id=upstream1.2 \
-device xio3130-downstream,bus=upstream1.2,id=downstream1.2,chassis=3 \
-drive if=none,id=drive0,file=/home/pxb-ovmf.qcow2 \
-device virtio-blk-pci,drive=drive0,scsi=off,bus=downstream1.2 \
-device ioh3420,bus=bridge1,id=root1.2,slot=3 \
-device x3130-upstream,bus=root1.2,id=upstream1.3 \
-device xio3130-downstream,bus=upstream1.3,id=downstream1.3,chassis=4 \
-device ioh3420,bus=bridge1,id=root1.3,slot=4 \
-device x3130-upstream,bus=root1.3,id=upstream1.4 \
-device xio3130-downstream,bus=upstream1.4,id=downstream1.4,chassis=5 \
-device ioh3420,bus=bridge1,id=root1.4,slot=5 \
-device x3130-upstream,bus=root1.4,id=upstream1.5 \
-device xio3130-downstream,bus=upstream1.5,id=downstream1.5,chassis=6 \
-device ioh3420,bus=bridge1,id=root1.5,slot=6 \
-device x3130-upstream,bus=root1.5,id=upstream1.6 \
-device xio3130-downstream,bus=upstream1.6,id=downstream1.6,chassis=7 \
-device ioh3420,bus=bridge1,id=root1.6,slot=7 \
-device x3130-upstream,bus=root1.6,id=upstream1.7 \
-device xio3130-downstream,bus=upstream1.7,id=downstream1.7,chassis=8 \
-device ioh3420,bus=bridge1,id=root1.7,slot=8 \
-device x3130-upstream,bus=root1.7,id=upstream1.8 \
-device xio3130-downstream,bus=upstream1.8,id=downstream1.8,chassis=9 \
-device ioh3420,bus=bridge1,id=root1.8,slot=9 \
-device x3130-upstream,bus=root1.8,id=upstream1.9 \
-device xio3130-downstream,bus=upstream1.9,id=downstream1.9,chassis=10 \
-device ioh3420,bus=bridge1,id=root1.9,slot=10 \
-device x3130-upstream,bus=root1.9,id=upstream1.10 \
-device xio3130-downstream,bus=upstream1.10,id=downstream1.10,chassis=11 \
-device ioh3420,bus=bridge1,id=root1.10,slot=11 \
-device x3130-upstream,bus=root1.10,id=upstream1.11 \
-device xio3130-downstream,bus=upstream1.11,id=downstream1.11,chassis=12 \
-device ioh3420,bus=bridge1,id=root1.11,slot=12 \
-device x3130-upstream,bus=root1.11,id=upstream1.12 \
-device xio3130-downstream,bus=upstream1.12,id=downstream1.12,chassis=13 \
-device ioh3420,bus=bridge1,id=root1.12,slot=13 \
-device x3130-upstream,bus=root1.12,id=upstream1.13 \
-device xio3130-downstream,bus=upstream1.13,id=downstream1.13,chassis=14 \
-device ioh3420,bus=bridge1,id=root1.13,slot=14 \
-device x3130-upstream,bus=root1.13,id=upstream1.14 \
-device xio3130-downstream,bus=upstream1.14,id=downstream1.14,chassis=15 \
-device ioh3420,bus=bridge1,id=root1.14,slot=15 \
-device x3130-upstream,bus=root1.14,id=upstream1.15 \
-device xio3130-downstream,bus=upstream1.15,id=downstream1.15,chassis=16 \
-device ioh3420,bus=bridge1,id=root1.15,slot=16 \
-device x3130-upstream,bus=root1.15,id=upstream1.16 \
-device xio3130-downstream,bus=upstream1.16,id=downstream1.16,chassis=17 \
-device ioh3420,bus=bridge1,id=root1.16,slot=17 \
-device x3130-upstream,bus=root1.16,id=upstream1.17 \
-device xio3130-downstream,bus=upstream1.17,id=downstream1.17,chassis=18 \
-device ioh3420,bus=bridge1,id=root1.17,slot=18 \
-device x3130-upstream,bus=root1.17,id=upstream1.18 \
-device xio3130-downstream,bus=upstream1.18,id=downstream1.18,chassis=19 \
-device ioh3420,bus=bridge1,id=root1.18,slot=19 \
-device x3130-upstream,bus=root1.18,id=upstream1.19 \
-device xio3130-downstream,bus=upstream1.19,id=downstream1.19,chassis=20 \
-device ioh3420,bus=bridge1,id=root1.19,slot=20 \
-device x3130-upstream,bus=root1.19,id=upstream1.20 \
-device xio3130-downstream,bus=upstream1.20,id=downstream1.20,chassis=21 \
-device ioh3420,bus=bridge1,id=root1.20,slot=21 \
-device x3130-upstream,bus=root1.20,id=upstream1.21 \
-device xio3130-downstream,bus=upstream1.21,id=downstream1.21,chassis=22 \
-device ioh3420,bus=bridge1,id=root1.21,slot=22 \
-device x3130-upstream,bus=root1.21,id=upstream1.22 \
-device xio3130-downstream,bus=upstream1.22,id=downstream1.22,chassis=23 \
-device ioh3420,bus=bridge1,id=root1.22,slot=23 \
-device x3130-upstream,bus=root1.22,id=upstream1.23 \
-device xio3130-downstream,bus=upstream1.23,id=downstream1.23,chassis=24 \
-device ioh3420,bus=bridge1,id=root1.23,slot=24 \
-device x3130-upstream,bus=root1.23,id=upstream1.24 \
-device xio3130-downstream,bus=upstream1.24,id=downstream1.24,chassis=25 \
-device ioh3420,bus=bridge1,id=root1.24,slot=25 \
-device x3130-upstream,bus=root1.24,id=upstream1.25 \
-device xio3130-downstream,bus=upstream1.25,id=downstream1.25,chassis=26 \
-device ioh3420,bus=bridge1,id=root1.25,slot=26 \
-device x3130-upstream,bus=root1.25,id=upstream1.26 \
-device xio3130-downstream,bus=upstream1.26,id=downstream1.26,chassis=27 \
-device ioh3420,bus=bridge1,id=root1.26,slot=27 \
-device x3130-upstream,bus=root1.26,id=upstream1.27 \
-device xio3130-downstream,bus=upstream1.27,id=downstream1.27,chassis=28 \
-device ioh3420,bus=bridge1,id=root1.27,slot=28 \
-device x3130-upstream,bus=root1.27,id=upstream1.28 \
-device xio3130-downstream,bus=upstream1.28,id=downstream1.28,chassis=29 \
-device ioh3420,bus=bridge1,id=root1.28,slot=29 \
-device x3130-upstream,bus=root1.28,id=upstream1.29 \
-device xio3130-downstream,bus=upstream1.29,id=downstream1.29,chassis=30 \
-device ioh3420,bus=bridge1,id=root1.29,slot=30 \
-device x3130-upstream,bus=root1.29,id=upstream1.30 \
-device xio3130-downstream,bus=upstream1.30,id=downstream1.30,chassis=31 \
-device ioh3420,bus=bridge1,id=root1.30,slot=31 \
-device x3130-upstream,bus=root1.30,id=upstream1.31 \
-device xio3130-downstream,bus=upstream1.31,id=downstream1.31,chassis=32 \
-device ioh3420,bus=bridge1,id=root1.31,slot=32 \
-device x3130-upstream,bus=root1.31,id=upstream1.32 \
-device xio3130-downstream,bus=upstream1.32,id=downstream1.32,chassis=33 \
-device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=20 \
-device ioh3420,bus=bridge2,id=root2.0,slot=33 \
-device x3130-upstream,bus=root2.0,id=upstream2.1 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.1,chassis=34 \

Actual results:
the guest did not boot successfully

Expected results:
the guest boots successfully

Additional info:
the guest boots successfully with 1 PXB / 32 switches
the guest boots successfully with 3 PXBs / 83 switches on SeaBIOS

Comment 2 Laszlo Ersek 2016-06-13 13:04:48 UTC
Possibly a duplicate of bug 1333238.

Also, whenever reporting an OVMF-related bug, please capture and attach the OVMF debug log:

  -chardev file,id=debug_port_log,path=debug_port.log \
  -device isa-debugcon,iobase=0x402,chardev=debug_port_log \

Thanks.

Comment 3 Laszlo Ersek 2016-06-13 13:06:15 UTC
Hm, I just checked the OVMF version in comment 0 (OVMF-20160608-1.git988715a.el7.noarch), so it's not a duplicate. That version contains the fix for bug 1333238.

Comment 4 Laszlo Ersek 2016-06-13 13:27:21 UTC
Also,

-drive file=/usr/share/OVMF/OVMF_VARS.fd,if=pflash,format=raw,unit=1 \

is wrong. If you don't use libvirt, then please create a copy of the "/usr/share/OVMF/OVMF_VARS.fd" file first, for the VM, and pass that copy to the VM.

Comment 5 Laszlo Ersek 2016-06-13 13:29:22 UTC
Sigh, I see the OVMF debug log was actually attached alongside the report. I guess I'm running low on coffee.

Comment 6 Laszlo Ersek 2016-06-13 13:45:17 UTC
Okay, I think I know what the problem is. It is incorrect configuration of the pxb-pcie devices (= QEMU command line problem).

Namely, you create the following root bridges:
- the default / main root bridge
- the first pxb-pcie (extra) root bridge, called "bridge1", with bus_nr=8
- the second pxb-pcie (extra) root bridge, called "bridge2", with bus_nr=20

Accordingly, the OVMF debug log contains the following messages:

> InitRootBridge: populated root bus 0, with room for 7 subordinate bus(es)
> InitRootBridge: populated root bus 8, with room for 11 subordinate bus(es)
> InitRootBridge: populated root bus 20, with room for 235 subordinate bus(es)

But then you try to plug 32 switches (each with one downstream port) into "bridge1". That cannot work -- each downstream port functions as a separate bridge, and requires a dedicated bus number range (consisting of at least one bus value). So, for 32 switches, each with one downstream port, you need to leave room for at least 32 buses behind "bridge1"; you can't squeeze them into a range of size 11 (20-8-1).

In the additional info, you state that "guest can boot up successfully with 1 PXB/32 switches" -- that's exactly because in that case, there is no "bridge2" with "bus_nr=20" that limits the bus number range available to "bridge1".

In brief, if you change

  -device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=20 \

to

  -device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=128 \

for example, then it should work. Please retest.
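As a quick sanity check (an editorial sketch, not part of the original comment), the room left behind "bridge1" can be computed directly from the two bus_nr values:

```shell
# Room behind bridge1 is bounded by the next pxb-pcie's bus_nr.
# With bus_nr=8 and bus_nr=20, only 20 - 8 - 1 = 11 buses are free;
# raising the second bus_nr to 128 leaves 128 - 8 - 1 = 119.
first=8
echo "room with bus_nr=20:  $((20 - first - 1))"
echo "room with bus_nr=128: $((128 - first - 1))"
```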

Comment 7 jingzhao 2016-06-14 05:47:24 UTC
(In reply to Laszlo Ersek from comment #6)

Yes, the guest boots up when I changed the config. But I have some questions about it; could you help me?
1. How should I know the bus number range for every pxb-pcie bus? By checking the OVMF log, or is there another method? In other words, how should I set the bus_nr parameter, and are there any limits?
2. Is the bus_nr parameter for the range of devices connected to the pxb-pcie, rather than the number of the pxb-pcie bus itself?

Thanks 
Jing Zhao

Comment 8 Laszlo Ersek 2016-06-14 14:24:57 UTC
(In reply to jingzhao from comment #7)
> (In reply to Laszlo Ersek from comment #6)

> > In brief, if you change
> > 
> >   -device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=20 \
> > 
> > to
> > 
> >   -device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=128 \
> > 
> > for example, then it should work. Please retest.
> 
> Yes, the guest boots up when I changed the config.

Great, thank you!

> But I have some questions about it; could you help me?
> 1. How should I know the bus number range for every pxb-pcie bus? By checking
> the OVMF log, or is there another method? In other words, how should I set
> the bus_nr parameter, and are there any limits?

This is a valid question.

The bus_nr properties *subdivide* the bus number range [0x00, 0xFF] (inclusive). For every pxb-pcie device, the bus number specified with bus_nr=XXX will be assigned as the extra root bus's own bus number, and the bus numbers *up to and excluding* the next bus_nr property will be available for bridges and downstream PCIe ports that are behind that extra root bus.

So, in this example, you specified the following: bus_nr=8 and bus_nr=20. This implies the following:

- The main (default) root bus receives bus number 0.
- The first pxb-pcie extra root bus receives bus number 8.
- The second pxb-pcie extra root bus receives bus number 20.
- Bridges and PCIe downstream ports hanging off of the default root bus have
  bus numbers 1, 2, ..., 7 available.
- Bridges and PCIe downstream ports hanging off of the first pxb-pcie root bus
  have bus numbers 9, 10, 11, ... 19 available.
- Bridges and PCIe downstream ports hanging off of the second pxb-pcie root bus
  have bus numbers 21, 22, ... 255 available.

So, if you have a pxb-pcie device called "root-bridge-N", with bus_nr=M, and you know that you want to plug K downstream ports in it (through K switches), then for the next pxb-pcie device, called "root-bridge-(N+1)", you should pick bus_nr=(M+K+1).

> 2. Is the bus_nr parameter for the range of devices connected to the
> pxb-pcie, rather than the number of the pxb-pcie bus itself?

The bus_nr property affects *both*. With Marcel and others we discussed this question for a long time on various upstream lists, when I was working on bug 1193080 (= OVMF support for PXB).

The rule is simple: bus_nr first determines the bus number of the extra root bridge itself, and second, it determines the bus number range for all bridges and downstream PCIe ports behind the root bridge. That range starts at bus_nr+1, and it ends just before the next higher bus_nr property. If there is no next bus_nr property, then 255 is used (as the inclusive maximum).
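This subdivision rule can be illustrated with a short sketch (an editorial addition; the numbers reproduce the OVMF log lines quoted in comment 6):

```shell
# bus_nr values subdivide the PCI bus number range [0, 255]:
# each root bus owns its own number, and subordinate bridges behind it
# may use the numbers up to (but excluding) the next bus_nr, or up to
# 255 (inclusive) for the last root bus.
set -- 0 8 20   # main root bus, then the two pxb-pcie bus_nr values
while [ $# -gt 0 ]; do
    cur=$1; shift
    if [ $# -gt 0 ]; then
        room=$(($1 - cur - 1))
    else
        room=$((255 - cur))
    fi
    echo "root bus $cur: room for $room subordinate bus(es)"
done
```

Run with bus_nr values 8 and 20, this prints rooms of 7, 11, and 235, matching the InitRootBridge lines in the attached debug log.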


BTW, if you verify this bug, you can set 1193080 to VERIFIED immediately, if you want. With this test case, you are exercising the code that I wrote for bug 1193080.

For the BZ at hand, given your successful testing here, I propose NOTABUG. For 1193080, I propose VERIFIED (see above), but I'll leave it to you.

Thanks!

Comment 9 jingzhao 2016-06-15 05:26:52 UTC
(In reply to Laszlo Ersek from comment #8)

Thanks very much, Laszlo.
Following your suggestion, I changed the second bus_nr to 41, but the guest still did not boot successfully (the second bus_nr = 8 (the first bus_nr) + 32 switches connected to the first pxb device + 1). The following is the OVMF log information:
PciHostBridgeGetRootBridges: 2 extra root buses reported by QEMU
InitRootBridge: populated root bus 0, with room for 7 subordinate bus(es)
InitRootBridge: populated root bus 8, with room for 32 subordinate bus(es)
InitRootBridge: populated root bus 41, with room for 214 subordinate bus(es)
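One possible explanation for why the 32 buses left by bus_nr=41 are still not enough (an editorial note assuming standard PCI-PCI bridge enumeration; this accounting is not stated in the visible comments): the ioh3420 root port, the x3130 upstream port, and the xio3130 downstream port are each PCI-PCI bridges, and each consumes one bus number, so every switch chain needs about three buses rather than one.

```shell
# Editorial estimate (assumed, not confirmed in this report): every
# PCI-PCI bridge consumes one bus number, and each switch chain here
# contains three of them (ioh3420 root port, x3130 upstream port,
# xio3130 downstream port).
switches=32
buses_per_chain=3
needed=$((switches * buses_per_chain))
available=$((41 - 8 - 1))   # what bus_nr=41 leaves behind bridge1
echo "needed: $needed, available: $available"
```

Under this assumption 96 buses are needed but only 32 are available, which would also be consistent with bus_nr=128 working (119 buses available).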

PS: the following is the command line:

[root@localhost home]# cat pxb-q35-ovmf1.sh 
/usr/libexec/qemu-kvm \
-M q35 \
-cpu SandyBridge \
-monitor stdio \
-m 4G \
-vga qxl \
-drive file=/usr/share/OVMF/OVMF_CODE.secboot.fd,if=pflash,format=raw,unit=0,readonly=on \
-drive file=/usr/share/OVMF/OVMF_VARS.fd,if=pflash,format=raw,unit=1 \
-debugcon file:/home/q35.ovmf.log \
-global isa-debugcon.iobase=0x402 \
-spice port=5932,disable-ticketing \
-smp 4,sockets=4,cores=1,threads=1 \
-object memory-backend-ram,size=1024M,id=ram-node0 \
-numa node,nodeid=0,cpus=0,memdev=ram-node0 \
-object memory-backend-ram,size=1024M,id=ram-node1 \
-numa node,nodeid=1,cpus=1,memdev=ram-node1 \
-object memory-backend-ram,size=1024M,id=ram-node2 \
-numa node,nodeid=2,cpus=2,memdev=ram-node2 \
-object memory-backend-ram,size=1024M,id=ram-node3 \
-numa node,nodeid=3,cpus=3,memdev=ram-node3 \
-device pxb-pcie,id=bridge1,bus=pcie.0,numa_node=0,bus_nr=8 \
-device ioh3420,bus=bridge1,id=root1.0,slot=1 \
-device x3130-upstream,bus=root1.0,id=upstream1.1 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.1,chassis=2 \
-device virtio-net-pci,bus=downstream1.1,netdev=tap10,mac=9a:6a:6b:6c:6d:6e -netdev tap,id=tap10 \
-device ioh3420,bus=bridge1,id=root1.1,slot=2 \
-device x3130-upstream,bus=root1.1,id=upstream1.2 \
-device xio3130-downstream,bus=upstream1.2,id=downstream1.2,chassis=3 \
-drive if=none,id=drive0,file=/home/pxb-ovmf.qcow2 \
-device virtio-blk-pci,drive=drive0,scsi=off,bus=downstream1.2 \
-device ioh3420,bus=bridge1,id=root1.2,slot=3 \
-device x3130-upstream,bus=root1.2,id=upstream1.3 \
-device xio3130-downstream,bus=upstream1.3,id=downstream1.3,chassis=4 \
-device ioh3420,bus=bridge1,id=root1.3,slot=4 \
-device x3130-upstream,bus=root1.3,id=upstream1.4 \
-device xio3130-downstream,bus=upstream1.4,id=downstream1.4,chassis=5 \
-device ioh3420,bus=bridge1,id=root1.4,slot=5 \
-device x3130-upstream,bus=root1.4,id=upstream1.5 \
-device xio3130-downstream,bus=upstream1.5,id=downstream1.5,chassis=6 \
-device ioh3420,bus=bridge1,id=root1.5,slot=6 \
-device x3130-upstream,bus=root1.5,id=upstream1.6 \
-device xio3130-downstream,bus=upstream1.6,id=downstream1.6,chassis=7 \
-device ioh3420,bus=bridge1,id=root1.6,slot=7 \
-device x3130-upstream,bus=root1.6,id=upstream1.7 \
-device xio3130-downstream,bus=upstream1.7,id=downstream1.7,chassis=8 \
-device ioh3420,bus=bridge1,id=root1.7,slot=8 \
-device x3130-upstream,bus=root1.7,id=upstream1.8 \
-device xio3130-downstream,bus=upstream1.8,id=downstream1.8,chassis=9 \
-device ioh3420,bus=bridge1,id=root1.8,slot=9 \
-device x3130-upstream,bus=root1.8,id=upstream1.9 \
-device xio3130-downstream,bus=upstream1.9,id=downstream1.9,chassis=10 \
-device ioh3420,bus=bridge1,id=root1.9,slot=10 \
-device x3130-upstream,bus=root1.9,id=upstream1.10 \
-device xio3130-downstream,bus=upstream1.10,id=downstream1.10,chassis=11 \
-device ioh3420,bus=bridge1,id=root1.10,slot=11 \
-device x3130-upstream,bus=root1.10,id=upstream1.11 \
-device xio3130-downstream,bus=upstream1.11,id=downstream1.11,chassis=12 \
-device ioh3420,bus=bridge1,id=root1.11,slot=12 \
-device x3130-upstream,bus=root1.11,id=upstream1.12 \
-device xio3130-downstream,bus=upstream1.12,id=downstream1.12,chassis=13 \
-device ioh3420,bus=bridge1,id=root1.12,slot=13 \
-device x3130-upstream,bus=root1.12,id=upstream1.13 \
-device xio3130-downstream,bus=upstream1.13,id=downstream1.13,chassis=14 \
-device ioh3420,bus=bridge1,id=root1.13,slot=14 \
-device x3130-upstream,bus=root1.13,id=upstream1.14 \
-device xio3130-downstream,bus=upstream1.14,id=downstream1.14,chassis=15 \
-device ioh3420,bus=bridge1,id=root1.14,slot=15 \
-device x3130-upstream,bus=root1.14,id=upstream1.15 \
-device xio3130-downstream,bus=upstream1.15,id=downstream1.15,chassis=16 \
-device ioh3420,bus=bridge1,id=root1.15,slot=16 \
-device x3130-upstream,bus=root1.15,id=upstream1.16 \
-device xio3130-downstream,bus=upstream1.16,id=downstream1.16,chassis=17 \
-device ioh3420,bus=bridge1,id=root1.16,slot=17 \
-device x3130-upstream,bus=root1.16,id=upstream1.17 \
-device xio3130-downstream,bus=upstream1.17,id=downstream1.17,chassis=18 \
-device ioh3420,bus=bridge1,id=root1.17,slot=18 \
-device x3130-upstream,bus=root1.17,id=upstream1.18 \
-device xio3130-downstream,bus=upstream1.18,id=downstream1.18,chassis=19 \
-device ioh3420,bus=bridge1,id=root1.18,slot=19 \
-device x3130-upstream,bus=root1.18,id=upstream1.19 \
-device xio3130-downstream,bus=upstream1.19,id=downstream1.19,chassis=20 \
-device ioh3420,bus=bridge1,id=root1.19,slot=20 \
-device x3130-upstream,bus=root1.19,id=upstream1.20 \
-device xio3130-downstream,bus=upstream1.20,id=downstream1.20,chassis=21 \
-device ioh3420,bus=bridge1,id=root1.20,slot=21 \
-device x3130-upstream,bus=root1.20,id=upstream1.21 \
-device xio3130-downstream,bus=upstream1.21,id=downstream1.21,chassis=22 \
-device ioh3420,bus=bridge1,id=root1.21,slot=22 \
-device x3130-upstream,bus=root1.21,id=upstream1.22 \
-device xio3130-downstream,bus=upstream1.22,id=downstream1.22,chassis=23 \
-device ioh3420,bus=bridge1,id=root1.22,slot=23 \
-device x3130-upstream,bus=root1.22,id=upstream1.23 \
-device xio3130-downstream,bus=upstream1.23,id=downstream1.23,chassis=24 \
-device ioh3420,bus=bridge1,id=root1.23,slot=24 \
-device x3130-upstream,bus=root1.23,id=upstream1.24 \
-device xio3130-downstream,bus=upstream1.24,id=downstream1.24,chassis=25 \
-device ioh3420,bus=bridge1,id=root1.24,slot=25 \
-device x3130-upstream,bus=root1.24,id=upstream1.25 \
-device xio3130-downstream,bus=upstream1.25,id=downstream1.25,chassis=26 \
-device ioh3420,bus=bridge1,id=root1.25,slot=26 \
-device x3130-upstream,bus=root1.25,id=upstream1.26 \
-device xio3130-downstream,bus=upstream1.26,id=downstream1.26,chassis=27 \
-device ioh3420,bus=bridge1,id=root1.26,slot=27 \
-device x3130-upstream,bus=root1.26,id=upstream1.27 \
-device xio3130-downstream,bus=upstream1.27,id=downstream1.27,chassis=28 \
-device ioh3420,bus=bridge1,id=root1.27,slot=28 \
-device x3130-upstream,bus=root1.27,id=upstream1.28 \
-device xio3130-downstream,bus=upstream1.28,id=downstream1.28,chassis=29 \
-device ioh3420,bus=bridge1,id=root1.28,slot=29 \
-device x3130-upstream,bus=root1.28,id=upstream1.29 \
-device xio3130-downstream,bus=upstream1.29,id=downstream1.29,chassis=30 \
-device ioh3420,bus=bridge1,id=root1.29,slot=30 \
-device x3130-upstream,bus=root1.29,id=upstream1.30 \
-device xio3130-downstream,bus=upstream1.30,id=downstream1.30,chassis=31 \
-device ioh3420,bus=bridge1,id=root1.30,slot=31 \
-device x3130-upstream,bus=root1.30,id=upstream1.31 \
-device xio3130-downstream,bus=upstream1.31,id=downstream1.31,chassis=32 \
-device ioh3420,bus=bridge1,id=root1.31,slot=32 \
-device x3130-upstream,bus=root1.31,id=upstream1.32 \
-device xio3130-downstream,bus=upstream1.32,id=downstream1.32,chassis=33 \
-device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=41 \
-device ioh3420,bus=bridge2,id=root2.0,slot=33 \
-device x3130-upstream,bus=root2.0,id=upstream2.1 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.1,chassis=34 \
-device ioh3420,bus=bridge2,id=root2.1,slot=34 \
-device x3130-upstream,bus=root2.1,id=upstream2.2 \
-device xio3130-downstream,bus=upstream2.2,id=downstream2.2,chassis=35 \
-device ioh3420,bus=bridge2,id=root2.2,slot=35 \
-device x3130-upstream,bus=root2.2,id=upstream2.3 \
-device xio3130-downstream,bus=upstream2.3,id=downstream2.3,chassis=36 \
-device ioh3420,bus=bridge2,id=root2.3,slot=36 \
-device x3130-upstream,bus=root2.3,id=upstream2.4 \
-device xio3130-downstream,bus=upstream2.4,id=downstream2.4,chassis=37 \
-device ioh3420,bus=bridge2,id=root2.4,slot=37 \
-device x3130-upstream,bus=root2.4,id=upstream2.5 \
-device xio3130-downstream,bus=upstream2.5,id=downstream2.5,chassis=38 \
-device ioh3420,bus=bridge2,id=root2.5,slot=38 \
-device x3130-upstream,bus=root2.5,id=upstream2.6 \
-device xio3130-downstream,bus=upstream2.6,id=downstream2.6,chassis=39 \
-device ioh3420,bus=bridge2,id=root2.6,slot=39 \
-device x3130-upstream,bus=root2.6,id=upstream2.7 \
-device xio3130-downstream,bus=upstream2.7,id=downstream2.7,chassis=40 \
-device ioh3420,bus=bridge2,id=root2.7,slot=40 \
-device x3130-upstream,bus=root2.7,id=upstream2.8 \
-device xio3130-downstream,bus=upstream2.8,id=downstream2.8,chassis=41 \
-device ioh3420,bus=bridge2,id=root2.8,slot=41 \
-device x3130-upstream,bus=root2.8,id=upstream2.9 \
-device xio3130-downstream,bus=upstream2.9,id=downstream2.9,chassis=42 \
-device ioh3420,bus=bridge2,id=root2.9,slot=42 \
-device x3130-upstream,bus=root2.9,id=upstream2.10 \
-device xio3130-downstream,bus=upstream2.10,id=downstream2.10,chassis=43 \
-device ioh3420,bus=bridge2,id=root2.10,slot=43 \
-device x3130-upstream,bus=root2.10,id=upstream2.11 \
-device xio3130-downstream,bus=upstream2.11,id=downstream2.11,chassis=44 \
-device ioh3420,bus=bridge2,id=root2.11,slot=44 \
-device x3130-upstream,bus=root2.11,id=upstream2.12 \
-device xio3130-downstream,bus=upstream2.12,id=downstream2.12,chassis=45 \
-device ioh3420,bus=bridge2,id=root2.12,slot=45 \
-device x3130-upstream,bus=root2.12,id=upstream2.13 \
-device xio3130-downstream,bus=upstream2.13,id=downstream2.13,chassis=46 \
-device ioh3420,bus=bridge2,id=root2.13,slot=46 \
-device x3130-upstream,bus=root2.13,id=upstream2.14 \
-device xio3130-downstream,bus=upstream2.14,id=downstream2.14,chassis=47 \
-device ioh3420,bus=bridge2,id=root2.14,slot=47 \
-device x3130-upstream,bus=root2.14,id=upstream2.15 \
-device xio3130-downstream,bus=upstream2.15,id=downstream2.15,chassis=48 \
-device ioh3420,bus=bridge2,id=root2.15,slot=48 \
-device x3130-upstream,bus=root2.15,id=upstream2.16 \
-device xio3130-downstream,bus=upstream2.16,id=downstream2.16,chassis=49 \
-device ioh3420,bus=bridge2,id=root2.16,slot=49 \
-device x3130-upstream,bus=root2.16,id=upstream2.17 \
-device xio3130-downstream,bus=upstream2.17,id=downstream2.17,chassis=50 \
-device ioh3420,bus=bridge2,id=root2.17,slot=50 \
-device x3130-upstream,bus=root2.17,id=upstream2.18 \
-device xio3130-downstream,bus=upstream2.18,id=downstream2.18,chassis=51 \
-device ioh3420,bus=bridge2,id=root2.18,slot=51 \
-device x3130-upstream,bus=root2.18,id=upstream2.19 \
-device xio3130-downstream,bus=upstream2.19,id=downstream2.19,chassis=52 \
-device ioh3420,bus=bridge2,id=root2.19,slot=52 \
-device x3130-upstream,bus=root2.19,id=upstream2.20 \
-device xio3130-downstream,bus=upstream2.20,id=downstream2.20,chassis=53 \
-device ioh3420,bus=bridge2,id=root2.20,slot=53 \
-device x3130-upstream,bus=root2.20,id=upstream2.21 \
-device xio3130-downstream,bus=upstream2.21,id=downstream2.21,chassis=54 \
-device ioh3420,bus=bridge2,id=root2.21,slot=54 \
-device x3130-upstream,bus=root2.21,id=upstream2.22 \
-device xio3130-downstream,bus=upstream2.22,id=downstream2.22,chassis=55 \
-device ioh3420,bus=bridge2,id=root2.22,slot=55 \
-device x3130-upstream,bus=root2.22,id=upstream2.23 \
-device xio3130-downstream,bus=upstream2.23,id=downstream2.23,chassis=56 \
-device ioh3420,bus=bridge2,id=root2.23,slot=56 \
-device x3130-upstream,bus=root2.23,id=upstream2.24 \
-device xio3130-downstream,bus=upstream2.24,id=downstream2.24,chassis=57 \
-device ioh3420,bus=bridge2,id=root2.24,slot=57 \
-device x3130-upstream,bus=root2.24,id=upstream2.25 \
-device xio3130-downstream,bus=upstream2.25,id=downstream2.25,chassis=58 \
-device ioh3420,bus=bridge2,id=root2.25,slot=58 \
-device x3130-upstream,bus=root2.25,id=upstream2.26 \
-device xio3130-downstream,bus=upstream2.26,id=downstream2.26,chassis=59 \
-device ioh3420,bus=bridge2,id=root2.26,slot=59 \
-device x3130-upstream,bus=root2.26,id=upstream2.27 \
-device xio3130-downstream,bus=upstream2.27,id=downstream2.27,chassis=60 \
-device ioh3420,bus=bridge2,id=root2.27,slot=60 \
-device x3130-upstream,bus=root2.27,id=upstream2.28 \
-device xio3130-downstream,bus=upstream2.28,id=downstream2.28,chassis=61 \
-device ioh3420,bus=bridge2,id=root2.28,slot=61 \
-device x3130-upstream,bus=root2.28,id=upstream2.29 \
-device xio3130-downstream,bus=upstream2.29,id=downstream2.29,chassis=62 \
-device ioh3420,bus=bridge2,id=root2.29,slot=62 \
-device x3130-upstream,bus=root2.29,id=upstream2.30 \
-device xio3130-downstream,bus=upstream2.30,id=downstream2.30,chassis=63 \
-device ioh3420,bus=bridge2,id=root2.30,slot=63 \
-device x3130-upstream,bus=root2.30,id=upstream2.31 \
-device xio3130-downstream,bus=upstream2.31,id=downstream2.31,chassis=64 \


Thanks

Comment 10 Marcel Apfelbaum 2016-06-15 09:29:55 UTC
(In reply to jingzhao from comment #9)
> (In reply to Laszlo Ersek from comment #8)
> > (In reply to jingzhao from comment #7)
> > > (In reply to Laszlo Ersek from comment #6)
> > 
> > > > In brief, if you change
> > > > 
> > > >   -device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=20 \
> > > > 
> > > > to
> > > > 
> > > >   -device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=128 \
> > > > 
> > > > for example, then it should work. Please retest.
> > > 
> > > Yes, guest boot up when I changed the config.
> > 
> > Great, thank you!
> > 
> > > But I have some questions
> > > about it. Could you help me?
> > > 1. How can I know the bus number range for every pxb-pcie bus? By
> > > checking the OVMF log, or some other method? In other words, how should
> > > I set the bus_nr parameter, and is there any limit?
> > 
> > This is a valid question.
> > 
> > The bus_nr properties *subdivide* the bus number range [0x00, 0xFF]
> > (inclusive). For every pxb-pcie device, the bus number specified with
> > bus_nr=XXX will be assigned as the extra root bus's own bus number, and the
> > bus numbers *up to and excluding* the next bus_nr property will be available
> > for bridges and downstream PCIe ports that are behind that extra root bus.
> > 
> > So, in this example, you specified the following: bus_nr=8 and bus_nr=20.
> > This implies the following:
> > 
> > - The main (default) root bus receives bus number 0.
> > - The first pxb-pcie extra root bus receives bus number 8.
> > - The second pxb-pcie extra root bus receives bus number 20.
> > - Bridges and PCIe downstream ports hanging off of the default root bus have
> >   bus numbers 1, 2, ..., 7 available.
> > - Bridges and PCIe downstream ports hanging off of the first pxb-pcie root
> > bus
> >   have bus numbers 9, 10, 11, ... 19 available.
> > - Bridges and PCIe downstream ports hanging off of the second pxb-pcie root
> > bus
> >   have bus numbers 21, 22, ... 255 available.
> > 
> > So, if you have a pxb-pcie device called "root-bridge-N", with bus_nr=M, and
> > you know that you want to plug K downstream ports in it (through K
> > switches), then for the next pxb-pcie device, called "root-bridge-(N+1)",
> > you should pick bus_nr=(M+K+1).
> > 
> > > 2. Is the "bus_nr" parameter for the bus number range of the devices
> > > connected to the pxb-pcie, rather than for the bus number of the
> > > pxb-pcie itself?
> > 
> > The bus_nr property affects *both*. With Marcel and others we discussed this
> > question for a long time on various upstream lists, when I was working on
> > bug 1193080 (= OVMF support for PXB).
> > 
> > The rule is simple: bus_nr first determines the bus number of the extra root
> > bridge itself, and second, it determines the bus number range for all
> > bridges and downstream PCIe ports behind the root bridge. That range starts
> > at bus_nr+1, and it ends just before the next lowest bus_nr property. If
> > there is no next bus_nr property, then 255 is used (as inclusive maximum).
> > 
> > 
> > BTW, if you verify this bug, you can set 1193080 to VERIFIED immediately, if
> > you want. With this test case, you are exercising the code that I wrote for
> > bug 1193080.
> > 
> > For the BZ at hand, given your successful testing here, I propose NOTABUG.
> > For 1193080, I propose VERIFIED (see above), but I'll leave it to you.
> > 
> > Thanks!
> 
> Thanks very much, Laszlo.
> Following your suggestion, it still failed: when I changed the second
> bus_nr to 41, the guest again did not boot up successfully (the second
> bus_nr = 8 (the first bus_nr) + 32 switches connected to the first pxb
> device + 1). The following is the OVMF log information:
> PciHostBridgeGetRootBridges: 2 extra root buses reported by QEMU
> InitRootBridge: populated root bus 0, with room for 7 subordinate bus(es)
> InitRootBridge: populated root bus 8, with room for 32 subordinate bus(es)
> InitRootBridge: populated root bus 41, with room for 214 subordinate bus(es)
> 
> Ps: the following is the command line:
> 
> [root@localhost home]# cat pxb-q35-ovmf1.sh 
> /usr/libexec/qemu-kvm \
> -M q35 \
> -cpu SandyBridge \
> -monitor stdio \
> -m 4G \
> -vga qxl \
> -drive
> file=/usr/share/OVMF/OVMF_CODE.secboot.fd,if=pflash,format=raw,unit=0,
> readonly=on \
> -drive file=/usr/share/OVMF/OVMF_VARS.fd,if=pflash,format=raw,unit=1 \
> -debugcon file:/home/q35.ovmf.log \
> -global isa-debugcon.iobase=0x402 \
> -spice port=5932,disable-ticketing \
> -smp 4,sockets=4,cores=1,threads=1 \
> -object memory-backend-ram,size=1024M,id=ram-node0 \
> -numa node,nodeid=0,cpus=0,memdev=ram-node0 \
> -object memory-backend-ram,size=1024M,id=ram-node1 \
> -numa node,nodeid=1,cpus=1,memdev=ram-node1 \
> -object memory-backend-ram,size=1024M,id=ram-node2 \
> -numa node,nodeid=2,cpus=2,memdev=ram-node2 \
> -object memory-backend-ram,size=1024M,id=ram-node3 \
> -numa node,nodeid=3,cpus=3,memdev=ram-node3 \
> -device pxb-pcie,id=bridge1,bus=pcie.0,numa_node=0,bus_nr=8 \
> -device ioh3420,bus=bridge1,id=root1.0,slot=1 \
> -device x3130-upstream,bus=root1.0,id=upstream1.1 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.1,chassis=2 \
> -device virtio-net-pci,bus=downstream1.1,netdev=tap10,mac=9a:6a:6b:6c:6d:6e
> -netdev tap,id=tap10 \
> -device ioh3420,bus=bridge1,id=root1.1,slot=2 \
> -device x3130-upstream,bus=root1.1,id=upstream1.2 \
> -device xio3130-downstream,bus=upstream1.2,id=downstream1.2,chassis=3 \
> -drive if=none,id=drive0,file=/home/pxb-ovmf.qcow2 \
> -device virtio-blk-pci,drive=drive0,scsi=off,bus=downstream1.2 \
> -device ioh3420,bus=bridge1,id=root1.2,slot=3 \
> -device x3130-upstream,bus=root1.2,id=upstream1.3 \
> -device xio3130-downstream,bus=upstream1.3,id=downstream1.3,chassis=4 \
> -device ioh3420,bus=bridge1,id=root1.3,slot=4 \
> -device x3130-upstream,bus=root1.3,id=upstream1.4 \
> -device xio3130-downstream,bus=upstream1.4,id=downstream1.4,chassis=5 \
> -device ioh3420,bus=bridge1,id=root1.4,slot=5 \
> -device x3130-upstream,bus=root1.4,id=upstream1.5 \
> -device xio3130-downstream,bus=upstream1.5,id=downstream1.5,chassis=6 \
> -device ioh3420,bus=bridge1,id=root1.5,slot=6 \
> -device x3130-upstream,bus=root1.5,id=upstream1.6 \
> -device xio3130-downstream,bus=upstream1.6,id=downstream1.6,chassis=7 \
> -device ioh3420,bus=bridge1,id=root1.6,slot=7 \
> -device x3130-upstream,bus=root1.6,id=upstream1.7 \
> -device xio3130-downstream,bus=upstream1.7,id=downstream1.7,chassis=8 \
> -device ioh3420,bus=bridge1,id=root1.7,slot=8 \
> -device x3130-upstream,bus=root1.7,id=upstream1.8 \
> -device xio3130-downstream,bus=upstream1.8,id=downstream1.8,chassis=9 \
> -device ioh3420,bus=bridge1,id=root1.8,slot=9 \
> -device x3130-upstream,bus=root1.8,id=upstream1.9 \
> -device xio3130-downstream,bus=upstream1.9,id=downstream1.9,chassis=10 \
> -device ioh3420,bus=bridge1,id=root1.9,slot=10 \
> -device x3130-upstream,bus=root1.9,id=upstream1.10 \
> -device xio3130-downstream,bus=upstream1.10,id=downstream1.10,chassis=11 \
> -device ioh3420,bus=bridge1,id=root1.10,slot=11 \
> -device x3130-upstream,bus=root1.10,id=upstream1.11 \
> -device xio3130-downstream,bus=upstream1.11,id=downstream1.11,chassis=12 \
> -device ioh3420,bus=bridge1,id=root1.11,slot=12 \
> -device x3130-upstream,bus=root1.11,id=upstream1.12 \
> -device xio3130-downstream,bus=upstream1.12,id=downstream1.12,chassis=13 \
> -device ioh3420,bus=bridge1,id=root1.12,slot=13 \
> -device x3130-upstream,bus=root1.12,id=upstream1.13 \
> -device xio3130-downstream,bus=upstream1.13,id=downstream1.13,chassis=14 \
> -device ioh3420,bus=bridge1,id=root1.13,slot=14 \
> -device x3130-upstream,bus=root1.13,id=upstream1.14 \
> -device xio3130-downstream,bus=upstream1.14,id=downstream1.14,chassis=15 \
> -device ioh3420,bus=bridge1,id=root1.14,slot=15 \
> -device x3130-upstream,bus=root1.14,id=upstream1.15 \
> -device xio3130-downstream,bus=upstream1.15,id=downstream1.15,chassis=16 \
> -device ioh3420,bus=bridge1,id=root1.15,slot=16 \
> -device x3130-upstream,bus=root1.15,id=upstream1.16 \
> -device xio3130-downstream,bus=upstream1.16,id=downstream1.16,chassis=17 \
> -device ioh3420,bus=bridge1,id=root1.16,slot=17 \
> -device x3130-upstream,bus=root1.16,id=upstream1.17 \
> -device xio3130-downstream,bus=upstream1.17,id=downstream1.17,chassis=18 \
> -device ioh3420,bus=bridge1,id=root1.17,slot=18 \
> -device x3130-upstream,bus=root1.17,id=upstream1.18 \
> -device xio3130-downstream,bus=upstream1.18,id=downstream1.18,chassis=19 \
> -device ioh3420,bus=bridge1,id=root1.18,slot=19 \
> -device x3130-upstream,bus=root1.18,id=upstream1.19 \
> -device xio3130-downstream,bus=upstream1.19,id=downstream1.19,chassis=20 \
> -device ioh3420,bus=bridge1,id=root1.19,slot=20 \
> -device x3130-upstream,bus=root1.19,id=upstream1.20 \
> -device xio3130-downstream,bus=upstream1.20,id=downstream1.20,chassis=21 \
> -device ioh3420,bus=bridge1,id=root1.20,slot=21 \
> -device x3130-upstream,bus=root1.20,id=upstream1.21 \
> -device xio3130-downstream,bus=upstream1.21,id=downstream1.21,chassis=22 \
> -device ioh3420,bus=bridge1,id=root1.21,slot=22 \
> -device x3130-upstream,bus=root1.21,id=upstream1.22 \
> -device xio3130-downstream,bus=upstream1.22,id=downstream1.22,chassis=23 \
> -device ioh3420,bus=bridge1,id=root1.22,slot=23 \
> -device x3130-upstream,bus=root1.22,id=upstream1.23 \
> -device xio3130-downstream,bus=upstream1.23,id=downstream1.23,chassis=24 \
> -device ioh3420,bus=bridge1,id=root1.23,slot=24 \
> -device x3130-upstream,bus=root1.23,id=upstream1.24 \
> -device xio3130-downstream,bus=upstream1.24,id=downstream1.24,chassis=25 \
> -device ioh3420,bus=bridge1,id=root1.24,slot=25 \
> -device x3130-upstream,bus=root1.24,id=upstream1.25 \
> -device xio3130-downstream,bus=upstream1.25,id=downstream1.25,chassis=26 \
> -device ioh3420,bus=bridge1,id=root1.25,slot=26 \
> -device x3130-upstream,bus=root1.25,id=upstream1.26 \
> -device xio3130-downstream,bus=upstream1.26,id=downstream1.26,chassis=27 \
> -device ioh3420,bus=bridge1,id=root1.26,slot=27 \
> -device x3130-upstream,bus=root1.26,id=upstream1.27 \
> -device xio3130-downstream,bus=upstream1.27,id=downstream1.27,chassis=28 \
> -device ioh3420,bus=bridge1,id=root1.27,slot=28 \
> -device x3130-upstream,bus=root1.27,id=upstream1.28 \
> -device xio3130-downstream,bus=upstream1.28,id=downstream1.28,chassis=29 \
> -device ioh3420,bus=bridge1,id=root1.28,slot=29 \
> -device x3130-upstream,bus=root1.28,id=upstream1.29 \
> -device xio3130-downstream,bus=upstream1.29,id=downstream1.29,chassis=30 \
> -device ioh3420,bus=bridge1,id=root1.29,slot=30 \
> -device x3130-upstream,bus=root1.29,id=upstream1.30 \
> -device xio3130-downstream,bus=upstream1.30,id=downstream1.30,chassis=31 \
> -device ioh3420,bus=bridge1,id=root1.30,slot=31 \
> -device x3130-upstream,bus=root1.30,id=upstream1.31 \
> -device xio3130-downstream,bus=upstream1.31,id=downstream1.31,chassis=32 \
> -device ioh3420,bus=bridge1,id=root1.31,slot=32 \
> -device x3130-upstream,bus=root1.31,id=upstream1.32 \
> -device xio3130-downstream,bus=upstream1.32,id=downstream1.32,chassis=33 \
> -device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=41 \
> -device ioh3420,bus=bridge2,id=root2.0,slot=33 \
> -device x3130-upstream,bus=root2.0,id=upstream2.1 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.1,chassis=34 \
> -device ioh3420,bus=bridge2,id=root2.1,slot=34 \
> -device x3130-upstream,bus=root2.1,id=upstream2.2 \
> -device xio3130-downstream,bus=upstream2.2,id=downstream2.2,chassis=35 \
> -device ioh3420,bus=bridge2,id=root2.2,slot=35 \
> -device x3130-upstream,bus=root2.2,id=upstream2.3 \
> -device xio3130-downstream,bus=upstream2.3,id=downstream2.3,chassis=36 \
> -device ioh3420,bus=bridge2,id=root2.3,slot=36 \
> -device x3130-upstream,bus=root2.3,id=upstream2.4 \
> -device xio3130-downstream,bus=upstream2.4,id=downstream2.4,chassis=37 \
> -device ioh3420,bus=bridge2,id=root2.4,slot=37 \
> -device x3130-upstream,bus=root2.4,id=upstream2.5 \
> -device xio3130-downstream,bus=upstream2.5,id=downstream2.5,chassis=38 \
> -device ioh3420,bus=bridge2,id=root2.5,slot=38 \
> -device x3130-upstream,bus=root2.5,id=upstream2.6 \
> -device xio3130-downstream,bus=upstream2.6,id=downstream2.6,chassis=39 \
> -device ioh3420,bus=bridge2,id=root2.6,slot=39 \
> -device x3130-upstream,bus=root2.6,id=upstream2.7 \
> -device xio3130-downstream,bus=upstream2.7,id=downstream2.7,chassis=40 \
> -device ioh3420,bus=bridge2,id=root2.7,slot=40 \
> -device x3130-upstream,bus=root2.7,id=upstream2.8 \
> -device xio3130-downstream,bus=upstream2.8,id=downstream2.8,chassis=41 \
> -device ioh3420,bus=bridge2,id=root2.8,slot=41 \
> -device x3130-upstream,bus=root2.8,id=upstream2.9 \
> -device xio3130-downstream,bus=upstream2.9,id=downstream2.9,chassis=42 \
> -device ioh3420,bus=bridge2,id=root2.9,slot=42 \
> -device x3130-upstream,bus=root2.9,id=upstream2.10 \
> -device xio3130-downstream,bus=upstream2.10,id=downstream2.10,chassis=43 \
> -device ioh3420,bus=bridge2,id=root2.10,slot=43 \
> -device x3130-upstream,bus=root2.10,id=upstream2.11 \
> -device xio3130-downstream,bus=upstream2.11,id=downstream2.11,chassis=44 \
> -device ioh3420,bus=bridge2,id=root2.11,slot=44 \
> -device x3130-upstream,bus=root2.11,id=upstream2.12 \
> -device xio3130-downstream,bus=upstream2.12,id=downstream2.12,chassis=45 \
> -device ioh3420,bus=bridge2,id=root2.12,slot=45 \
> -device x3130-upstream,bus=root2.12,id=upstream2.13 \
> -device xio3130-downstream,bus=upstream2.13,id=downstream2.13,chassis=46 \
> -device ioh3420,bus=bridge2,id=root2.13,slot=46 \
> -device x3130-upstream,bus=root2.13,id=upstream2.14 \
> -device xio3130-downstream,bus=upstream2.14,id=downstream2.14,chassis=47 \
> -device ioh3420,bus=bridge2,id=root2.14,slot=47 \
> -device x3130-upstream,bus=root2.14,id=upstream2.15 \
> -device xio3130-downstream,bus=upstream2.15,id=downstream2.15,chassis=48 \
> -device ioh3420,bus=bridge2,id=root2.15,slot=48 \
> -device x3130-upstream,bus=root2.15,id=upstream2.16 \
> -device xio3130-downstream,bus=upstream2.16,id=downstream2.16,chassis=49 \
> -device ioh3420,bus=bridge2,id=root2.16,slot=49 \
> -device x3130-upstream,bus=root2.16,id=upstream2.17 \
> -device xio3130-downstream,bus=upstream2.17,id=downstream2.17,chassis=50 \
> -device ioh3420,bus=bridge2,id=root2.17,slot=50 \
> -device x3130-upstream,bus=root2.17,id=upstream2.18 \
> -device xio3130-downstream,bus=upstream2.18,id=downstream2.18,chassis=51 \
> -device ioh3420,bus=bridge2,id=root2.18,slot=51 \
> -device x3130-upstream,bus=root2.18,id=upstream2.19 \
> -device xio3130-downstream,bus=upstream2.19,id=downstream2.19,chassis=52 \
> -device ioh3420,bus=bridge2,id=root2.19,slot=52 \
> -device x3130-upstream,bus=root2.19,id=upstream2.20 \
> -device xio3130-downstream,bus=upstream2.20,id=downstream2.20,chassis=53 \
> -device ioh3420,bus=bridge2,id=root2.20,slot=53 \
> -device x3130-upstream,bus=root2.20,id=upstream2.21 \
> -device xio3130-downstream,bus=upstream2.21,id=downstream2.21,chassis=54 \
> -device ioh3420,bus=bridge2,id=root2.21,slot=54 \
> -device x3130-upstream,bus=root2.21,id=upstream2.22 \
> -device xio3130-downstream,bus=upstream2.22,id=downstream2.22,chassis=55 \
> -device ioh3420,bus=bridge2,id=root2.22,slot=55 \
> -device x3130-upstream,bus=root2.22,id=upstream2.23 \
> -device xio3130-downstream,bus=upstream2.23,id=downstream2.23,chassis=56 \
> -device ioh3420,bus=bridge2,id=root2.23,slot=56 \
> -device x3130-upstream,bus=root2.23,id=upstream2.24 \
> -device xio3130-downstream,bus=upstream2.24,id=downstream2.24,chassis=57 \
> -device ioh3420,bus=bridge2,id=root2.24,slot=57 \
> -device x3130-upstream,bus=root2.24,id=upstream2.25 \
> -device xio3130-downstream,bus=upstream2.25,id=downstream2.25,chassis=58 \
> -device ioh3420,bus=bridge2,id=root2.25,slot=58 \
> -device x3130-upstream,bus=root2.25,id=upstream2.26 \
> -device xio3130-downstream,bus=upstream2.26,id=downstream2.26,chassis=59 \
> -device ioh3420,bus=bridge2,id=root2.26,slot=59 \
> -device x3130-upstream,bus=root2.26,id=upstream2.27 \
> -device xio3130-downstream,bus=upstream2.27,id=downstream2.27,chassis=60 \
> -device ioh3420,bus=bridge2,id=root2.27,slot=60 \
> -device x3130-upstream,bus=root2.27,id=upstream2.28 \
> -device xio3130-downstream,bus=upstream2.28,id=downstream2.28,chassis=61 \
> -device ioh3420,bus=bridge2,id=root2.28,slot=61 \
> -device x3130-upstream,bus=root2.28,id=upstream2.29 \
> -device xio3130-downstream,bus=upstream2.29,id=downstream2.29,chassis=62 \
> -device ioh3420,bus=bridge2,id=root2.29,slot=62 \
> -device x3130-upstream,bus=root2.29,id=upstream2.30 \
> -device xio3130-downstream,bus=upstream2.30,id=downstream2.30,chassis=63 \
> -device ioh3420,bus=bridge2,id=root2.30,slot=63 \
> -device x3130-upstream,bus=root2.30,id=upstream2.31 \
> -device xio3130-downstream,bus=upstream2.31,id=downstream2.31,chassis=64 \
> 
> 
> Thanks

Hi,

The combination:
 -device ioh3420,bus=bridge1,id=root1.1,slot=2 \
 -device x3130-upstream,bus=root1.1,id=upstream1.2 \
 -device xio3130-downstream,bus=upstream1.2,id=downstream1.2,chassis=3 \
needs 3 buses, one for each device.

For bridge1 we have:
- bus 8 for the pxb-pcie itself
- buses 9-104 for the switch chains (3 x 32 = 96)
That leaves bus 105 for bridge2 (the next pxb-pcie).

Thanks,
Marcel
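The subdivision rule Laszlo quoted above (each bus_nr starts a range that runs up to just before the next bus_nr, or 255) can be sketched as a small Python helper; the function name is illustrative, and the result matches the "room for N subordinate bus(es)" lines in the OVMF log quoted in comment 9:

```python
def subordinate_room(pxb_bus_nrs, max_bus=255):
    """For the default root bus (0) and each pxb-pcie bus_nr, return how
    many bus numbers remain for bridges/ports behind that root bus."""
    starts = [0] + sorted(pxb_bus_nrs)
    rooms = []
    for i, start in enumerate(starts):
        # The range ends just before the next bus_nr, or at max_bus (255).
        end = starts[i + 1] - 1 if i + 1 < len(starts) else max_bus
        rooms.append(end - start)
    return rooms

# bus_nr=8 and bus_nr=41, as in the command line quoted above:
print(subordinate_room([8, 41]))  # [7, 32, 214], matching the OVMF log
```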

Comment 11 Laszlo Ersek 2016-06-15 10:29:54 UTC
Aww shucks, I've just remembered from the PCIe spec that PCIe root ports
implicitly qualify as PCIe downstream ports, hence they need their own
separate bus numbers too.

On your command line, you don't just create a bunch of switches / downstream
ports, with the switches cascading from each other; you add a bunch of
sibling root ports first (ioh3420), and then plug the switches into those.

So, I actually tested this, with the following trimmed down command line, to
see how the bus numbers are consumed:

qemu-system-x86_64 \
  -M q35 \
  -cpu SandyBridge \
  -monitor stdio \
  -m 4G \
  -vga qxl \
  -enable-kvm \
  \
  -drive file=/usr/share/OVMF/OVMF_CODE.secboot.fd,if=pflash,format=raw,unit=0,readonly=on \
  -drive file=myvars.fd,if=pflash,format=raw,unit=1 \
  \
  -debugcon file:/home/lacos/tmp/q35.ovmf.log \
  -global isa-debugcon.iobase=0x402 \
  \
  -smp 4,sockets=4,cores=1,threads=1 \
  \
  -object memory-backend-ram,size=1024M,id=ram-node0 \
  -numa node,nodeid=0,cpus=0,memdev=ram-node0 \
  -object memory-backend-ram,size=1024M,id=ram-node1 \
  -numa node,nodeid=1,cpus=1,memdev=ram-node1 \
  -object memory-backend-ram,size=1024M,id=ram-node2 \
  -numa node,nodeid=2,cpus=2,memdev=ram-node2 \
  -object memory-backend-ram,size=1024M,id=ram-node3 \
  -numa node,nodeid=3,cpus=3,memdev=ram-node3 \
  \
  -device pxb-pcie,id=bridge1,bus=pcie.0,numa_node=0,bus_nr=8 \
  -device ioh3420,bus=bridge1,id=root1.0,slot=1 \
  -device x3130-upstream,bus=root1.0,id=upstream1.1 \
  -device xio3130-downstream,bus=upstream1.1,id=downstream1.1,chassis=2 \
  -device virtio-net-pci,bus=downstream1.1,netdev=net0,rombar=0 \
  -netdev user,id=net0 \
  \
  -device ioh3420,bus=bridge1,id=root1.1,slot=2 \
  -device x3130-upstream,bus=root1.1,id=upstream1.2 \
  -device xio3130-downstream,bus=upstream1.2,id=downstream1.2,chassis=3 \
  -drive if=none,readonly=on,media=cdrom,id=drive0,file=/mnt/data/isos/iso-fedora/Fedora-Live-Workstation-x86_64-23-10.iso \
  -device virtio-scsi-pci,id=scsi0,bus=downstream1.2 \
  -device scsi-cd,bus=scsi0.0,drive=drive0,bootindex=0 \
  \
  -device ioh3420,bus=bridge1,id=root1.2,slot=3 \
  -device x3130-upstream,bus=root1.2,id=upstream1.3 \
  -device xio3130-downstream,bus=upstream1.3,id=downstream1.3,chassis=4 \
  \
  -device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=128 \
  -device ioh3420,bus=bridge2,id=root2.0,slot=33 \
  -device x3130-upstream,bus=root2.0,id=upstream2.1 \
  -device xio3130-downstream,bus=upstream2.1,id=downstream2.1,chassis=34 \
  \
  -device ioh3420,bus=bridge2,id=root2.1,slot=34 \
  -device x3130-upstream,bus=root2.1,id=upstream2.2 \
  -device xio3130-downstream,bus=upstream2.2,id=downstream2.2,chassis=35

I consulted both the OVMF debug log and the lspci output in the Fedora
guest. Not only do root ports (ioh3420) and switch downstream ports
(xio3130-downstream) consume separate bus numbers, even switch upstream
ports (x3130-upstream) constitute separate bridges, and consume separate bus
numbers!

In the above example, the extra root bridge (pxb-pcie) called "bridge1" gets
bus number 0x08. It has three root ports (ioh3420), with the following
addresses, and bus numbers assigned to the resultant bridges:

  root port address  bus number for devices directly behind the root port
  -----------------  ----------------------------------------------------
            08:00.0  0x09
            08:01.0  0x0c
            08:02.0  0x0f

  upstream port address (x3130-upstream)  resultant bus number
  --------------------------------------  --------------------
                                 09:00.0  0x0a
                                 0c:00.0  0x0d
                                 0f:00.0  0x10

  downstream port address (xio3130-downstream)  resultant bus number
  --------------------------------------------  --------------------
                                       0a:00.0  0x0b
                                       0d:00.0  0x0e
                                       10:00.0  0x11

  actual device addresses
  -----------------------
  0b:00.0 (virtio-net)
  0e:00.0 (virtio-scsi)

Therefore, for the above QEMU command line scheme, we can compute the
minimal bus_nr property of the next pxb-pcie device like this:

  bus_nr(n + 1) = bus_nr(n) +
                  ioh3420_count(n) +
                  x3130_upstream_count(n) +
                  xio3130_downstream_count(n) +
                  1

Applying this to my trimmed down command line above, we get:

  bus_nr(bridge2) = 8 + 3 + 3 + 3 + 1 = 18

And, indeed if I use

  -device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=18

then the guest boots, while

  -device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=17

breaks the boot.


For the large command line in comment 9, we get:

  bus_nr(bridge2) = 8 + 32 + 32 + 32 + 1 = 105

I tested this value, and I could successfully boot the Fedora LiveCD with it
(although OVMF was quite slow in some places). And when I lowered bus_nr to
104, for bridge2, then the boot failed.
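The computation above can be written as a one-line helper (names are illustrative); checking it against both the trimmed-down command line and the large command line from comment 9 gives the same numbers as the boot tests:

```python
def min_next_bus_nr(bus_nr, ioh3420_count, upstream_count, downstream_count):
    # Root ports, upstream ports, and downstream ports each consume one
    # bus number; the +1 skips past these to the first free bus number,
    # which the next pxb-pcie can claim as its own.
    return bus_nr + ioh3420_count + upstream_count + downstream_count + 1

print(min_next_bus_nr(8, 3, 3, 3))     # 18: the trimmed-down command line
print(min_next_bus_nr(8, 32, 32, 32))  # 105: the large command line
```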

Comment 12 Laszlo Ersek 2016-06-15 10:33:07 UTC
(In reply to Marcel Apfelbaum from comment #10)

> The combination:
>  -device ioh3420,bus=bridge1,id=root1.1,slot=2 \
>  -device x3130-upstream,bus=root1.1,id=upstream1.2 \
>  -device xio3130-downstream,bus=upstream1.2,id=downstream1.2,chassis=3 \
> needs 3 buses, one for each device.
> 
> For bridge1 we have:
> - bus 8 for pxb-pcie itself
> - bus 9-104 for switches (3 x 32 = 96)
> That leaves us with bus 105 for the bridge2 (the next pxb-pcie)

Haha, thanks Marcel :) I guess I should have just waited for you to answer
the question in two paragraphs :) It's great to have confirmation from you
for my experimental results!

Comment 13 Laszlo Ersek 2016-06-15 10:55:32 UTC
Jing Zhao,

a side remark: the way you are using OVMF_VARS.fd is not correct. Please refer to bug 1308678 comment 23 bullet (1) for details.

(This doesn't influence the pxb-pcie test at hand in any way, but I thought it best to inform you about it.) Thanks.

Comment 14 jingzhao 2016-06-16 08:27:18 UTC
Thanks for your feedback, Laszlo and Marcel.

I tested it, and the guest can boot up with the full bus number range (up to 255) in use:
 kernel-3.10.0-433.el7.x86_64
 qemu-kvm-rhev-2.6.0-5.el7.x86_64
 OVMF-20160608-1.git988715a.el7.noarch

qemu-system-x86_64 \
  -M q35 \
  -cpu SandyBridge \
  -monitor stdio \
  -m 4G \
  -vga qxl \
  -enable-kvm \
  \
  -drive file=/usr/share/OVMF/OVMF_CODE.secboot.fd,if=pflash,format=raw,unit=0,readonly=on \
  -drive file=myvars.fd,if=pflash,format=raw,unit=1 \
  \
  -debugcon file:/home/lacos/tmp/q35.ovmf.log \
  -global isa-debugcon.iobase=0x402 \
  \
  -smp 4,sockets=4,cores=1,threads=1 \
  \
  -object memory-backend-ram,size=1024M,id=ram-node0 \
  -numa node,nodeid=0,cpus=0,memdev=ram-node0 \
  -object memory-backend-ram,size=1024M,id=ram-node1 \
  -numa node,nodeid=1,cpus=1,memdev=ram-node1 \
  -object memory-backend-ram,size=1024M,id=ram-node2 \
  -numa node,nodeid=2,cpus=2,memdev=ram-node2 \
  -object memory-backend-ram,size=1024M,id=ram-node3 \
  -numa node,nodeid=3,cpus=3,memdev=ram-node3 \
  \
  -device pxb-pcie,id=bridge1,bus=pcie.0,numa_node=0,bus_nr=8 \
 -device ioh3420,bus=bridge1,id=root1.0,slot=1 \
 -device x3130-upstream,bus=root1.0,id=upstream1.1 \
 -device xio3130-downstream,bus=upstream1.1,id=downstream1.1,chassis=2 \
 -device virtio-net-pci,bus=downstream1.1,netdev=tap10,mac=9a:6a:6b:6c:6d:6e -netdev tap,id=tap10 \
 -device ioh3420,bus=bridge1,id=root1.1,slot=2 \
 -device x3130-upstream,bus=root1.1,id=upstream1.2 \
 -device xio3130-downstream,bus=upstream1.2,id=downstream1.2,chassis=3 \
 -drive if=none,id=drive0,file=/home/pxb-ovmf.qcow2 \
 -device virtio-blk-pci,drive=drive0,scsi=off,bus=downstream1.2 \
  ...........

 -device ioh3420,bus=bridge1,id=root1.31,slot=32 \
 -device x3130-upstream,bus=root1.31,id=upstream1.32 \
 -device xio3130-downstream,bus=upstream1.32,id=downstream1.32,chassis=33 \

 -device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=105 \
 -device ioh3420,bus=bridge2,id=root2.0,slot=33 \
 -device x3130-upstream,bus=root2.0,id=upstream2.1 \
 -device xio3130-downstream,bus=upstream2.1,id=downstream2.1,chassis=34 \
    ............
 -device ioh3420,bus=bridge2,id=root2.31,slot=64 \
 -device x3130-upstream,bus=root2.31,id=upstream2.32 \
 -device xio3130-downstream,bus=upstream2.32,id=downstream2.32,chassis=65 \

 -device pxb-pcie,id=bridge3,bus=pcie.0,numa_node=2,bus_nr=202 \
 -device ioh3420,bus=bridge3,id=root3.0,slot=65 \
 -device x3130-upstream,bus=root3.0,id=upstream3.1 \
 -device xio3130-downstream,bus=upstream3.1,id=downstream3.1,chassis=66 \
   ............
 -device ioh3420,bus=bridge3,id=root3.16,slot=81 \
 -device x3130-upstream,bus=root3.16,id=upstream3.17 \
 -device xio3130-downstream,bus=upstream3.17,id=downstream3.17,chassis=82 \
 -device xio3130-downstream,bus=upstream3.17,id=downstream3.18,chassis=83 \
 -device xio3130-downstream,bus=upstream3.17,id=downstream3.19,chassis=84 \

So I think I will close it.

Another question, about the slot parameter of the ioh3420 device: as I understand it, the slot value could start again from 1...32 when moving to the second pxb-pcie, so why does the slot value have to keep increasing even when the pxb-pcie changes?

Thanks
Jing Zhao

Comment 15 jingzhao 2016-06-17 07:29:47 UTC
Hi Laszlo

  Reopening this bug, because I reproduced the issue when I changed to another
configuration with OVMF on Q35.

  kernel-3.10.0-433.el7.x86_64
  qemu-kvm-rhev-2.6.0-5.el7.x86_64
  OVMF-20160608-1.git988715a.el7.noarch

Boot guest with following command

/usr/libexec/qemu-kvm \
-M q35 \
-cpu SandyBridge \
-monitor stdio \
-m 4G \
-vga qxl \
-spice port=5932,disable-ticketing \
-drive file=/usr/share/OVMF/OVMF_CODE.secboot.fd,if=pflash,format=raw,unit=0,readonly=on \
-drive file=/home/OVMF_VARS.fd,if=pflash,format=raw,unit=1 \
-debugcon file:/home/q35.ovmf.log \
-global isa-debugcon.iobase=0x402 \
-smp 4,sockets=4,cores=1,threads=1 \
-object memory-backend-ram,size=1024M,id=ram-node0 \
-numa node,nodeid=0,cpus=0,memdev=ram-node0 \
-object memory-backend-ram,size=1024M,id=ram-node1 \
-numa node,nodeid=1,cpus=1,memdev=ram-node1 \
-object memory-backend-ram,size=1024M,id=ram-node2 \
-numa node,nodeid=2,cpus=2,memdev=ram-node2 \
-object memory-backend-ram,size=1024M,id=ram-node3 \
-numa node,nodeid=3,cpus=3,memdev=ram-node3 \
-device pxb-pcie,id=bridge1,bus=pcie.0,numa_node=0,bus_nr=8 \
-device ioh3420,bus=bridge1,id=root1.0,slot=1 \
-device x3130-upstream,bus=root1.0,id=upstream1.1 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.1,chassis=2 \
-device virtio-net-pci,bus=downstream1.1,netdev=tap10,mac=9a:6a:6b:6c:6d:6e -netdev tap,id=tap10 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.2,chassis=3 \
-drive if=none,id=drive0,file=/home/pxb-ovmf.qcow2 \
-device virtio-blk-pci,drive=drive0,scsi=off,bus=downstream1.2 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.3,chassis=4 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.4,chassis=5 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.5,chassis=6 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.6,chassis=7 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.7,chassis=8 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.8,chassis=9 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.9,chassis=10 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.10,chassis=11 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.11,chassis=12 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.12,chassis=13 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.13,chassis=14 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.14,chassis=15 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.15,chassis=16 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.16,chassis=17 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.17,chassis=18 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.18,chassis=19 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.19,chassis=20 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.20,chassis=21 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.21,chassis=22 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.22,chassis=23 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.23,chassis=24 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.24,chassis=25 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.25,chassis=26 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.26,chassis=27 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.27,chassis=28 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.28,chassis=29 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.29,chassis=30 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.30,chassis=31 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.31,chassis=32 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.32,chassis=33 \
-device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=43 \
-device ioh3420,bus=bridge2,id=root2.0,slot=2 \
-device x3130-upstream,bus=root2.0,id=upstream2.1 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.1,chassis=34 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.2,chassis=35 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.3,chassis=36 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.4,chassis=37 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.5,chassis=38 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.6,chassis=39 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.7,chassis=40 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.8,chassis=41 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.9,chassis=42 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.10,chassis=43 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.11,chassis=44 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.12,chassis=45 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.13,chassis=46 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.14,chassis=47 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.15,chassis=48 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.16,chassis=49 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.17,chassis=50 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.18,chassis=51 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.19,chassis=52 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.20,chassis=53 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.21,chassis=54 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.22,chassis=55 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.23,chassis=56 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.24,chassis=57 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.25,chassis=58 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.26,chassis=59 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.27,chassis=60 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.28,chassis=61 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.29,chassis=62 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.30,chassis=63 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.31,chassis=64 \
-device xio3130-downstream,bus=upstream2.1,id=downstream2.32,chassis=65 \
-device pxb-pcie,id=bridge3,bus=pcie.0,numa_node=2,bus_nr=76 \
-device ioh3420,bus=bridge3,id=root3.0,slot=3 \

PS: The guest can boot up successfully if I set 2 pxb-pcie devices and 255 buses, for example:
-device pxb-pcie,id=bridge1,bus=pcie.0,numa_node=0,bus_nr=8 \
-device ioh3420,bus=bridge1,id=root1.0,slot=1 \
-device x3130-upstream,bus=root1.0,id=upstream1.1 \
-device xio3130-downstream,bus=upstream1.1,id=downstream1.1,chassis=2 \
.............
-device xio3130-downstream,bus=upstream1.1,id=downstream1.32,chassis=33 \

-device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=43 \
-device ioh3420,bus=bridge2,id=root2.0,slot=2 \
-device x3130-upstream,bus=root2.0,id=upstream2.1 \
........(32 downstreams)

-device ioh3420,bus=bridge2,id=root3.0,slot=3 \
-device x3130-upstream,bus=root3.0,id=upstream3.1 \
.......(32 downstreams)

.........

-device ioh3420,bus=bridge2,id=root7.0,slot=7 \
-device x3130-upstream,bus=root7.0,id=upstream7.1 \
-device xio3130-downstream,bus=upstream7.1,id=downstream7.1,chassis=199 \
-device xio3130-downstream,bus=upstream7.1,id=downstream7.2,chassis=194 \
-device xio3130-downstream,bus=upstream7.1,id=downstream7.3,chassis=195 \
-device xio3130-downstream,bus=upstream7.1,id=downstream7.4,chassis=196 \
-device xio3130-downstream,bus=upstream7.1,id=downstream7.5,chassis=197 \
-device xio3130-downstream,bus=upstream7.1,id=downstream7.6,chassis=198 \

I will attach the OVMF log of this configuration.

Comment 16 jingzhao 2016-06-17 07:30:49 UTC
Created attachment 1168949 [details]
ovmf log of 32 downstream config

Comment 17 Laszlo Ersek 2016-06-17 16:33:30 UTC
(In reply to jingzhao from comment #14)

> Another question, about the slot parameter of the ioh3420 device: as I
> understand it, the slot value can be set to 1...32; when changing to the
> second pxb-pcie, why does the slot value have to keep increasing even
> though the pxb-pcie changed?

Sorry, no clue -- I'll leave this to Marcel or Alex. Thanks. (I'm about to look into the rest of your comments though.)

Comment 18 Laszlo Ersek 2016-06-17 16:51:39 UTC
(In reply to jingzhao from comment #15)
> Hi Laszlo
> 
>   I also opened this bug because I reproduced the issue with another
> configuration, with OVMF on Q35:
> 
>   kernel-3.10.0-433.el7.x86_64
>   qemu-kvm-rhev-2.6.0-5.el7.x86_64
>   OVMF-20160608-1.git988715a.el7.noarch
> 
> Boot guest with following command
> 
> /usr/libexec/qemu-kvm \
> -M q35 \
> -cpu SandyBridge \
> -monitor stdio \
> -m 4G \
> -vga qxl \
> -spice port=5932,disable-ticketing \
> -drive file=/usr/share/OVMF/OVMF_CODE.secboot.fd,if=pflash,format=raw,unit=0,readonly=on \
> -drive file=/home/OVMF_VARS.fd,if=pflash,format=raw,unit=1 \
> -debugcon file:/home/q35.ovmf.log \
> -global isa-debugcon.iobase=0x402 \
> -smp 4,sockets=4,cores=1,threads=1 \
> -object memory-backend-ram,size=1024M,id=ram-node0 \
> -numa node,nodeid=0,cpus=0,memdev=ram-node0 \
> -object memory-backend-ram,size=1024M,id=ram-node1 \
> -numa node,nodeid=1,cpus=1,memdev=ram-node1 \
> -object memory-backend-ram,size=1024M,id=ram-node2 \
> -numa node,nodeid=2,cpus=2,memdev=ram-node2 \
> -object memory-backend-ram,size=1024M,id=ram-node3 \
> -numa node,nodeid=3,cpus=3,memdev=ram-node3 \
> -device pxb-pcie,id=bridge1,bus=pcie.0,numa_node=0,bus_nr=8 \
> -device ioh3420,bus=bridge1,id=root1.0,slot=1 \
> -device x3130-upstream,bus=root1.0,id=upstream1.1 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.1,chassis=2 \
> -device virtio-net-pci,bus=downstream1.1,netdev=tap10,mac=9a:6a:6b:6c:6d:6e \
> -netdev tap,id=tap10 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.2,chassis=3 \
> -drive if=none,id=drive0,file=/home/pxb-ovmf.qcow2 \
> -device virtio-blk-pci,drive=drive0,scsi=off,bus=downstream1.2 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.3,chassis=4 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.4,chassis=5 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.5,chassis=6 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.6,chassis=7 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.7,chassis=8 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.8,chassis=9 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.9,chassis=10 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.10,chassis=11 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.11,chassis=12 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.12,chassis=13 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.13,chassis=14 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.14,chassis=15 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.15,chassis=16 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.16,chassis=17 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.17,chassis=18 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.18,chassis=19 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.19,chassis=20 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.20,chassis=21 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.21,chassis=22 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.22,chassis=23 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.23,chassis=24 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.24,chassis=25 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.25,chassis=26 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.26,chassis=27 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.27,chassis=28 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.28,chassis=29 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.29,chassis=30 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.30,chassis=31 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.31,chassis=32 \
> -device xio3130-downstream,bus=upstream1.1,id=downstream1.32,chassis=33 \

On "bridge1", with bus_nr=8, you have:
-  1 ioh3420 device
-  1 x3130-upstream device
- 32 xio3130-downstream devices

The formula from comment 11:

  bus_nr(n + 1) = bus_nr(n) +
                  ioh3420_count(n) +
                  x3130_upstream_count(n) +
                  xio3130_downstream_count(n) +
                  1

suggests 8+1+1+32+1 = 43 for the bus_nr of the next pxb-pcie device ("bridge2"). And indeed, that's what you have:

> -device pxb-pcie,id=bridge2,bus=pcie.0,numa_node=1,bus_nr=43 \

So that's good. Then,

> -device ioh3420,bus=bridge2,id=root2.0,slot=2 \
> -device x3130-upstream,bus=root2.0,id=upstream2.1 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.1,chassis=34 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.2,chassis=35 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.3,chassis=36 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.4,chassis=37 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.5,chassis=38 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.6,chassis=39 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.7,chassis=40 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.8,chassis=41 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.9,chassis=42 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.10,chassis=43 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.11,chassis=44 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.12,chassis=45 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.13,chassis=46 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.14,chassis=47 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.15,chassis=48 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.16,chassis=49 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.17,chassis=50 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.18,chassis=51 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.19,chassis=52 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.20,chassis=53 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.21,chassis=54 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.22,chassis=55 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.23,chassis=56 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.24,chassis=57 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.25,chassis=58 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.26,chassis=59 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.27,chassis=60 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.28,chassis=61 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.29,chassis=62 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.30,chassis=63 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.31,chassis=64 \
> -device xio3130-downstream,bus=upstream2.1,id=downstream2.32,chassis=65 \

On "bridge2" (with bus_nr=43), you again have:
-  1 ioh3420 device
-  1 x3130-upstream device
- 32 xio3130-downstream devices

Applying the formula again,

  bus_nr(n + 1) = bus_nr(n) +
                  ioh3420_count(n) +
                  x3130_upstream_count(n) +
                  xio3130_downstream_count(n) +
                  1

we get 43+1+1+32+1 = 78, for the bus_nr of the next pxb-pcie device ("bridge3").
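The arithmetic above can be sketched as a small helper (hypothetical, not part of QEMU or OVMF; it only illustrates the formula from comment 11):

```python
# Each ioh3420, x3130-upstream, and xio3130-downstream behind a pxb-pcie
# consumes one bus number; the pxb-pcie's own root bus takes one more.
def next_bus_nr(bus_nr, ioh3420_count, upstream_count, downstream_count):
    return bus_nr + ioh3420_count + upstream_count + downstream_count + 1

print(next_bus_nr(8, 1, 1, 32))   # bridge2: 43
print(next_bus_nr(43, 1, 1, 32))  # bridge3: 78, not 76
```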

But, you seem to have miscalculated that:

> -device pxb-pcie,id=bridge3,bus=pcie.0,numa_node=2,bus_nr=76 \

Indeed the OVMF log from comment 16 states:

PciHostBridgeGetRootBridges: 3 extra root buses reported by QEMU
InitRootBridge: populated root bus 0, with room for 7 subordinate bus(es)
InitRootBridge: populated root bus 8, with room for 34 subordinate bus(es)
InitRootBridge: populated root bus 43, with room for 32 subordinate bus(es)
InitRootBridge: populated root bus 76, with room for 179 subordinate bus(es)

On bridge1 you have room for 34 secondary buses (which is correct: 1 root port, 1 upstream port, 32 downstream ports), but on bridge2 you only have room for 32 secondary buses (1 root port, 1 upstream port, 30 (not 32) downstream ports).

So, it seems to me that this is a typo or a miscalculation in your config. If you modify the option to

  -device pxb-pcie,id=bridge3,bus=pcie.0,numa_node=2,bus_nr=78

it should work.
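As a quick sanity check, the whole chain of bus_nr values can be validated against the same formula (a hypothetical helper, just to illustrate; `check_bus_nrs` is not a real QEMU or OVMF facility):

```python
def check_bus_nrs(bus_nrs, devices_per_pxb):
    """Verify each pxb-pcie bus_nr leaves room for the previous bridge's devices.

    bus_nrs: bus_nr of each pxb-pcie, in order.
    devices_per_pxb: (ioh3420, upstream, downstream) counts behind each pxb-pcie.
    """
    for i in range(1, len(bus_nrs)):
        ioh, up, down = devices_per_pxb[i - 1]
        required = bus_nrs[i - 1] + ioh + up + down + 1
        if bus_nrs[i] < required:
            return (False, i, required)
    return (True, None, None)

# Configuration from this report: bridge3's bus_nr=76 is too low.
print(check_bus_nrs([8, 43, 76], [(1, 1, 32), (1, 1, 32)]))  # (False, 2, 78)
# With the corrected bus_nr=78:
print(check_bus_nrs([8, 43, 78], [(1, 1, 32), (1, 1, 32)]))  # (True, None, None)
```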

> -device ioh3420,bus=bridge3,id=root3.0,slot=3 \
> 
> PS: The guest can boot up successfully if I set 2 pxb-pcie devices and 255 buses.

Yes, because in that case you don't have the "bridge3" pxb-pcie device with the incorrect bus_nr=76 property.

Comment 19 jingzhao 2016-06-20 01:57:31 UTC
Thanks, Laszlo.
After correcting the value of bus_nr, the guest boots up successfully.

Thanks 
Jing

Comment 20 Laszlo Ersek 2016-06-20 09:38:46 UTC
Thank you very much for confirming. Hence, I'm closing this as NOTABUG (see comment 8).

Also, as I mentioned earlier, please consider setting bug 1193080 to VERIFIED -- by now you have extensively tested the PXB-related code in OVMF.

Thanks,
Laszlo

Comment 21 Marcel Apfelbaum 2016-06-23 10:02:54 UTC
*** Bug 1345719 has been marked as a duplicate of this bug. ***

Comment 22 Marcel Apfelbaum 2016-08-14 11:54:17 UTC
(In reply to Laszlo Ersek from comment #17)
> (In reply to jingzhao from comment #14)
> 
> > Another question, about the slot parameter of the ioh3420 device: as I
> > understand it, the slot value can be set to 1...32; when changing to the
> > second pxb-pcie, why does the slot value have to keep increasing even
> > though the pxb-pcie changed?
> 

I don't understand the question. The slot (as far as I understand from the PCIe spec, section 6.7.3 "PCI Express Hot-Plug Events") is an 8-bit register that acts as a unique identifier for hot-plug per system, not per bus/bridge/host-bridge.
That means we can have up to 255 hot-pluggable PCIe ports per system.

Thanks,
Marcel

> Sorry, no clue -- I'll leave this to Marcel or Alex. Thanks. (I'm about to
> look into the rest of your comments though.)

