Bug 987322 - fail to boot guest when attach more than 4 devices to the same pcie switch
Summary: fail to boot guest when attach more than 4 devices to the same pcie switch
Keywords:
Status: CLOSED DUPLICATE of bug 1055832
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: seabios
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
: ---
Assignee: Vlad Yasevich
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-07-23 09:02 UTC by FuXiangChun
Modified: 2014-01-29 18:00 UTC
CC: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-01-29 18:00:28 UTC
Target Upstream Version:
Embargoed:


Attachments
guest screenshot (12.00 KB, image/png)
2013-07-23 09:48 UTC, FuXiangChun
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1056354 0 medium CLOSED seabios can not recognize system disk with more than 1 downstream 2021-02-22 00:41:40 UTC

Internal Links: 1056354

Description FuXiangChun 2013-07-23 09:02:53 UTC
Description of problem:
The guest works well when booted with 3 devices attached to the switch. If more than 3 devices are attached to the switch, the guest fails to boot; a Windows guest hits the same issue as well. I will attach the guest's screenshot and serial console output.

Version-Release number of selected component (if applicable):
qemu-kvm:
qemu-kvm-1.5.0-2.el7.x86_64
host and guest kernel:
3.10.0-2.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Attach 4 devices to the same switch:
/usr/libexec/qemu-kvm -name 'virtio-network' -nodefaults -m 4G -smp 4,cores=2,threads=2,sockets=1 -M q35 -cpu Opteron_G2 -rtc base=utc,clock=host,driftfix=slew -k en-us -boot menu=on -monitor stdio -vnc :1 -spice disable-ticketing,port=5931 -vga qxl -qmp tcp:0:5555,server,nowait \
-device ioh3420,bus=pcie.0,id=root.0 \
-device x3130-upstream,bus=root.0,id=upstream \
-device xio3130-downstream,bus=upstream,id=downstream0,chassis=1 \
-device x3130-upstream,bus=downstream0,id=upstream1 \
-device xio3130-downstream,bus=upstream1,id=downstream1,chassis=2 \
-device xio3130-downstream,bus=upstream1,id=downstream2,chassis=3 \
-device xio3130-downstream,bus=upstream1,id=downstream3,chassis=4 \
-device xio3130-downstream,bus=upstream1,id=downstream4,chassis=5 \
-device xio3130-downstream,bus=upstream1,id=downstream5,chassis=6 \
-device xio3130-downstream,bus=upstream1,id=downstream6,chassis=7 \
-drive file=/home/guest-rhel7.0-64.qcow3,if=none,id=drive-scsi-disk,format=raw,cache=none,werror=stop,rerror=stop \
-device virtio-scsi-pci,id=scsi0,bus=downstream1 \
-device scsi-disk,drive=drive-scsi-disk,bus=scsi0.0,scsi-id=0,lun=0,id=scsi-disk,bootindex=1 \
-device virtio-net-pci,netdev=fuxc,mac=00:24:21:7f:0d:10,id=n1,bus=downstream2,mq=on,vectors=9 \
-netdev tap,id=fuxc,vhost=on,script=/etc/qemu-ifup,queues=8 \
-device virtio-net-pci,netdev=fuxc1,mac=00:24:21:7f:0d:11,id=n2,bus=downstream3,mq=on,vectors=17 \
-netdev tap,id=fuxc1,vhost=on,script=/etc/qemu-ifup,queues=8 \
-device virtio-net-pci,netdev=fuxc2,mac=00:24:21:7f:0d:12,id=n3,bus=downstream4,mq=on,vectors=25,status=off \
-netdev tap,id=fuxc2,vhost=on,script=/etc/qemu-ifup,queues=8 \
-device sga -chardev socket,id=serial0,path=/var/test1,server,nowait -device isa-serial,chardev=serial0

2. nc -U /var/test1
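The long run of xio3130-downstream devices in the command line above is repetitive; it can be generated with a small shell helper. This is only a sketch — the gen_downstream_ports name and the port count passed to it are illustrative, not part of the original reproducer:

```shell
#!/bin/sh
# Emit one xio3130-downstream -device argument per port, all hanging
# off the same x3130-upstream (id=upstream1), matching the reproducer.
# Chassis numbers start at 2 because chassis=1 is taken by the
# downstream port directly below the root port (id=downstream0).
gen_downstream_ports() {
    count=$1
    i=1
    while [ "$i" -le "$count" ]; do
        printf '%s ' "-device xio3130-downstream,bus=upstream1,id=downstream${i},chassis=$((i + 1))"
        i=$((i + 1))
    done
}

# Six downstream ports, as in the failing configuration:
gen_downstream_ports 6
echo
```

The helper's output can be spliced into the qemu-kvm command line in place of the six hand-written -device arguments.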

Actual results:
Guest fails to boot.

Expected results:
Guest boots successfully.

Additional info:

Comment 2 FuXiangChun 2013-07-23 09:47:04 UTC
Re-tested this issue. For a Linux guest, boot fails when more than 4 devices are attached to the same switch. For a Windows guest, boot fails with more than 3. This is the new qemu-kvm command line:

/usr/libexec/qemu-kvm -name 'virtio-network' -nodefaults -m 4G -smp 4,cores=2,threads=2,sockets=1 -M q35 -cpu Opteron_G2 -rtc base=utc,clock=host,driftfix=slew -k en-us -boot menu=on -monitor stdio -vnc :1 -spice disable-ticketing,port=5931 -qmp tcp:0:5555,server,nowait -vga qxl \
-device ioh3420,bus=pcie.0,id=root.0 \
-device x3130-upstream,bus=root.0,id=upstream \
-device xio3130-downstream,bus=upstream,id=downstream0,chassis=1 \
-device x3130-upstream,bus=downstream0,id=upstream1 \
-device xio3130-downstream,bus=upstream1,id=downstream1,chassis=2 \
-device xio3130-downstream,bus=upstream1,id=downstream2,chassis=3 \
-device xio3130-downstream,bus=upstream1,id=downstream3,chassis=4 \
-device xio3130-downstream,bus=upstream1,id=downstream4,chassis=5 \
-device xio3130-downstream,bus=upstream1,id=downstream5,chassis=6 \
-device xio3130-downstream,bus=upstream1,id=downstream6,chassis=7 \
-drive file=/home/guest-rhel7.0-64.qcow3,if=none,id=drive-scsi-disk,format=qcow2,cache=none,werror=stop,rerror=stop \
-device virtio-scsi-pci,id=scsi0,bus=downstream1 \
-device scsi-disk,drive=drive-scsi-disk,bus=scsi0.0,scsi-id=0,lun=0,id=scsi-disk,bootindex=1 \
-device virtio-net-pci,netdev=fuxc,mac=00:24:21:7f:0d:10,id=n1,bus=downstream2 \
-netdev tap,id=fuxc,vhost=on,script=/etc/qemu-ifup \
-device virtio-net-pci,netdev=fuxc1,mac=00:24:21:7f:0d:11,id=n2,bus=downstream3 \
-netdev tap,id=fuxc1,vhost=on,script=/etc/qemu-ifup \
-device virtio-net-pci,netdev=fuxc2,mac=00:24:21:7f:0d:12,id=n3,bus=downstream4 \
-netdev tap,id=fuxc2,vhost=on,script=/etc/qemu-ifup \
-device virtio-net-pci,netdev=fuxc3,mac=00:24:21:7f:0d:13,id=n4,bus=downstream5 \
-netdev tap,id=fuxc3,vhost=on,script=/etc/qemu-ifup \
-device sga -chardev socket,id=serial0,path=/var/test1,server,nowait -device isa-serial,chardev=serial0

Comment 3 FuXiangChun 2013-07-23 09:48:03 UTC
Created attachment 777275 [details]
guest screenshot

Comment 4 Michael S. Tsirkin 2014-01-14 08:52:26 UTC
could this be re-tested on a recent kernel please?
also, does this happen with piix and not q35?

Comment 5 Vlad Yasevich 2014-01-16 19:12:00 UTC
Please re-test with recent kernel and qemu packages.  Attempt to reproduce this
on the following environment failed.

Kernel: 3.10.0-71.el7.x86_64
qemu: qemu-kvm-1.5.3-37.el7.x86_64

Thanks
-vlad

Comment 6 FuXiangChun 2014-01-17 01:53:32 UTC
Re-tested this bug with qemu-kvm-1.5.3-37.el7.x86_64 and 3.10.0-73.el7.x86_64.

Test steps as in comment 2.

Result: guest fails to boot when more than 4 devices are attached to the same switch.

So this bug can still be reproduced with the latest qemu-kvm and kernel.

Comment 7 Vlad Yasevich 2014-01-22 16:54:41 UTC
This only happens with q35. It also only happens when you have more than 5 devices, each hanging off its own PCIe downstream port. In the reproducer from Comment 2, we have 5 devices, each using a bus from downstream1-5. The fact that some of them are virtio-net and others are virtio-scsi doesn't seem to matter. The SCSI device isn't detected either.

Interestingly, connecting them all to a single downstream bus makes things work again. Even distributing them across 4 downstream buses still works. It's when we hit downstream5 that we have a problem.
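The threshold behavior described here (works up to 4 downstream buses, fails at the fifth) could be bisected by regenerating the guest command line with an increasing port count and launching each candidate by hand. A dry-run sketch — it only prints trimmed command lines and never starts qemu, and the reduced flag subset shown is an assumption, not the full reproducer:

```shell
#!/bin/sh
# Print one trimmed q35 command line per downstream-port count (1..6),
# so each can be launched manually to find where boot starts failing.
for n in 1 2 3 4 5 6; do
    ports=""
    i=1
    while [ "$i" -le "$n" ]; do
        # chassis starts at 2; chassis=1 belongs to downstream0 below
        ports="$ports -device xio3130-downstream,bus=upstream1,id=downstream$i,chassis=$((i + 1))"
        i=$((i + 1))
    done
    echo "/usr/libexec/qemu-kvm -M q35 -m 4G -nodefaults" \
         "-device ioh3420,bus=pcie.0,id=root.0" \
         "-device x3130-upstream,bus=root.0,id=upstream" \
         "-device xio3130-downstream,bus=upstream,id=downstream0,chassis=1" \
         "-device x3130-upstream,bus=downstream0,id=upstream1$ports"
done
```

Attaching one guest device per generated downstream port and running the printed lines in order should reproduce the 4-vs-5 boundary observed above.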

After looking at what iPXE does, it looks like it is trying to access the vq. Attempting to trap in virtio_pci_config_write() succeeds when using a working configuration and fails when using a non-working setup. It looks like when there are 5 downstream buses in use, qemu doesn't correctly locate the virtio memory region. Currently trying to debug this and determine where the memory access ends up; it isn't virtio as it's supposed to be.

-vlad

Comment 8 Vlad Yasevich 2014-01-29 18:00:28 UTC
Verified that the solution to Bug 1055832 also solves this issue. Changing component to seabios and marking as a duplicate.

*** This bug has been marked as a duplicate of bug 1055832 ***

