Bug 1565054 - Failed to start guest OS with more than 144 virtual disks/interfaces with multifunction
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: seabios
Version: 7.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Gerd Hoffmann
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-09 09:17 UTC by chhu
Modified: 2019-03-26 14:44 UTC (History)
14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-13 06:28:19 UTC
Target Upstream Version:
Embargoed:


Attachments
guest xml with multi-disks (51.88 KB, text/plain)
2018-04-09 09:22 UTC, chhu
qemu command line (37.37 KB, text/plain)
2018-04-09 09:28 UTC, chhu
guest log (100.25 KB, text/plain)
2018-04-09 10:19 UTC, chhu
libvirtd log (9.86 MB, text/plain)
2018-04-09 10:20 UTC, chhu

Description chhu 2018-04-09 09:17:39 UTC
Description of problem:
Failed to start guest OS with more than 144 virtual disks/interfaces with multifunction

Version-Release number of selected component (if applicable):
libvirt-3.9.0-14.el7_5.2.x86_64
qemu-kvm-rhev-2.10.0-21.el7_5.1.x86_64
kernel-3.10.0-861.el7.x86_64
Guest kernel:  kernel-3.10.0-861.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create a guest with 143 virtual disks

2. Login to the guest console, check there are 143 virtual disks
# fdisk -l | grep vd | wc -l
145    (the count includes two partitions, /dev/vd*1 and /dev/vd*2)

3. Destroy and undefine the guest.

4. Add another disk to the guest XML, then define and start the guest successfully.

5. Failed to log in to the guest console; there is no error in libvirtd.log or the guest log.

6. In virt-manager, the guest console shows (see no-bootable.png):
"SeaBIOS(version 1.11.0-2.el7)
Machine UUID ***
iPXE(http://ipxe.org) 00:03.0 C980 PCI2.10 PnP PMM+BFF919E0+BFEF19E0 C980
No bootable device".

7. Add the XML below to the guest, start the guest, and check the SeaBIOS output:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
  <qemu:commandline>
     <qemu:arg value='-chardev'/>
     <qemu:arg value='socket,id=seabioslog_id,path=/tmp/seabios,server,nowait'/>
     <qemu:arg value='-device'/>
     <qemu:arg value='isa-debugcon,chardev=seabioslog_id,iobase=0x402'/>
  </qemu:commandline>
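With this chardev, QEMU is the listening side (the 'server' option), so the debug stream is captured by connecting a client to the UNIX socket. A sketch using socat (the log path matches the tail command below; the exact capture method used by the reporter isn't recorded here):

```shell
# Connect to QEMU's listening chardev socket at /tmp/seabios and append
# the SeaBIOS debug stream to a log file while echoing it to stdout.
socat -u UNIX-CONNECT:/tmp/seabios - | tee -a /tmp/seabios.log
```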

# tail -f /tmp/seabios.log 
  1: 000000000009fc00 - 00000000000a0000 = 2 RESERVED
  2: 00000000000f0000 - 0000000000100000 = 2 RESERVED
  3: 0000000000100000 - 00000000bffc0000 = 1 RAM
  4: 00000000bffc0000 - 00000000c0000000 = 2 RESERVED
  5: 00000000feffc000 - 00000000ff000000 = 2 RESERVED
  6: 00000000fffc0000 - 0000000100000000 = 2 RESERVED
  7: 0000000100000000 - 0000000793600000 = 1 RAM
enter handle_19:
  NULL
No bootable device.
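A domain XML with this many disks is easier to generate than to hand-edit. A minimal bash sketch (the qcow2 image paths and the virtio bus are illustrative assumptions; the actual reproducer XML is attached as r7.xml):

```shell
#!/bin/bash
# Sketch: print <disk> elements for N virtio disks, with targets named
# vda, vdb, ..., vdz, vdaa, ... in libvirt order. Paths are hypothetical.
gen_disks() {
    local n=0 dev
    for dev in vd{a..z} vd{a..z}{a..z}; do
        [ "$n" -ge "$1" ] && return
        cat <<EOF
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/disk$n.qcow2'/>
      <target dev='$dev' bus='virtio'/>
    </disk>
EOF
        n=$((n + 1))
    done
}

gen_disks 143
```

The emitted elements go inside the guest's &lt;devices&gt; section; libvirt assigns the PCI (multifunction) addresses itself when none are given.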

Actual results:
In steps 5-6: failed to log in to the guest.

Expected results:
In steps 5-6: can log in to the guest console and check the disks in the guest.

Additional info:
- Starting a guest with 144 virtual interfaces hits the same issue.
guest xml: r7.xml
libvirtd.log
guest log

Comment 2 chhu 2018-04-09 09:22:33 UTC
Created attachment 1419160 [details]
guest xml with multi-disks

Comment 3 chhu 2018-04-09 09:28:05 UTC
Created attachment 1419163 [details]
qemu command line

Comment 4 chhu 2018-04-09 10:19:33 UTC
Created attachment 1419177 [details]
guest log

Comment 5 chhu 2018-04-09 10:20:34 UTC
Created attachment 1419179 [details]
libvirtd log

Comment 6 Ademar Reis 2018-04-12 14:18:17 UTC
Is this a limitation in Seabios?

Comment 7 Gerd Hoffmann 2018-04-13 06:28:19 UTC
(In reply to Ademar Reis from comment #6)
> Is this a limitation in Seabios?

Yes.  Due to allocations in real-mode address space (< 1M), SeaBIOS can't handle huge numbers of disks.  The 1.11 rebase improved things a bit by moving some of the allocations to high memory, but we can't move everything due to BIOS interface constraints.

144 disks already is _way_ above the documented limitations -> NOTABUG.

The recommended way to handle more disks: use OVMF.
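For reference, switching a guest to OVMF means pointing the domain XML at the OVMF firmware images. A sketch (the pflash paths are the typical locations from the OVMF package; the nvram file name is hypothetical):

```xml
<os>
  <type arch='x86_64' machine='pc'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/guest_VARS.fd</nvram>
</os>
```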

