Bug 1565054 - Failed to start guest OS with more than 144 virtual disks/interfaces with multifunction
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: seabios
Version: 7.6
Hardware: x86_64
OS: Linux
Target Milestone: rc
Assignee: Gerd Hoffmann
QA Contact: Virtualization Bugs
Depends On:
Reported: 2018-04-09 09:17 UTC by chhu
Modified: 2019-03-26 14:44 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2018-04-13 06:28:19 UTC
Target Upstream Version:

Attachments (Terms of Use)
- guest xml with multi-disks (51.88 KB, text/plain), 2018-04-09 09:22 UTC, chhu
- qemu command line (37.37 KB, text/plain), 2018-04-09 09:28 UTC, chhu
- guest log (100.25 KB, text/plain), 2018-04-09 10:19 UTC, chhu
- libvirtd log (9.86 MB, text/plain), 2018-04-09 10:20 UTC, chhu

Description chhu 2018-04-09 09:17:39 UTC
Description of problem:
Failed to start guest OS with more than 144 virtual disks/interfaces with multifunction

Version-Release number of selected component (if applicable):
Guest kernel:  kernel-3.10.0-861.el7.x86_64

How reproducible:

Steps to Reproduce:
1. Create a guest with 143 virtual disks
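With multifunction, each PCI slot can carry up to eight device functions, which is how a guest reaches well past the usual per-slot limit. A disk entry in the domain XML might look like the sketch below; the bus/slot values and image paths are illustrative, not copied from the attached r7.xml:

```xml
<!-- Function 0x0 of a slot carries multifunction='on' and enables
     the remaining functions (0x1-0x7) on that slot. -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/disk001.qcow2'/>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0' multifunction='on'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/disk002.qcow2'/>
  <target dev='vdc' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x1'/>
</disk>
```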

2. Log in to the guest console and check that there are 143 virtual disks:
# fdisk -l | grep vd | wc -l
(the count includes partition entries such as /dev/vd*1 and /dev/vd*2)
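To count only whole virtio disks, excluding partition entries like vda1/vda2, a one-liner along these lines can be used instead (assumes lsblk is available in the guest):

```shell
# -d lists only whole devices (no partitions); -n drops the header line
lsblk -dn -o NAME | grep -c '^vd'
```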

3. Destroy and undefine the guest.

4. Add another disk to the guest xml, then define and start the guest; the start succeeds.

5. Fail to log in to the guest console; there are no errors in libvirtd.log or the guest log.

6. Check in virt-manager: the guest console shows (screenshot: no-bootable.png)
"SeaBIOS(version 1.11.0-2.el7)
Machine UUID ***
iPXE(http://ipxe.org) 00:03.0 C980 PCI2.10 PnP PMM+BFF919E0+BFEF19E0 C980
No bootable device".

7. Add the XML below to the guest (the qemu:arg elements must sit inside a <qemu:commandline> element), then start the guest and check the seabios output:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:commandline>
    <qemu:arg value='-chardev'/>
    <qemu:arg value='socket,id=seabioslog_id,path=/tmp/seabios,server,nowait'/>
    <qemu:arg value='-device'/>
    <qemu:arg value='isa-debugcon,chardev=seabioslog_id,iobase=0x402'/>
  </qemu:commandline>
</domain>

# tail -f /tmp/seabios.log 
  1: 000000000009fc00 - 00000000000a0000 = 2 RESERVED
  2: 00000000000f0000 - 0000000000100000 = 2 RESERVED
  3: 0000000000100000 - 00000000bffc0000 = 1 RAM
  4: 00000000bffc0000 - 00000000c0000000 = 2 RESERVED
  5: 00000000feffc000 - 00000000ff000000 = 2 RESERVED
  6: 00000000fffc0000 - 0000000100000000 = 2 RESERVED
  7: 0000000100000000 - 0000000793600000 = 1 RAM
enter handle_19:
No bootable device.

Actual results:
In steps 5 and 6: failed to log in to the guest.

Expected results:
In steps 5 and 6: can log in to the guest console and check the disks in the guest.

Additional info:
- Starting a guest with 144 virtual interfaces hits the same issue.
guest xml: r7.xml
guest log

Comment 2 chhu 2018-04-09 09:22:33 UTC
Created attachment 1419160 [details]
guest xml with multi-disks

Comment 3 chhu 2018-04-09 09:28:05 UTC
Created attachment 1419163 [details]
qemu command line

Comment 4 chhu 2018-04-09 10:19:33 UTC
Created attachment 1419177 [details]
guest log

Comment 5 chhu 2018-04-09 10:20:34 UTC
Created attachment 1419179 [details]
libvirtd log

Comment 6 Ademar Reis 2018-04-12 14:18:17 UTC
Is this a limitation in Seabios?

Comment 7 Gerd Hoffmann 2018-04-13 06:28:19 UTC
(In reply to Ademar Reis from comment #6)
> Is this a limitation in Seabios?

Yes.  Due to allocations in the real-mode address space (< 1M), seabios can't handle huge numbers of disks.  The 1.11 rebase improved things a bit by moving some of the allocations to high memory, but we can't move everything due to BIOS interface constraints.

144 disks already is _way_ above the documented limitations -> NOTABUG.

The recommended way to handle more disks: use OVMF.
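For reference, switching a guest to OVMF is typically done through the <loader> element in the domain XML. A minimal sketch follows; the firmware and NVRAM paths vary by distribution (on RHEL 7 the OVMF binaries usually live under /usr/share/OVMF/), and the guest name in the nvram path is illustrative:

```xml
<os>
  <type arch='x86_64'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/guest_VARS.fd</nvram>
</os>
```

Unlike SeaBIOS, OVMF does not depend on the real-mode (< 1M) address space for its allocations, which is why it scales to larger disk counts.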
