Bug 18231 - Autopartitioning fails with "no free slots" on Mylex Acceleraid (DAC960)
Summary: Autopartitioning fails with "no free slots" on Mylex Acceleraid (DAC960)
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: installer   
Version: 7.0
Hardware: i386
OS: Linux
Target Milestone: ---
Assignee: Matt Wilson
QA Contact: Brock Organ
Duplicates: 22429
Depends On:
Reported: 2000-10-03 16:07 UTC by daniel.deimert
Modified: 2007-04-18 16:28 UTC (History)
0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2000-10-26 22:15:41 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Description daniel.deimert 2000-10-03 16:07:28 UTC
I consider this to be a bug, because installation fails. However, if it
fails by design, consider this an RFE.

I was trying to install Red Hat 7.0 on a server with Mylex AcceleRAID 250
(DAC960PTL1) but ran into some problems.

The kernel from 6.2 had a patch for the DAC960 driver that allowed it to
use 16 partitions on one logical device.  A good thing. The kernel driver
in 7.0 does not appear to have this patch, so that when you create
partitions for /boot, <swap>, /, /var, /tmp and /home - partitioning
fails!  This is bad.

The only error messages given are:
	"autopartitioning failed" (shown even if the partitions were made with text-mode fdisk)
	"failed to allocate /: no free slots"
These messages are only visible on the console on Alt-F3;
the message in Disk Druid is even more terse.

Please consider applying the same kernel patch as in 6.2, or at least change
the error message in the installer to indicate that this is a limitation of
the RAID controller driver.  It wasn't clear to me what "slots" the
messages were referring to, or why the autopartitioning failed.

Comment 1 Michael Fulbright 2000-10-03 18:49:00 UTC
I am looking into the history of the DAC960 support - it appears we currently
restrict you to 7 partitions per device.  I hope to have more information soon.

Comment 2 daniel.deimert 2000-10-04 13:01:56 UTC
It also appears that Disk Druid defaults to creating one primary partition and
all the others as logical partitions inside an extended partition:

1. Entering Disk Druid, if I "Add" /boot, it gets device rd/c0d0p1.
2. Then I add / and it gets device rd/c0d0p5.
3. Add swap, it gets rd/c0d0p6.
4. Add /home, it gets rd/c0d0p7.
5. Add /var -- and I get an error message saying "No free slots".

This means that in practice, using Disk Druid partitioning, DAC960 is
restricted to 4 partitions.

If the logical RAID device has more than 4 partitions created with fdisk prior
to booting the 7.0 installer, the installer does not even start Disk Druid; it
only says "no free slots" and gives two choices: retry or reboot.
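The arithmetic behind the walkthrough above can be sketched as follows. This is a hypothetical Python model, not the actual installer code: the names `allocate`, `MAX_PARTITION_NUMBER`, and `FIRST_LOGICAL` are illustrative assumptions. It assumes DOS-style numbering (primaries 1-4, logicals starting at 5), Disk Druid's default of one primary plus logicals, and the installer's cap on the partition number at 7.

```python
# Hypothetical model of the slot allocation described above (NOT the
# real anaconda/Disk Druid code). DOS partition tables number primaries
# 1-4 and logicals from 5 upward; the installer caps the minor at 7.
MAX_PARTITION_NUMBER = 7   # installer's cap on the partition number
FIRST_LOGICAL = 5          # logical partitions always start at 5

def allocate(mounts, primaries=1):
    """Assign partition numbers the way the default layout does:
    the first `primaries` mounts get primary slots 1..4, the rest
    get logical slots 5, 6, 7, ...  Raise once a slot would exceed
    the cap -- the "no free slots" failure."""
    assigned = {}
    next_primary = 1
    next_logical = FIRST_LOGICAL
    for mount in mounts:
        if next_primary <= primaries:
            number = next_primary
            next_primary += 1
        else:
            number = next_logical
            next_logical += 1
        if number > MAX_PARTITION_NUMBER:
            raise RuntimeError(f"failed to allocate {mount}: no free slots")
        assigned[mount] = f"rd/c0d0p{number}"
    return assigned

# /boot -> p1, / -> p5, swap -> p6, /home -> p7; a fifth mount such as
# /var would need p8 > 7 and fails, so only 4 partitions fit in practice.
print(allocate(["/boot", "/", "swap", "/home"]))
```

With one primary and a cap of 7, the usable slots are 1, 5, 6, and 7, which matches the four-partition limit observed in Disk Druid.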

Comment 3 Michael Fulbright 2000-10-06 21:38:50 UTC
OK, you are correct: the installer currently limits the maximum partition
*number* to 7, not the *number* of partitions.

I am still trying to find out why this limitation exists.

Comment 4 Michael Fulbright 2000-10-06 21:39:15 UTC
Matt do you remember why you added code with this restriction?

Comment 5 Christian Hechelmann 2000-10-19 21:46:09 UTC
That's a Mylex BIOS limitation. See http://www.dandelion.com/Linux/README.DAC960

Comment 6 Michael Fulbright 2000-10-26 22:15:38 UTC
Thanks for the information.  Closing as notabug.

Comment 7 daniel.deimert 2000-10-27 11:02:16 UTC
Please! That was a bit too quick.  Please read my comments above: "This means
that in practice, using Disk Druid partitioning, DAC960 is restricted
to 4 partitions."

The README.DAC960 states that the Mylex is limited to 7 partitions by default.  
But the Red Hat 7 installer further limits the number to 4.  Not seven. FOUR.   

Please consider this bug to be an RFE for the installer to fully support the 7
partitions on the Mylex, not just 4 of them.

Comment 8 Daniel Stone 2000-12-18 01:04:31 UTC
*** Bug 22429 has been marked as a duplicate of this bug. ***
