Bug 34380 - diskdruid only allows 4 partitions on RAID devices
Product: Red Hat Linux
Classification: Retired
Component: anaconda
Hardware: i386 Linux
Priority: medium
Severity: medium
Assigned To: Matt Wilson
QA Contact: Brock Organ
Reported: 2001-04-02 17:07 EDT by Steven Roberts
Modified: 2007-04-18 12:32 EDT (History)

Doc Type: Bug Fix
Last Closed: 2001-08-20 21:58:47 EDT

Attachments: None
Description Steven Roberts 2001-04-02 17:07:49 EDT
I have mainly tested this with kickstart, but I have also dropped manually into diskdruid; either way, I still can't create more than four partitions per system drive.

I am using the Mylex controllers (mainly on VA 2240 boxes), so the devices look like /dev/rd/c0d0 and the partitions look like /dev/rd/c0d0p1.

Bugs addressing similar issues are 20221 and 11698 (those bugs dealt with --ondisk not working).
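
To make it concrete, this is a sketch of the sort of kickstart partition section that hits the limit for me (the mount points and sizes below are only illustrative, not my real layout; the drive name follows the Mylex naming above):

  # hypothetical kickstart excerpt -- the fifth "part" line is where things stop working
  part /boot --size 64 --ondisk rd/c0d0
  part swap --size 256 --ondisk rd/c0d0
  part / --size 1024 --ondisk rd/c0d0
  part /var --size 1024 --ondisk rd/c0d0
  part /home --size 2048 --ondisk rd/c0d0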

I have several test boxes available, and if someone can give me a pointer to what in anaconda does the partitioning (I just need the directory to start looking in), I can help debug the problem.
Comment 1 Michael Fulbright 2001-04-02 19:14:07 EDT
Matt, is this related to the other issue we addressed within the last week with your workaround on the device name?
Comment 2 Steven Roberts 2001-04-02 19:40:58 EDT
Not sure whether it is related in the code, but I was on the other bug as well from this end.
Comment 3 Matt Wilson 2001-07-20 16:13:37 EDT
do you see this problem in 7.1?
Comment 4 Steven Roberts 2001-07-20 20:53:31 EDT
We don't currently have any RH 7.1 boxes in house; our servers are standardized on RH 6.2. I will set up one of our test boxes with 7.1 over the next few days, so hopefully I will have results by the middle of next week.
Comment 5 Matt Wilson 2001-08-09 17:11:59 EDT
ok, thanks for the update.
Comment 6 Brent Fox 2001-08-20 09:49:06 EDT
strobert, do you have any more information to add?
Comment 7 Steven Roberts 2001-08-20 21:58:42 EDT
okay, had a few issues here, but those fires are out.  

 - Well, both good news and bad news on my test machine. The bad news is that I lost the machine I was going to test on; the good news is that I lost it because we moved it into production to replace a Solaris box :). I grabbed another box for testing (but that was a large part of the delay).
 - Fired up the 7.1 installer, but I can't get it to work. I am trying to do an FTP install, and it doesn't seem to be picking up my EEPro100 cards correctly (the box is a VA 2240, which uses the Intel 740 motherboard with a built-in EEPro, and we have a PCI EEPro installed as well). RH 6.2 picks up the cards fine. 7.1 detects both cards (and I like the new option of choosing which card to install over -- very nice), but it can't receive data correctly. I tried DHCP: the DHCP server gets the request and sends a response, but the 7.1 setup never sees it. I also tried setting the IP settings manually, but still no go. I'll dig more tonight to see if there are any known install issues with EEPros and 7.1, but until I can get to DiskDruid in the install program I can't test the partitioning issue :)

Oh, and two install questions/thoughts:
 - Is the FTP install using passive FTP or something like that? (I verified that 6.2 had the same behavior on this machine before I started on 7.1.) It was forcing the server to use some arbitrary high port rather than the data port (20), which is annoying because it makes firewall rules a pain. (See the sketch after these questions for the distinction I mean.)
 - Is there any reason why the timeouts on network actions aren't a little lower in the install? The 30-60 second timeouts have always seemed long when trying to get things working :)
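
For the first question, this is the behavior I mean. It is not the installer's code, just a generic Python ftplib sketch of passive vs. active transfers; the host and path are made up:

  # generic sketch, not installer code -- host and path are made up
  from ftplib import FTP

  ftp = FTP("ftp.example.com")
  ftp.login()                        # anonymous login

  # Passive mode (what the install seems to do): the client opens the data
  # connection to an arbitrary high port the server announces, so port 20 is
  # never the source and "source port 20" firewall rules don't match.
  ftp.set_pasv(True)
  ftp.retrlines("LIST /pub/redhat")

  # Active mode: the server connects back from its data port (20) to the
  # client, which is what my firewall rules expect.
  ftp.set_pasv(False)
  ftp.retrlines("LIST /pub/redhat")

  ftp.quit()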

Please let me know if you have any workarounds for getting the EEPros to work on 7.1.
Comment 8 Matt Wilson 2001-08-24 18:16:44 EDT
I know this issue is fixed in Roswell.
