Bug 34380 - diskdruid only allows 4 partitions on RAID devices
Summary: diskdruid only allows 4 partitions on RAID devices
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: anaconda
Version: 6.2
Hardware: i386
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Matt Wilson
QA Contact: Brock Organ
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2001-04-02 21:07 UTC by Steven Roberts
Modified: 2007-04-18 16:32 UTC
CC List: 0 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2001-08-21 01:58:47 UTC
Embargoed:



Description Steven Roberts 2001-04-02 21:07:49 UTC
I have mainly tested this with kickstart, but also after dropping manually into diskdruid. I still can't create more than four partitions per system drive.

I am using the Mylex controllers (mainly on VA 2240 boxes), so the devices look like /dev/rd/c0d0 and the partitions like /dev/rd/c0d0p1.
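Just to illustrate the device naming (a rough sketch, not anaconda's actual code; the helper name is made up): disks on these controllers put a "p" between the disk name and the partition number, unlike sda-style disks, which is possibly where partitioning code that assumes the sda1 style goes wrong.

    # Sketch only: how partition device names differ between sda-style disks
    # and controllers like the Mylex (rd/c0d0).
    def partition_name(disk, number):
        # Disk names ending in a digit (rd/c0d0, ida/c0d0, cciss/c0d0)
        # need a "p" separator before the partition number.
        if disk[-1].isdigit():
            return "%sp%d" % (disk, number)
        return "%s%d" % (disk, number)

    assert partition_name("rd/c0d0", 5) == "rd/c0d0p5"
    assert partition_name("sda", 5) == "sda5"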

Bugs addressing similar issues are 20221 and 11698 (those bugs dealt with --ondisk not working).

I have several test boxes available, and if someone can give me a pointer to what in anaconda does the partitioning (I just need the directory to start looking in), I can help debug the problem.

Comment 1 Michael Fulbright 2001-04-02 23:14:07 UTC
Matt, is this related to the other issue we addressed in the last week with your workaround on the device name?

Comment 2 Steven Roberts 2001-04-02 23:40:58 UTC
I'm not sure whether it is related in the code or not, but I was involved in the other bug from this end as well.

Comment 3 Matt Wilson 2001-07-20 20:13:37 UTC
do you see this problem in 7.1?


Comment 4 Steven Roberts 2001-07-21 00:53:31 UTC
We don't currently have any RH 7.1 boxes in house; our servers are standardized on RH 6.2. I will set up one of our test boxes with 7.1 over the next few days. Hopefully I will have the results by the middle of next week.

Comment 5 Matt Wilson 2001-08-09 21:11:59 UTC
ok, thanks for the update.


Comment 6 Brent Fox 2001-08-20 13:49:06 UTC
strobert, do you have any more information to add?

Comment 7 Steven Roberts 2001-08-21 01:58:42 UTC
okay, had a few issues here, but those fires are out.  

news:
 - Well, both good and bad on my test machine. The bad news is I lost the machine I was going to test on; the good news is I lost it because we moved it into production to replace a Solaris box :). And I grabbed another box for testing (but that was a large part of the delay).
 - Fired up the 7.1 installer, but I can't get it to work. I'm trying to do an FTP install, and it doesn't seem to be picking up the EEPro100 cards I have correctly (the box is a VA 2240, which uses the Intel 740 motherboard with a built-in EEPro, and we have a PCI EEPro installed as well). RH 6.2 picks up the cards okay. 7.1 detects both cards (and I like the new install choice of picking which card to install via -- very nice), but it can't receive data correctly. I tried DHCP, and the DHCP server gets the request and sends a response, but the 7.1 setup doesn't see it. I also tried setting the IP settings manually, but still no go. I'll dig more tonight to see if there are any known install issues with EEPros and 7.1, but until I can get to DiskDruid in the install program I can't test the partitioning issue :)

Oh, and two install questions/thoughts:
 - Is the FTP install using passive FTP or something like that? (I verified 6.2 had the problem on this machine before I started on 7.1.) It was forcing the server to not use the data port (20) but some arbitrary high port, which is annoying since it makes firewall rules a pain. (See the quick sketch at the end of this comment for the passive/active difference.)
 - Is there any reason why the timeouts on network actions aren't a little lower in the install? The 30-60 second timeouts have always seemed long when trying to get things working :)

Please let me know if you have any workarounds for getting the EEPros to work on 7.1.
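On the passive-FTP point above, a minimal sketch (nothing to do with the installer itself; the host and path are placeholders) of how a client picks passive versus active data connections -- passive mode is what moves the data connection onto an arbitrary high server port instead of port 20:

    from ftplib import FTP

    # Placeholder host/path, for illustration only.
    ftp = FTP("ftp.example.com")
    ftp.login()                  # anonymous login, like an FTP install would use

    ftp.set_pasv(True)           # passive: server listens on a high port, client connects out
    # ftp.set_pasv(False)        # active: server connects back to the client from port 20
    ftp.retrlines("LIST /pub/redhat")   # list the install tree
    ftp.quit()

With passive turned off, the data connection originates from the server's port 20, which is the easier direction to write a server-side firewall rule for.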

Comment 8 Matt Wilson 2001-08-24 22:16:44 UTC
I know this issue is fixed in Roswell.


