Bug 81306 - RH Installer confused when upgrading S/W Raid-5 drives
Product: Red Hat Linux
Classification: Retired
Component: anaconda
Hardware: i386 Linux
Priority: medium
Severity: medium
Assigned To: Jeremy Katz
QA Contact: Mike McLean
Reported: 2003-01-07 17:08 EST by Ruth Ivimey-Cook
Modified: 2007-04-18 12:49 EDT

Last Closed: 2003-02-11 02:44:32 EST

Description Ruth Ivimey-Cook 2003-01-07 17:08:48 EST
Description of problem:
I cannot easily upgrade my RHL system because the installer doesn't include 
the kernel raid functionality (or something is masking it).

My config is, at present, a Gigabyte 7VRXP motherboard with 4 Seagate drives, 
3 of which are in a raid-5 array using the software-raid code in the kernel. I 
currently use ext3 on top. The motherboard has 2 controllers: the internal VIA 
southbridge and an onboard Promise 20276 (which is raid-capable, but I'm not 
using that feature). I also have a Promise 20267 controller in a PCI slot. The 
VIA controller has one Seagate (non-raid) drive and 2 CD-ROMs attached. The 
20276 has 2 of the 3 raid drives, each on its own bus. The 20267 has the 
remaining Seagate drive.

When the installer boots, it tries to mount the raid drives independently, 
which of course fails, and then it gives up. Because almost all of my system 
is on the raid drives, this means I can't use the installer.

Trying to upgrade without the installer is a nightmare!

Is there any way to create a new kernel for the installer to use, so I can 
give it the right configuration and make the upgrade work?

Can RH, in the future, include the raid-0, raid-1 and raid-5 modules in the 
installer, preferably with the 'auto-mount' feature enabled?
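
For illustration, a manual workaround from the installer's shell console would 
look something like the following. The device names, mount point and binary 
path here are assumptions; the idea is simply to borrow raidtab and raidstart 
from the existing (non-raid) system disk and assemble the array by hand:

    # on the shell console (tty2), with the old root's /etc reachable:
    mount /dev/hda1 /mnt              # hypothetical: whichever non-raid disc holds /etc and /sbin
    cp /mnt/etc/raidtab /etc/raidtab  # raidstart needs a raidtab describing md0/md1
    /mnt/sbin/raidstart /dev/md0      # assemble the raid-5 array from its constituent drives
    cat /proc/mdstat                  # confirm md0 is now listed as an active raid5 device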

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Boot the installer and select an upgrade of the existing system
2. Watch the kernel fail to find my filesystems
Actual results:
All of the drives are detected (/dev/hda, hdc, hdd, hde, hdg, hdi), but the 
raid devices aren't there (/dev/md0, md1), so the upgrade falls over.

Expected results:
The raid driver initialises, finds the discs and assembles the md drives.
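
For reference, a successfully assembled array would show up in /proc/mdstat 
roughly as below; the device names and block count are only an assumed example 
for a 3-disc raid-5 set:

    Personalities : [raid5]
    md0 : active raid5 hdi1[2] hdg1[1] hde1[0]
          35566336 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
    unused devices: <none>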

Additional info:
This system has been progressively updated from rh5.2 and is also updated with 
non-rpm stuff.
Currently running kernel 2.4.20.
Comment 1 Jeremy Katz 2003-01-08 16:03:32 EST
Are there any messages on tty3 or tty4 about starting raid?  Also, if you switch
to tty2 and run 'raidstart /dev/md0' does it work?
Comment 2 Ruth Ivimey-Cook 2003-01-15 12:39:01 EST
The usual raid driver startup message does appear, but nothing more relating 
to particular devices. Could it be pining for a valid raidtab in /etc? Could 
it be that, because I was using devfs, the /mnt/sys/etc/raidtab was using 'the 
wrong' device names? If raid were automounted in the kernel, raidtab would be 
irrelevant (once the array had been created)...
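
For illustration, a raidtab for a 3-disc raid-5 set written with conventional 
device names would look roughly like this (the paths and chunk size are 
assumptions); one generated on a devfs system would instead carry long 
/dev/ide/... style names, which the installer's non-devfs /dev would not match:

    raiddev                   /dev/md0
        raid-level            5
        nr-raid-disks         3
        nr-spare-disks        0
        persistent-superblock 1
        chunk-size            64
        device                /dev/hde1
        raid-disk             0
        device                /dev/hdg1
        raid-disk             1
        device                /dev/hdi1
        raid-disk             2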

When I tried to raidstart the driver (having imported an appropriate raidtab), 
raidstart failed (I apologise, but I have forgotten the error message). It was 
something to do with a mismatch.

I then used the raidstart binary from my system disk (by mounting the disk 
under /mnt and invoking it directly) and that worked fine. Sadly, even when I 
did a 'raidstart' in an upgrade run of anaconda, before anaconda had asked 
about doing the upgrade, it still issued a set of messages to the effect that 
the constituent drives of the array were not (individually) valid.
As a result of some other problems, notably with glibc and rpm, I have now 
given up on the old config and have installed rh8 from scratch. I haven't as 
yet got raid working on the new config.
Comment 3 Jeremy Katz 2003-02-11 02:44:32 EST
This should be happier in our current codebase.
