Red Hat Bugzilla – Bug 81306
RH Installer confused when upgrading S/W Raid-5 drives
Last modified: 2007-04-18 12:49:34 EDT
Description of problem:
I cannot easily upgrade my RHL system because the installer doesn't include
the kernel raid functionality (or something is masking it).
My config is, at present, a Gigabyte 7VRXP mb with 4 Seagate drives, 3 of
which are in a raid-5 array using the software-raid code in the kernel. I
currently use ext3 on top. The mb has 2 onboard controllers: the internal VIA
southbridge and a Promise 20276 (which is raid-capable, but I'm not using that).
I also have a Promise 20267 controller in a PCI slot. The VIA controller has one
Seagate (non-raid) drive and 2 CD-ROMs attached. The 20276 has 2 of the 3
raid drives, each on its own bus. The 20267 has the remaining Seagate drive.
When the installer boots, it tries to mount the raid drives independently,
which of course fails, and then gives up. Because almost all of my system is on
the raid drives, this means I can't use the installer.
Trying to upgrade without the installer is a nightmare!
Is there any way to create a new kernel for the installer to use, so that I can
give it the right configuration and make the upgrade work?
Can RH, in the future, include the raid-0, raid-1 and raid-5 modules in the
installer, preferably with the 'auto-mount' feature enabled?
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Boot the installer on the system described above
2. Watch the kernel fail to find my filesystems
All of the drives are detected (/dev/hda, hdc, hdd, hde, hdg, hdi), but the
raid device isn't there (/dev/md0, md1) and so it falls over.
(On the running system, the raid devices initialise, find the discs and create
the array devices.)
This system has been progressively updated from rh5.2 and is also kept up to
date. Currently running kernel 2.4.20.
Are there any messages on tty3 or tty4 about starting raid? Also, if you switch
to tty2 and run 'raidstart /dev/md0' does it work?
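As a quick check along the lines suggested above, one could run something like
the following from the installer's tty2 shell. This is only a sketch: /dev/md0
is the device name from this report, and raidstart comes from the raidtools
package, which may not be present on the install image.

```shell
# Diagnostic sketch: does the installer kernel's md driver see any arrays?
# If /proc/mdstat is missing, the raid modules were never loaded at all;
# if it exists but lists no arrays, 'raidstart /dev/md0' is the next step.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat        # shows loaded raid personalities and active arrays
else
    echo "md driver not loaded"
fi
```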
The usual raid driver startup message does appear, but nothing more relating
to particular devices. Could it be pining for a valid raidtab in /etc? Could
it be that because I was using devfs, the /mnt/sys/etc/raidtab was using 'the
wrong' device names? If you automounted raid in the kernel, raidtab would be
irrelevant (once created)...
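For context, a raidtab for a 3-disc raid-5 like the one described here would
look roughly like this. The member devices are guesses based on the drive list
above (hde/hdg on the 20276, hdi on the 20267); a devfs system would instead
use the longer /dev/ide/... paths, which is exactly the mismatch being
speculated about.

```
# /etc/raidtab sketch for a 3-disc raid-5 (device names assumed, not confirmed)
raiddev /dev/md0
    raid-level              5
    nr-raid-disks           3
    persistent-superblock   1
    chunk-size              64
    device                  /dev/hde1
    raid-disk               0
    device                  /dev/hdg1
    raid-disk               1
    device                  /dev/hdi1
    raid-disk               2
```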
When I tried to raidstart the driver (having imported an appropriate raidtab),
raidstart failed (I apologise, but I have forgotten the error message). It was
something to do with a mismatch.
I then used the raidstart binary from my system disk (by mounting the disk
under /mnt and invoking it directly) and that worked fine. Sadly, even when I
did a 'raidstart' in an upgrade run of anaconda, before anaconda had asked
about doing the upgrade, anaconda still issued a set of messages to the effect
that the constituent drives of the array were not (individually) valid
filesystems.
As a result of some other problems, notably with glibc and rpm, I have now
given up on the old config and have installed rh8 from scratch. I haven't as
yet got raid working on the new config.
This should be happier in our current codebase.