Description of problem:
The rc.sysinit script in Red Hat Linux 9 fails to activate a RAID 1+0
configuration and drops to the sulogin shell for RAID repair.
I have the following setup in my /etc/raidtab:
md0: hdi1 + hde1 (raid1)
md1: hdk1 + hdh1 (raid1)
md2: md0 + md1 (raid0)
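For reference, that layout corresponds to a raidtab along these lines (a sketch of the structure, not the attached file; the chunk-size and superblock settings are illustrative):

```
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              64
    device                  /dev/hdi1
    raid-disk               0
    device                  /dev/hde1
    raid-disk               1

raiddev /dev/md1
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              64
    device                  /dev/hdk1
    raid-disk               0
    device                  /dev/hdh1
    raid-disk               1

raiddev /dev/md2
    raid-level              0
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              64
    device                  /dev/md0
    raid-disk               0
    device                  /dev/md1
    raid-disk               1
```

Note that md2's component devices are themselves md devices, which is exactly the nesting rc.sysinit's fstab scan can't see.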
rc.sysinit skips the "raidstart" for md0 and md1, because it does not
find either one referenced in /etc/fstab. It then tries to do
raidstart /dev/md2, which of course fails, because md0 and md1 are
still inactive. The correct solution would be to give up the suboptimal
parsing of /etc/fstab and implement a full tree search over all md and LVM
devices, but this can be quite expensive. I think it would be better to
add a "noauto"-like option to /etc/raidtab (raidtools) and simply do
"raidstart -a" from rc.sysinit.
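To illustrate the failure mode, here is a hedged sketch (not the literal RH9 rc.sysinit code) of an fstab-driven scan like the one described above; the temp file and variable names are mine:

```shell
# Sketch: only md devices that appear in /etc/fstab get "started",
# so the inner mirrors /dev/md0 and /dev/md1 are skipped entirely.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
# device    mountpoint  fstype  options   dump pass
/dev/md2    /data       ext3    defaults  1    2
EOF
started=""
while read dev rest; do
    case "$dev" in
        /dev/md*) started="$started $dev" ;;   # would run: raidstart $dev
    esac
done < "$fstab"
rm -f "$fstab"
echo "arrays started:$started"
```

This prints "arrays started: /dev/md2": the stripe is started without its member mirrors ever being activated, so the raidstart fails.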
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. use the attached /etc/raidtab
2. add /dev/md2 to /etc/fstab
3. reboot your system
Actual results:
rc.sysinit bails out to the login prompt

Expected results:
normal system boot
I will attach my /etc/raidtab to this bugreport in a moment.
Created attachment 92775
The same problem affects LVM + RAID setups doing RAID 1+0.
Say you make a volume group of two RAID devices. Since the RAID devices
themselves don't contain a filesystem and hence aren't in the filesystem table,
they'll never be started. Mounting the logical volume then fails because the
devices that are part of the volume group don't exist.
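The failing sequence can be sketched like this (hypothetical device and volume names; vg0/lv0 and the sizes are illustrative, not from this report):

```
# Two md mirrors assembled into a volume group:
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1
lvcreate -L 10G -n lv0 vg0

# /etc/fstab then references only the logical volume:
#   /dev/vg0/lv0  /data  ext3  defaults  1 2
# Neither /dev/md0 nor /dev/md1 appears in fstab, so rc.sysinit never
# runs raidstart for them, and activating/mounting the LV fails at boot.
```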
Good one marking this as high. As well as the reasons above, we have an optional
lab in the RH133 course where students can set this up. Kinda embarrassing when
it doesn't work.
Any progress on this bug? It's almost two years old and is still a problem with
RHEL3U4, Fedora Core 3, and presumably RHEL4 since it is based on FC3.
Closing bugs on older, no longer supported, releases. Apologies for any lack of
response.
This shouldn't be the same issue on FC3 and later, as mdadm is used instead of
raidtab and raidtools. If this persists there, please open a new issue.