Bug 25291 - RAID start-up in rc.sysinit fails if RAID is loaded as a module
Summary: RAID start-up in rc.sysinit fails if RAID is loaded as a module
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Red Hat Linux
Classification: Retired
Component: initscripts
Version: 6.2
Hardware: i386
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Bill Nottingham
QA Contact: David Lawrence
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2001-01-30 11:23 UTC by Ben North
Modified: 2014-03-17 02:18 UTC (History)
3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2001-03-08 00:27:52 UTC
Embargoed:



Description Ben North 2001-01-30 11:23:28 UTC
This is only a problem for people who have compiled their own kernel, but I
imagine this is quite common.  If you build the various RAID components as
modules and have them loaded automatically, then on boot, the file
/proc/mdstat does not exist.  So the section in rc.sysinit which starts up
RAID doesn't get run, because it is guarded by [ -f /proc/mdstat -a -f
/etc/raidtab ].  The fsck stage then fails on /dev/md0 (for example) if
/dev/md0 has an entry in /etc/fstab.
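The failure mode can be reproduced in isolation; here is a minimal sketch
using stand-in files in a temporary directory (the paths are stand-ins for
the real /proc/mdstat and /etc/raidtab, not the files themselves):

```shell
#!/bin/sh
# Stand-ins in a temp directory: raidtab exists, mdstat does not --
# the state on a modular-RAID kernel before the md module has loaded.
tmp=`mktemp -d`
touch "$tmp/raidtab"

# The rc.sysinit guard, with paths swapped for the stand-ins:
if [ -f "$tmp/mdstat" -a -f "$tmp/raidtab" ]; then
    result=run          # RAID start-up section would execute
else
    result=skipped      # what actually happens: the section is skipped
fi
echo "RAID section: $result"

rm -rf "$tmp"
```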

The patch below addresses this problem by running the RAID section of
rc.sysinit based on the result of [ -f /etc/raidtab ] only.  This seems
reasonable as the existence of that file should be a reliable indicator of
whether there are any RAID sets to start up.  The original test for whether
a particular RAID set is already running uses /proc/mdstat so an additional
test is required for the case that /proc/mdstat does not exist.

Hope this helps.

Yours,

Ben North.

---- 8< ----
--- rc.sysinit.orig	Tue Jan 30 09:56:35 2001
+++ rc.sysinit	Tue Jan 30 10:44:46 2001
@@ -298,7 +298,7 @@
 fi
 
 # Add raid devices
-if [ -f /proc/mdstat -a -f /etc/raidtab ]; then
+if [ -f /etc/raidtab ]; then
 	echo -n "Starting up RAID devices: " 
 
 	rc=0
@@ -306,7 +306,11 @@
 	for i in `grep "^raiddev" /etc/raidtab | awk '{print $2}'`
 	do
 		RAIDDEV=`basename $i`
-		RAIDSTAT=`grep "^$RAIDDEV : active" /proc/mdstat`
+		if [ -f /proc/mdstat ]; then
+			RAIDSTAT=`grep "^$RAIDDEV : active" /proc/mdstat`
+		else
+			RAIDSTAT=""
+		fi
 		if [ -z "$RAIDSTAT" ]; then
 			# Try raidstart first...if that fails then
 			# fall back to raidadd, raidrun.  If that
---- 8< ----
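For reference, the raidtab-parsing loop the patch touches can be exercised
on its own; a sketch with a fabricated raidtab (the two raiddev entries are
invented for illustration, not taken from the report):

```shell
#!/bin/sh
# Fabricated raidtab in a temp dir (contents are an example only).
tmp=`mktemp -d`
cat > "$tmp/raidtab" <<'EOF'
raiddev /dev/md0
    raid-level        1
    nr-raid-disks     2
raiddev /dev/md1
    raid-level        0
    nr-raid-disks     2
EOF

# Same extraction as rc.sysinit: take the device name from each
# "raiddev" line, then strip the /dev/ prefix with basename.
devs=""
for i in `grep "^raiddev" "$tmp/raidtab" | awk '{print $2}'`
do
    RAIDDEV=`basename $i`
    devs="$devs$RAIDDEV "
done
echo "would start: $devs"

rm -rf "$tmp"
```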

Comment 1 Bill Nottingham 2001-01-30 17:52:26 UTC
Will be in 5.60-1 ; thanks!

Comment 2 Pete Zaitcev 2001-03-08 00:27:48 UTC
I understand the requestor's concerns, and they need to be
addressed, but, IMHO, the bug was not fixed completely.
The old script did have its merit.

Consider a scenario: I have an experimental RAID volume,
and I do have /etc/raidtab, but normally I boot a kernel
without MD support. In that case, the "fixed" script goes
inside the outer if, then executes all the raid start programs
(all of them eventually failing), and bombs.

I would suggest 1. reverting to the old script;
2. running some sort of modprobe before the test
for /proc/mdstat and /etc/raidtab. If that fails,
then continue.

This way we split out three different results:
 a missing module [result -> continue]
 a failed RAID [result -> bail to prompt]
 a working RAID [result -> normal start-up]

BTW, Matt has some related problem on devserv, so
adding him to cc:.

-- Pete


Comment 3 Bill Nottingham 2001-03-08 03:20:57 UTC
Sure, will be changed in 5.71-1 to:

# Add raid devices
if [ ! -f /proc/mdstat ]; then
      modprobe md >/dev/null 2>&1
fi

if [ -f /proc/mdstat -a -f /etc/raidtab ]; then
 ...
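A runnable sketch of that control flow, with temp-dir stand-ins where
"touch mdstat" plays the role of a successful `modprobe md` (on a real
system the file appears once the module loads; the paths here are
stand-ins, not the real ones):

```shell
#!/bin/sh
# Stand-in files in a temp dir; creating mdstat simulates a successful
# "modprobe md".  With raidtab present and the module loadable, the
# compound test now passes.
tmp=`mktemp -d`
touch "$tmp/raidtab"

if [ ! -f "$tmp/mdstat" ]; then
    touch "$tmp/mdstat"   # stand-in for: modprobe md >/dev/null 2>&1
fi

if [ -f "$tmp/mdstat" -a -f "$tmp/raidtab" ]; then
    raid_section=runs     # RAID devices get started
else
    raid_section=skipped  # e.g. kernel without MD support: modprobe failed
fi
echo "RAID section $raid_section"

rm -rf "$tmp"
```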

Comment 4 Pete Zaitcev 2001-03-08 18:59:33 UTC
The fix that Bill posted yesterday works on all my configurations.
Personally, I would test -z "$USEMODULES" before running modprobe,
and use /sbin/modprobe -- just to be on the safe side.
I am far from understanding all the issues, though, so it's up
to Bill to decide. Thanks for the fix!
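Pete's guard could look like the following sketch (assumptions: elsewhere
in rc.sysinit, USEMODULES is non-empty only when the kernel supports
loadable modules; the -z test is expressed as its -n inverse, and the
modprobe call is represented by a flag so the sketch runs anywhere):

```shell
#!/bin/sh
# Simulate a kernel built without module support: USEMODULES is empty,
# so the modprobe step is skipped instead of failing noisily.
USEMODULES=""
ran_modprobe=no

if [ -n "$USEMODULES" ]; then
    ran_modprobe=yes  # real script would run: /sbin/modprobe md >/dev/null 2>&1
fi
echo "modprobe attempted: $ran_modprobe"
```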


