Bug 25291 - RAID start-up in rc.sysinit fails if RAID is loaded as a module
Status: CLOSED RAWHIDE
Product: Red Hat Linux
Classification: Retired
Component: initscripts
Version: 6.2
Platform: i386 Linux
Priority: medium   Severity: medium
Assigned To: Bill Nottingham
QA Contact: David Lawrence
Reported: 2001-01-30 06:23 EST by Ben North
Modified: 2014-03-16 22:18 EDT
CC: 3 users

Doc Type: Bug Fix
Last Closed: 2001-03-07 19:27:52 EST

Attachments: None
Description Ben North 2001-01-30 06:23:28 EST
This is only a problem for people who have compiled their own kernel, but I
imagine that is quite common.  If you build the various RAID components as
modules and have them loaded automatically, then at boot time the file
/proc/mdstat does not exist yet.  As a result, the section of rc.sysinit
which starts up RAID never runs, because it tests
[ -f /proc/mdstat -a -f /etc/raidtab ].  The fsck stage then fails on
/dev/md0 (for example) if /dev/md0 has an entry in /etc/fstab.
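
To illustrate (a minimal sketch, assuming md is compiled as a module and
not yet loaded, so /proc/mdstat is absent):

# With md modular and not yet loaded, /proc/mdstat does not exist:
$ test -f /proc/mdstat && echo present || echo absent
absent
# ...so the guard in rc.sysinit is false and the RAID section is skipped:
$ [ -f /proc/mdstat -a -f /etc/raidtab ] && echo "would start RAID"
$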

The patch below addresses this problem by running the RAID section of
rc.sysinit based on the result of [ -f /etc/raidtab ] alone.  This seems
reasonable, as the existence of that file should be a reliable indicator of
whether there are any RAID sets to start up.  The original test for whether
a particular RAID set is already running uses /proc/mdstat, so an additional
test is required for the case where /proc/mdstat does not exist.

Hope this helps.

Yours,

Ben North.

---- 8< ----
--- rc.sysinit.orig	Tue Jan 30 09:56:35 2001
+++ rc.sysinit	Tue Jan 30 10:44:46 2001
@@ -298,7 +298,7 @@
 fi
 
 # Add raid devices
-if [ -f /proc/mdstat -a -f /etc/raidtab ]; then
+if [ -f /etc/raidtab ]; then
 	echo -n "Starting up RAID devices: " 
 
 	rc=0
@@ -306,7 +306,11 @@
 	for i in `grep "^raiddev" /etc/raidtab | awk '{print $2}'`
 	do
 		RAIDDEV=`basename $i`
-		RAIDSTAT=`grep "^$RAIDDEV : active" /proc/mdstat`
+		if [ -f /proc/mdstat ]; then
+			RAIDSTAT=`grep "^$RAIDDEV : active" /proc/mdstat`
+		else
+			RAIDSTAT=""
+		fi
 		if [ -z "$RAIDSTAT" ]; then
 			# Try raidstart first...if that fails then
 			# fall back to raidadd, raidrun.  If that
---- 8< ----
Comment 1 Bill Nottingham 2001-01-30 12:52:26 EST
Will be in 5.60-1 ; thanks!
Comment 2 Pete Zaitcev 2001-03-07 19:27:48 EST
I understand the requestor's concerns, and they need to be
addressed, but, IMHO, the bug was not fixed completely.
The old script did have its merit.

Consider a scenario: I have an experimental RAID volume,
and I do have /etc/raidtab, but normally I boot a kernel
without MD support.  In that case, the "fixed" script goes
inside the outer if, then executes all the RAID start programs
(all of them eventually failing), and bombs.

I would suggest 1. reverting to the old script;
2. running some sort of modprobe before the test
for /proc/mdstat and /etc/raidtab. If that fails,
then continue.

This way we split out three different results:
 a missing module [result -> continue]
 failed RAID [result -> bail to prompt]

BTW, Matt has some related problem on devserv, so
adding him to cc:.

-- Pete
Comment 3 Bill Nottingham 2001-03-07 22:20:57 EST
Sure, will be changed in 5.71-1 to:

# Add raid devices
if [ ! -f /proc/mdstat ]; then
      modprobe md >/dev/null 2>&1
fi

if [ -f /proc/mdstat -a -f /etc/raidtab ]; then
 ...
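
For reference, a minimal sketch of the combined section (assuming the
loop body stays as in the patch above):

# Load md if RAID was built modular; errors are ignored, so a kernel
# without MD support simply fails the test below and boot continues.
if [ ! -f /proc/mdstat ]; then
	modprobe md >/dev/null 2>&1
fi

# Start RAID only when the driver is live and sets are configured.
if [ -f /proc/mdstat -a -f /etc/raidtab ]; then
	echo -n "Starting up RAID devices: "
	...
fi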
Comment 4 Pete Zaitcev 2001-03-08 13:59:33 EST
The fix that Bill posted yesterday works on all my configurations.
Personally, I would test -z "$USEMODULES" before running modprobe,
and use /sbin/modprobe, just to be on the safe side.
I am far from understanding all the issues, though, so it's up to
Bill to decide.  Thanks for the fix!
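
One reading of that suggestion, as a minimal sketch (assuming
USEMODULES is set earlier in rc.sysinit when module support is
available, and empty otherwise):

# Attempt the load only when module support was detected, and call
# modprobe by its full path, per the suggestion above.
if [ -n "$USEMODULES" -a ! -f /proc/mdstat ]; then
	/sbin/modprobe md >/dev/null 2>&1
fi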
