Bug 10257 - system down, unable boot w/ root as raid1 device (/dev/md7)
Product: Red Hat Linux
Classification: Retired
Component: kernel
i386 Linux
Priority: high  Severity: high
Assigned To: Ingo Molnar
Reported: 2000-03-20 01:17 EST by Landon Curt Noll
Modified: 2008-05-01 11:37 EDT

Doc Type: Bug Fix
Last Closed: 2000-08-08 16:31:29 EDT

Attachments: none
Description Landon Curt Noll 2000-03-20 01:17:24 EST
I have a system with several raid1 file systems.  Initially I had
only /dev/md0 thru /dev/md6 with / on /dev/hda1 and /boot on /dev/hda3.
While this configuration worked, it was not acceptable because /
was not on a raid1 device.  Having / on a raid1 device is critical.
Having /boot as a non-raid plain-old /dev/hda3 IDE ext2 partition is OK,
but / definitely needed to be raid1.

BTW: I am using software raid, not a raid controller.

Booting from the RH6.1 CDROM, I was able to convert / into a raid1
array with this raidtab entry:

    raiddev /dev/md7
        raid-level            1
        nr-raid-disks         2
        nr-spare-disks        0
        chunk-size            4
        persistent-superblock 1
        device                /dev/hda1
        raid-disk             0
        device                /dev/hdc1
        raid-disk             1

While booted from the RH6.1 CDROM, I was able to raidstart and mount
all of the raid1 devices (including the new /dev/md7), then umount and
raidstop them.  All seemed well at this point.

While still booted from the RH6.1 CDROM, I mounted /dev/md7 as /root,
chrooted under /root, edited fstab so that / used /dev/md7 and
did a mount -a to mount all the /dev/md's as well as /boot (which
is on /dev/hda3).  I waited until /proc/mdstat showed that all 8
raid1 devices were active and complete [UU] just to be safe.
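The [UU] markers in /proc/mdstat mean both halves of a two-disk mirror are up; a [_U] would mean one member is missing or resyncing. A small sketch of that check (hypothetical parser, with abridged sample output hard-coded for illustration):

```python
import re

# Abridged sample of /proc/mdstat output (illustration only, not the
# reporter's actual arrays).
MDSTAT = """\
Personalities : [raid1]
md7 : active raid1 hdc1[1] hda1[0]
      1024000 blocks [2/2] [UU]
md0 : active raid1 hdc5[1] hda5[0]
      512000 blocks [2/2] [UU]
"""

def degraded_arrays(mdstat_text):
    """Return md devices whose member-status bracket is not all 'U'."""
    bad = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+) :", line)
        if m:
            current = m.group(1)
        # The status bracket contains only 'U' (up) and '_' (down/missing),
        # which distinguishes it from the [2/2] disk-count bracket.
        s = re.search(r"\[([U_]+)\]", line)
        if s and "_" in s.group(1) and current:
            bad.append(current)
    return bad

print(degraded_arrays(MDSTAT))  # → [] when every array shows [UU]
```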

Again all seems well so far.

I built a new initrd image:

	/sbin/mkinitrd /boot/initrd-raid-2.2.12-20.img -v -f \
			--with=raid1 2.2.12-20

	BTW: I tried various combinations w/wo --preload raid1 and
	     --with=raid1, all of which failed to mount / on boot.

I changed lilo.conf so that:


and did a /sbin/lilo -v.
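The lilo.conf fragment itself did not survive in the report; presumably the change pointed the boot entry at the new initrd and at /dev/md7 as root. A hypothetical sketch of such an entry (label and kernel path are assumptions, not the reporter's actual file):

```
image=/boot/vmlinuz-2.2.12-20
        label=linux-raid
        initrd=/boot/initrd-raid-2.2.12-20.img
        root=/dev/md7
        read-only
```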

I unmounted all devices, exited from the chrooted shell,
unmounted /root, did a raidstop.

Just to be safe again, I restarted raid, re-mounted things, looked
things over to ensure that all was well, unmounted, did a raidstop
and verified that the raid1 devices were stopped.

Last I ejected the RH6.1 CDROM and rebooted.  This is what happened:

	md driver 0.90.0 MAX_MD_DEVS=256, MAX_REAL=12
	3c59x.c:v0.99H 11/17/98 Donald ...
	md.c: sizeof(mdp_super_t) = 4096
	Partition check:
	 hda: hda1 hda2 hda3 hda4 < hda5 ... hda12 >
	 hdc: hdc1 hdc2 hdc3 hdc4 < hdc5 ... hdc12 >
	RAMDISK: Compressed image found at block 0
	autodetecting RAID arrays
	autorun ...
	... autorun DONE.
	VFS: Mounted root (ext2 filesystem).
	Loading raid1 module
	raid1 personality registered
	autodetecting RAID arrays
	autorun ...
	... autorun DONE.
	Bad md_map in ll_rw_block
	EXT2-fs: unable to read superblock
	Bad md_map in ll_rw_block
	romfs: unable to read superblock
	Bad md_map in ll_rw_block
	isofs_read_super: bread failed, dev=09:07, iso_blknum=16, block=32
	Kernel panic: VFS: Unable to mount root fs on 09:07
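The 09:07 in the panic is the root device as major:minor in hex: major 9 is the md block driver, minor 7 is md7, so the kernel found the device node but could not read a filesystem from it. A quick decode sketch (helper name is made up for illustration):

```python
def decode_devt(s):
    """Split a kernel 'MM:mm' hex device string into (major, minor)."""
    major_hex, minor_hex = s.split(":")
    return int(major_hex, 16), int(minor_hex, 16)

major, minor = decode_devt("09:07")
print(major, minor)  # → 9 7  (block major 9 is md, so this is /dev/md7)
```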

Comment 1 Anonymous 2000-03-23 04:16:59 EST
Just in case there was a problem with using /dev/md7, I switched root so that
root was on /dev/md0.  Unfortunately this made no difference.  The kernel
still panicked when root was on /dev/md0.  The only change in the error message
was the ``isofs_read_super'' line, which said dev=09:00 instead of 09:07.
Comment 2 Ingo Molnar 2001-04-16 05:32:10 EDT
root filesystem on a RAID-1 device is not supported by the 2.2 kernel, but it
works just fine in the 2.4 kernel.
