Bug 29731 - boot lvm root filesystem from raid1 device
Product: Red Hat Linux
Classification: Retired
Component: lvm
Hardware: i386
OS: Linux
Priority: medium
Severity: medium
Assigned To: Stephen Tweedie
Keywords: FutureFeature
Reported: 2001-02-27 07:37 EST by Tim Clymo
Modified: 2008-05-01 11:37 EDT
CC List: 2 users

Doc Type: Enhancement
Last Closed: 2002-09-25 16:50:02 EDT

Attachments: None
Description Tim Clymo 2001-02-27 07:37:55 EST
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90)

Any chance of a tool to create an initrd image for booting an LVM root 
filesystem contained within a raid (md) device?

LVM (as at 0.9.1beta5) has lvmcreate_initrd which creates a ramdisk image 
with all the necessary LVM drivers (and also some init stuff to activate 
the VG - essential if you're going to mount it!). However, it doesn't 
include support for any other storage driver modules.

mkinitrd will create a ramdisk with the necessary raid module support, but 
(obviously) doesn't do anything in init to activate VGs.

I've worked around this by building a kernel with raid1 support compiled in 
rather than loaded as a module, then using lvmcreate_initrd to generate the 
image, but this isn't ideal given that I'd need to build a new kernel every 
time it was updated (the stock Red Hat kernel ships with MD support as modules).

I guess there are many potential solutions to this; maybe Red Hat could 
consider choosing one of the following:
a) Change stock kernel configuration to include MD support built in
b) Modify mkinitrd to make it "LVM aware", integrating the current 
functionality provided by lvmcreate_initrd
c) Modify lvmcreate_initrd to include optional additional module support 
(probably not viable, since maintenance of the LVM distribution is not 
Red Hat's responsibility)
d) Create a new tool, combining appropriate features of mkinitrd and 
lvmcreate_initrd
e) Provide an "LVM aware" LILO! (certainly, c) above applies here too)
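Options (b) through (d) all reduce to the same core decision: look at the root device and include the right driver modules in the image. Here is a minimal sketch of that decision in shell; the function name and the device-name heuristics (2.4-era /dev/md*, /dev/hd*, and VG/LV paths like /dev/vg00/root) are illustrative assumptions, not any existing tool's behavior:

```shell
#!/bin/sh
# Sketch: map a root device path to the storage modules an initrd would need.
# Hypothetical helper; device naming follows 2.4-era conventions.
initrd_modules_for_root() {
    root="$1"
    mods=""
    case "$root" in
        /dev/md*) mods="raid1" ;;           # md root needs a raid personality
    esac
    case "$root" in
        /dev/md*|/dev/hd*|/dev/sd*) : ;;    # plain block device, nothing extra
        /dev/*/*) mods="$mods lvm-mod" ;;   # VG/LV path implies the LVM driver
    esac
    echo $mods                              # unquoted: trims any leading space
}

initrd_modules_for_root /dev/md1        # -> raid1
initrd_modules_for_root /dev/vg00/root  # -> lvm-mod
```

A real tool would also have to handle the LVM-on-RAID layering asked about here, where the VG's physical volumes are themselves md devices and both module sets are needed.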

Reproducible: Always
Steps to Reproduce:
1. See description
Comment 1 Derrick Hamner 2001-02-27 11:35:58 EST
I suspect that I am having a similar problem. During the installation process I 
create RAID 1 arrays for /, /boot, /var, and /home. The installation proceeds 
normally, but when I try to boot off of the hard drives I get:

autodetecting RAID arrays
autorun ...
... autorun DONE.
EXT2-fs: unable to read superblock
isofs_read_super: bread failed, dev=09:01, iso_blknum=16, block=32
Kernel panic: VFS: Unable to mount root fs on 09:01

Notice that no arrays were detected. If I boot off of the CD in rescue mode, it 
recognizes and mounts the arrays correctly. The same installation worked 
correctly under Fisher.
Comment 2 Panic 2001-03-03 20:25:38 EST
I've got the same problem as Derrick.  The system does not boot upon rebooting
from installation.  I tried installing twice with different partition placement,
with no luck.

Reproducible:  Always

Steps to recreate:

1) Install Wolverine: configure a RAID 1 array for /boot, one for /, and one for /var
2) Reboot and enjoy. :)

System Configuration:

Asus TX-97 BIOS Revision 08
Pentium 233MMX
1x old Tulip card (getting the 00:00:00:00 error too -- but that's been fixed)
2x 3c509B cards (hard set to 10/0x300 and 7/0x310)
S3 Trio64+

hda: WD AC21600H
hdb: WD AC31600H
hdc: FX400_02 (4x CD-ROM drive)

minimal installation, essentially a custom install with nothing selected except
up2date and a few network utils (this machine is going to be an iptables test bed).

/dev/md0  32MB ext2 /boot  (hda1 + hdb1)
/dev/md1  1000MB ext2 /    (hda5 + hdb5)
/dev/md2  200MB ext2 /var  (hda6 + hdb6)

Output from the boot sequence:

Uniform Multi-Platform E-IDE Revision: 6.31
ide: assuming 33MHz system bus speed for PIO modes; override with idebus=xx
PIIX4: IDE controller on PCI bus 00 dev 09
PIIX4: chipset revision 1
PIIX4: not 100% native mode: will probe irqs later
     ide0: BM-DMA at 0xe000-0xe007, BIOS settings: hda:DMA, hdb:DMA
     ide1: BM-DMA at 0xe008-0xe00f, BIOS settings: hdc:pio, hdd:pio
hda: WDC AC21600H, ATA DISK drive
hdb: WDC AC31600H, ATA DISK drive
hdc: FX400_02, ATAPI CD/DVD-ROM player
ide0 at 0x1f0-0x1f7, 0x3f6 on irq 14
ide1 at 0x170-0x177, 0x376 on irq 15
hda: 3173184 sectors (1625MB) w/128KiB Cache, CHS=787/64/63, DMA
hdb: Disabling (U)DMA for WDC AC31600H
hdb: 3173184 sectors (1625MB) w/128KiB Cache, CHS=787/64/63
Partition check:
hda: hda1 hda2 < hda5 hda6 hda7 >
hdb: hdb1 hdb2 < hdb5 hdb6 hdb7 >

<snip various unrelated stuff>

md driver 0.90.0 MAX_MD_DEVS=256, MD_SB_DISKS=27
md.c sizeof(mdp_super_t) = 4096
autodetecting RAID arrays
(read) hda1's sb offset: 34176 [events: 00000002]
(read) hda5's sb offset: 1024000 [events: 00000002]
(read) hda6's sb offset: 205504 [events: 00000002]
(read) hdb1's sb offset: 34176 [events: 00000002]
(read) hdb5's sb offset: 1024000 [events: 00000002]
(read) hdb6's sb offset: 205504 [events: 00000002]
autorun ...
considering hdb6 ...
adding hdb6 ...
adding hda6 ...
created md2
running: <hdb6><hda6>
hdb6's event counter: 00000002
hda6's event counter: 00000002
request_module(md-personality-3): Root fs not mounted
do_md_run() returned -22
md2 stopped.
unbind <hdb6,1>
unbind <hda6,0>

<that sequence repeats for hda5+hdb5, then hda1+hdb1>

...autorun DONE 

<snip LVM and network stuff>

VFS: Mounted root (ext2 filesystem).
Red Hat nash version 0.1 starting
Loading raid1 module
raid1 personality registered
autodetecting RAID arrays
autorun ...
... autorun DONE.
EXT2-fs: unable to read superblock
isofs_read_super: bread failed, dev=09:01, iso_blknum=16, block=32
Kernel panic: VFS: Unable to mount root fs on 09:01


Several things bother me about this boot.  One is that DMA is apparently
disabled on one of my drives for no particular reason -- looking into that one.

Autorun is run twice(?): once apparently before the raid1 module is loaded, and
once after.  That might be okay, but...

The unbind<> commands are different from the bind<> commands;  bind uses 1 and
2, unbind uses 0 and 1.  This leads me to believe that the partitions are not
being unbound properly, and thus when the raid1 module is finally loaded, there
are no more partitions to bind and thus the md devices can't start up.  It may
be worse than that, but that's all I can see (IANAKH).

This machine is available for testing if necessary; I'm going to work on it some
more tonight to see if I can get rid of the "Disabling (U)DMA" and eliminate
that as a factor.

Comment 3 Panic 2001-03-05 08:58:00 EST
The "Disabling (U)DMA" message turned out to be due to a failing hard drive, as
hdb failed shortly thereafter.  I replaced it with an ST31220A 1GB drive and
moved on.  I
also reconfigured hda to be the Seagate drive, and hdc to be the remaining WD
drive, with the CD-ROM at hdd.

As often happens, I was wrong.  After many, many installations, this is what I
know now:

The error is the same error you get if the partitions/md devices do not exist.

The first autorun sequence failing out on all the md devices is apparently
"normal", since it happens during boot of a system with / on a non-RAID
partition.  The bind<> and unbind<> statements occur in the same way under that
condition as well, so that is probably right too.

I hacked the 0.1.14 kernel into the installation, no change in symptoms.

The major difference between booting to a RAID / and a non-RAID / is the VFS
statement.  With the RAID /, it just says:

VFS mounted root (ext2 filesystem).

With the non-RAID /, it says:

VFS mounted root (ext2 filesystem) (read-only)

and boots normally, with the other RAID-1 devices being set up in the second
autorun sequence.  RAID 0 also does not work as a / partition.  I know the
raid1.o and raid0.o modules are loaded into the initrd image.  To me, the major
problem seems to be that the raid1 module is loaded after the attempt to mount
the root filesystem, which means that if that filesystem is an md device, it is
unbootable for all intents and purposes.  Also, once the raid1 module is loaded,
no md devices are detected (even other non-/ partitions) by the autorun there,
which is not the case in a non-RAID / boot.  The inability to boot a RAID device
as the root partition would seem to qualify as a Bad Thing(tm), since this
functionality has been present for a while now and people have come to rely on it.

This situation prompts a few questions from me:

1) LVM is able to start from the initrd -- how much of the LVM functionality is
there already?

2) Why does VFS say that it mounted the / filesystem when it obviously did not
do so in a useful fashion?  Why didn't it mount it read-only as is specified in
lilo.conf?  Did VFS even mount / at all?

3) Why is the raid1 module loaded after LVM initialization instead of before the
first autorun sequence?

4) During the second autorun sequence, why are no md devices initialized, as is
normal with a non-RAID / boot?  Checking /etc/fstab maybe?

5) Would a kernel with the RAID functionality compiled in work without using the
lvmcreate_initrd functionality, or is that required?
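The module-ordering problem described above is exactly what a combined mkinitrd/lvmcreate_initrd would have to solve inside the initrd's init script. A rough sketch of the required ordering, in the style of a 2.4-era linuxrc -- the tool names (nash's raidautorun, the LVM vgscan/vgchange) and device names are assumptions for illustration, not the actual Red Hat script:

```shell
#!/bin/sh
# Hypothetical initrd linuxrc: the ordering is the point, not the exact commands.
insmod /lib/raid1.o                  # 1. raid personality BEFORE any md autorun
raidautorun /dev/md1                 # 2. assemble the md device holding the PV
insmod /lib/lvm-mod.o                # 3. LVM driver
vgscan                               # 4. find volume groups on the running md
vgchange -ay                         #    ...and activate them
mount -o ro /dev/vg00/root /sysroot  # 5. only now can root be mounted read-only
```

In the failing boot above, step 1 happens only after the kernel has already tried (and failed) to assemble the arrays and mount root, which is the chicken-and-egg this report is about.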
Comment 4 Panic 2001-03-06 20:26:55 EST
The problem that derrick and I had has been fixed in the rawhide 2.4.2 kernel --
can't speak to the LVM issue.
Comment 5 Stephen Tweedie 2002-10-15 18:54:10 EDT
Current releases (7.3 and later) should support root on both LVM and software
raid (and in fact should even allow you to carve the LVM out of raid1 devices or
vice versa.)
