Description of problem:
When a RAID device is used as PV for an LVM VG, the RAID device will not be
started in rc.sysinit, because it is not listed in /etc/fstab, and thus the LVM
VG is never activated.
Version-Release number of selected component (if applicable):
Is it in /etc/raidtab?
I have this exact same problem with a manually created raid device and volume
group. It does work fine on an anaconda/diskdruid-created raid device and volume group.
The working machine has /dev/md0 as / and a PV group on /dev/md1.
Both md0 and md1 are in /etc/raidtab, and the whole setup was done at install time.
The non-working machine has /dev/md0 as a manually created raid, entered into
/etc/raidtab. On top of this are a PV group and LV group. The fstab does not
contain any reference to /dev/md0, so rc.sysinit skips initialization of md0
on this machine and the LVM volumes mentioned in fstab are inaccessible.
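For clarity, the manual setup described above corresponds to an /etc/raidtab of roughly this shape (illustrative device names and values, not the reporter's actual file):

```
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          0
    persistent-superblock   1
    chunk-size              64
    device                  /dev/sda2
    raid-disk               0
    device                  /dev/sdb2
    raid-disk               1
```

With this in /etc/raidtab but no /dev/md0 line in /etc/fstab, the grep in rc.sysinit finds nothing and the array is never started.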
BTW: This might very well be caused by the missing documentation requested in
bug #106845 ;-)
Yes, it is in /etc/raidtab (but not in /etc/fstab).
(Sorry for not noticing the question before...)
The offending code is in /etc/rc.d/rc.sysinit at lines 460-464 inclusive:
INFSTAB=`LC_ALL=C grep -c "^$i" /etc/fstab`
if [ $INFSTAB -eq 0 ] ; then
These assume that any raid device on the system is always going to
have an entry in /etc/fstab. This is an incorrect assumption in at
least two scenarios:
1) as mentioned above, when a raid device is used as an LVM Physical
Volume with an LVM Volume Group layered over the top (in this case the
LVM Logical Volumes are what will have entries in the /etc/fstab, not
the raid devices themselves).
2) when the raid devices are being used as raw devices by a database.
It seems to me that it would be best to simply remove these lines from
/etc/rc.d/rc.sysinit as I can't see any value that they add, and they
certainly do cause real problems.
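The failing test can be reduced to a tiny standalone sketch (a reconstruction, not the verbatim rc.sysinit code; the demo fstab contents below are made up):

```shell
#!/bin/sh
# Hypothetical reduction of the rc.sysinit check: a raid device is only
# started if it has a line of its own in /etc/fstab.
would_start_raid() {
    dev=$1; fstab=$2
    INFSTAB=`LC_ALL=C grep -c "^$dev" "$fstab"`
    [ "$INFSTAB" -gt 0 ]
}

# Demo fstab: only the LVM logical volume is listed, not the md PV.
cat > /tmp/fstab.rc-demo <<'EOF'
/dev/Volume00/LogVol00 / ext3 defaults 1 1
EOF

if would_start_raid /dev/md0 /tmp/fstab.rc-demo; then
    echo "/dev/md0 started"
else
    echo "/dev/md0 skipped"    # this branch fires: the VG can never activate
fi
```

With an fstab that only names logical volumes, the check skips the PV, which is exactly the failure mode described above.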
Duplicated in RHEL 3 with the following /etc/raidtab.
[root@corpftp2 rc.d]# cat /etc/raidtab
# Multipath configuration
Further to the question of what the test in rc.sysinit should be, I offer these results:
[root@corpftp2 rc.d]# grep -c "^/dev/md0" /etc/fstab
[root@corpftp2 rc.d]# grep -c "/dev/md0" /etc/lvmconf/*.conf
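One possible direction for a less fragile test, sketched under the assumption that the LVM1 config backups in /etc/lvmconf/*.conf contain the PV device paths as plain strings (the file paths in the demo are stand-ins, not the real system files):

```shell
#!/bin/sh
# Sketch: consider an md device "in use" if it appears in the fstab OR in
# any of the given LVM config files. Paths are parameters for the demo.
raid_in_use() {
    dev=$1; fstab=$2; shift 2
    LC_ALL=C grep -qs "^$dev" "$fstab" && return 0
    for conf in "$@"; do
        grep -qs "$dev" "$conf" && return 0
    done
    return 1
}

# Demo: md0 is absent from the fstab but named in a mock LVM config.
printf '/dev/Volume00/LogVol00 / ext3 defaults 1 1\n' > /tmp/fstab.lvm-demo
printf 'PV: /dev/md0\n' > /tmp/vg.conf.demo
raid_in_use /dev/md0 /tmp/fstab.lvm-demo /tmp/vg.conf.demo \
    && echo "/dev/md0 in use" || echo "/dev/md0 unused"
```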
As I mentioned to some of you on taroon-list, the software RAID device
used by LVM as a physical volume can be specified in /etc/fstab with a
/dev/md0 none ignore defaults 0 0
so that rc.sysinit sees it and starts the RAID array at boot. This
also should work for constructing nested software RAID arrays.
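The effect of the "ignore" stanza on the rc.sysinit grep can be shown in isolation (the fstab below is a made-up example):

```shell
#!/bin/sh
# Demo: with the ignore stanza present, the rc.sysinit test matches, so the
# array would be started; mount(8) itself skips entries of type "ignore".
cat > /tmp/fstab.ignore-demo <<'EOF'
/dev/md0 none ignore defaults 0 0
/dev/Volume00/LogVol00 / ext3 defaults 1 1
EOF
INFSTAB=`LC_ALL=C grep -c "^/dev/md0" /tmp/fstab.ignore-demo`
echo "INFSTAB=$INFSTAB"    # 1, so the "skip raidstart" branch is not taken
```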
As RH9 is end-of-life now, changing the component to RHEL3, because the
problem is still present there.
Problem is in Fedora Core 2 as well. Please remove the offending
lines from rc.sysinit unless some value for them can be shown.
The problem does not only occur with LVM, but also with raid devices used as raw
database devices (such as with DB2). I recommend removing the code rather than
searching /etc/lvmtab or /etc/lvmconf for raid entries.
A "/dev/md0 none ignore defaults 0 0" entry
does nothing to correct a badly written script.
Does anyone know of a reason why you might not want RAID devices started at boot? This
has nothing to do with whether you want to mount a filesystem.
The problem does not appear if at least one md device is mentioned in /etc/fstab.
With /dev/md10 mounted as / and /dev/md11 as an LVM PV, this bug is not triggered.
Can't really see the need for this extra check though, and second the
notion that this be removed from rc.sysinit.
The reason it was added is for allowing systems to boot when there are
volumes in the raidtab but not actually present. (See bug 78467).
Hi Bill :)
Thanks for the explanation, at least we know why it's there now.
Certainly it seems as though the solution suggested by A.J. Aranyos
did not take into account the very large number of situations where
raid devices wouldn't have entries in the fstab :( Obviously the
~correct~ solution for this problem is inside the raidtools
themselves; failing that, I would suggest that being dumped
to a root prompt to hash out some entries in /etc/raidtab is a little
better than being dumped to a root prompt to figure out why rc.sysinit
isn't starting your arrays! :-)
This problem is resolved in the next release of Red Hat Enterprise Linux. Red
Hat does not currently plan to provide a resolution for this in a Red Hat
Enterprise Linux update for currently deployed systems.
With the goal of minimizing risk of change for deployed systems, and in response
to customer and partner requirements, Red Hat takes a conservative approach when
evaluating changes for inclusion in maintenance updates for currently deployed
products. The primary objectives of update releases are to enable new hardware
platform support and to resolve critical defects.