Bug 103407

Summary: rc.sysinit doesn't handle LVM on top of RAID
Product: Red Hat Enterprise Linux 3
Component: initscripts
Version: 3.0
Reporter: Jos Vos <jos>
Assignee: Bill Nottingham <notting>
Status: CLOSED NEXTRELEASE
Severity: medium
Priority: medium
CC: bugzilla-fedora.20.esrever_otua, david, ewilts, k.georgiou, milan.kerslager, redhat, rvokal, tao
Hardware: All
OS: Linux
Doc Type: Bug Fix
Last Closed: 2005-09-20 19:46:31 UTC

Description Jos Vos 2003-08-29 21:02:10 UTC
Description of problem:
When a RAID device is used as a PV for an LVM VG, the RAID device will not be
started by rc.sysinit, because it is not listed in /etc/fstab, and thus the LVM
VG is never activated.
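
For example, a minimal sketch of such a setup (device and VG names are only
illustrative; raidtools and LVM1 commands as shipped with this release are
assumed):

------------8<------------
# /etc/raidtab describes /dev/md0; the array is created with raidtools.
mkraid /dev/md0

# The array becomes an LVM physical volume backing a volume group.
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 10G -n data vg0
mke2fs -j /dev/vg0/data

# Only the logical volume appears in /etc/fstab -- /dev/md0 itself does not:
# /dev/vg0/data  /data  ext3  defaults  1 2
------------8<------------

Because no line in /etc/fstab starts with /dev/md0, rc.sysinit never starts
the array, and the volume group on top of it cannot be activated.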

Version-Release number of selected component (if applicable):
7.14-1

Comment 1 Bill Nottingham 2003-08-29 21:14:27 UTC
Is it in /etc/raidtab?

Comment 2 Andreas Øye 2003-10-30 14:39:14 UTC
I have this exact same problem with a manually created RAID device and volume
group. It works fine with an anaconda/diskdruid-created RAID device and volume group.

The working machine has /dev/md0 as / and a volume group on /dev/md1.
Both md0 and md1 are in /etc/raidtab, and the whole setup was done at install
time with diskdruid/anaconda.

The non-working machine has /dev/md0 as a manually created RAID array, entered
into /etc/raidtab. On top of this sit an LVM PV and a volume group. The fstab
does not contain any reference to /dev/md0, so rc.sysinit skips initialization
of md0 on this machine and the LVM volumes mentioned in fstab are inaccessible.

Comment 3 Andreas Øye 2003-10-30 14:42:59 UTC
BTW: This might very well be caused by the missing documentation requested in
bug #106845 ;-)

Comment 4 Jos Vos 2003-11-04 18:52:00 UTC
Yes, it is in /etc/raidtab (but not in /etc/fstab).

(Sorry for not noticing the question before...)

Comment 5 Darryl Dixon 2004-01-28 20:17:07 UTC
The offending code is in /etc/rc.d/rc.sysinit at lines 460-464 inclusive:

------------8<------------
INFSTAB=`LC_ALL=C grep -c "^$i" /etc/fstab`
if [ $INFSTAB -eq 0 ] ; then
    RESULT=0
    RAIDDEV="$RAIDDEV(skipped)"
fi
------------8<------------

These lines assume that any RAID device on the system is always going to
have an entry in /etc/fstab.  That is an incorrect assumption in at
least two scenarios:
1) as mentioned above, when a RAID device is used as an LVM Physical
Volume with an LVM Volume Group layered over the top (in this case the
LVM Logical Volumes are what will have entries in /etc/fstab, not
the RAID devices themselves);
2) when the RAID devices are being used as raw devices by a database.

It seems to me that it would be best to simply remove these lines from
/etc/rc.d/rc.sysinit as I can't see any value that they add, and they
certainly do cause real problems.

Cheers,
D


Comment 6 Ed Wilts 2004-05-12 19:49:42 UTC
Duplicated in RHEL 3 with the following /etc/raidtab.  

[root@corpftp2 rc.d]# cat /etc/raidtab
# Multipath configuration
raiddev         /dev/md0
raid-level      multipath
nr-raid-disks   2

device          /dev/sda1
raid-disk       0

device          /dev/sdb1
raid-disk       1

Further to what the test in rc.sysinit should be, I offer these results:

[root@corpftp2 rc.d]# grep -c "^/dev/mdo" /etc/fstab
0
[root@corpftp2 rc.d]# grep -c "/dev/md0" /etc/lvmconf/*.conf
1
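
Building on that, a minimal sketch of how the test in rc.sysinit could be
widened (this is not the shipped fix; the /etc/lvmconf path and the use of
plain grep here are assumptions based on the output above):

------------8<------------
# Count references to this raid device in fstab, as the script already does.
INFSTAB=`LC_ALL=C grep -c "^$i" /etc/fstab`
# Also count references in the LVM config backups, so a device that only
# serves as an LVM PV (and never appears in fstab) is not skipped.
INLVM=`cat /etc/lvmconf/*.conf 2>/dev/null | LC_ALL=C grep -c "$i"`
if [ "$INFSTAB" -eq 0 -a "${INLVM:-0}" -eq 0 ] ; then
    RESULT=0
    RAIDDEV="$RAIDDEV(skipped)"
fi
------------8<------------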


Comment 7 Steve Bonneville 2004-05-12 21:32:34 UTC
As I mentioned to some of you on taroon-list, the software RAID device
used by LVM as a physical volume can be specified in /etc/fstab with a
line like

/dev/md0  none  ignore  defaults  0 0

so that rc.sysinit sees it and starts the RAID array at boot.  This
also should work for constructing nested software RAID arrays.
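
For illustration (using the md0 device from the earlier comments), with that
entry in place the existing test quoted in comment 5 should then find the
device and start the array:

------------8<------------
grep -c "^/dev/md0" /etc/fstab
1
------------8<------------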

Comment 8 Milan Kerslager 2004-05-13 09:26:34 UTC
As RH9 is End-Of-Life now, changing component to RHEL3 because the
problem is still here.

Comment 9 Orion Poplawski 2004-05-18 16:48:56 UTC
Problem is in Fedora Core 2 as well.  Please remove the offending
lines from rc.sysinit unless some value for them can be shown.

Comment 10 David Huffman 2004-06-10 04:56:24 UTC
The problem does not occur only with LVM, but also with RAID devices used as
raw database devices (such as with DB2). I recommend removing the code rather
than searching /etc/lvmtab or /etc/lvmconf for RAID entries.

Adding the

/dev/md0  none  ignore  defaults  0 0

entry does nothing to correct a badly written script.

Does anyone know of a reason why you might not want RAID devices started at boot? This 
has nothing to do with whether you want to mount a filesystem.

Comment 11 Asgeir Nilsen 2004-06-15 14:01:19 UTC
The problem does not appear if at least one md device is mentioned in
fstab.

With /dev/md10 mounted as / and /dev/md11 as an LVM PV, this bug is
not triggered.

I can't really see the need for this extra check, though, and I second the
suggestion that it be removed from rc.sysinit.

Comment 12 Bill Nottingham 2004-06-28 20:11:59 UTC
The reason it was added is to allow systems to boot when there are
volumes in the raidtab that are not actually present. (See bug 78467.)

Comment 13 Darryl Dixon 2004-06-28 21:18:57 UTC
Hi Bill :)

  Thanks for the explanation; at least we know why it's there now.
It certainly seems as though the solution suggested by A.J. Aranyos
did not take into account the very large number of situations where
RAID devices wouldn't have entries in the fstab :(  Obviously the
~correct~ solution for this problem is inside the raidtools
themselves; failing that, however, I would suggest that being dumped
to a root prompt to hash out some entries in /etc/raidtab is a little
better than being dumped to a root prompt to figure out why rc.sysinit
isn't starting your arrays!  :-)

Cheers,
Darryl Dixon

Comment 17 Bill Nottingham 2005-09-20 19:46:31 UTC
This problem is resolved in the next release of Red Hat Enterprise Linux. Red
Hat does not currently plan to provide a resolution for this in a Red Hat
Enterprise Linux update for currently deployed systems.

With the goal of minimizing risk of change for deployed systems, and in response
to customer and partner requirements, Red Hat takes a conservative approach when
evaluating changes for inclusion in maintenance updates for currently deployed
products. The primary objectives of update releases are to enable new hardware
platform support and to resolve critical defects.