Bug 128738 - rc.sysinit skips enabling raid when devices used for LVM
Status: CLOSED CURRENTRELEASE
Product: Fedora
Classification: Fedora
Component: initscripts
Version: 2
Hardware: All
OS: Linux
Priority: medium
Severity: high
Assigned To: Bill Nottingham
QA Contact: Brock Organ
Reported: 2004-07-28 16:20 EDT by Andrew Meredith
Modified: 2014-03-16 22:46 EDT

Doc Type: Bug Fix
Last Closed: 2005-04-26 12:07:32 EDT


Attachments: None
Description Andrew Meredith 2004-07-28 16:20:14 EDT
Description of problem:

RAID meta-devices that are used as LVM physical volumes, rather than
being mounted directly as filesystems, are not started during the boot
sequence; rc.sysinit lists them with the annotation "(skipped)". As a
result, the LVM volume group that contains them cannot be activated.

Version-Release number of selected component (if applicable):

initscripts-7.55.1-1

How reproducible:

Every time

Steps to Reproduce:
1. Create a RAID 5 array
2. pvcreate it as an LVM physical volume
3. vgextend it into a volume group
4. Use some of this new space for an LV
5. Reboot
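
For concreteness, a command-level sketch of these steps (illustrative
only; the device names /dev/md8 and /dev/hd?5, the volume group vg0 and
the LV name "data" are made up, and the array could equally be created
with the raidtools of that era):

# Create a three-disk RAID 5 array; note that /dev/md8 itself never
# appears in /etc/fstab
mdadm --create /dev/md8 --level=5 --raid-devices=3 /dev/hda5 /dev/hdb5 /dev/hdc5

# Use the array as an LVM physical volume instead of putting a
# filesystem directly on it
pvcreate /dev/md8
vgextend vg0 /dev/md8          # or vgcreate vg0 /dev/md8 for a new group
lvcreate -L 10G -n data vg0
mkfs.ext3 /dev/vg0/data
echo "/dev/vg0/data /data ext3 defaults 1 2" >> /etc/fstab

reboot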
  
Actual results:

Instead of listing md8 among the meta-devices it enables, the init
sequence lists it as md8(skipped). LVM then reports that it cannot
activate the VG that contains this MD, and the boot sequence fails
because the LVs that are now missing cannot be fsck'd.

Expected results:

rc.sysinit should either start the MD without caring how it is used, or
check for the presence of the MD in a pvscan listing (or something
similar) before deciding to skip it.
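
As an illustration of the first option, a minimal sketch (hypothetical,
not the actual rc.sysinit code) that starts every array declared in
/etc/raidtab whether or not the md device itself appears in /etc/fstab:

# Start every array declared in /etc/raidtab, regardless of whether
# the md device is mounted directly from /etc/fstab
if [ -f /etc/raidtab ] ; then
  for md in `awk '$1 == "raiddev" { print $2 }' /etc/raidtab` ; do
    raidstart "$md" && RAIDDEV="$RAIDDEV $md"
  done
fi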

Additional info:

I haven't got to the bottom of this yet, but it only seems to happen
for RAID 5 arrays. I have been running this MD-under-LVM structure for
a while now and it hasn't caused a problem before. I only just got
enough disks to put a third one in this machine and make it RAID 5, and
now it fails.
Comment 1 Harry Hoffman 2004-09-07 12:52:05 EDT
I am seeing the same issue. The problem resides in /etc/rc.sysinit. On
line 492 the following code checks to see if the raid device belongs
to a filesystem:
# Count how many /etc/fstab entries start with the md device name
INFSTAB=`LC_ALL=C grep -c "^$i" /etc/fstab`
if [ $INFSTAB -eq 0 ] ; then
  # Not mounted directly from /etc/fstab, so the array is marked as skipped
  RESULT=0
  RAIDDEV="$RAIDDEV(skipped)"
fi

In both cases (mine and the original bug reporter's) the software RAID
device is part of an LVM volume group (I'm using RAID 1+0).
Comment 2 Harry Hoffman 2004-09-07 14:28:27 EDT
Sorry, I should have added this before: doing a pvscan will not help,
because the RAID array is not active, so no UUIDs belonging to a volume
group will be found on it.
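
To illustrate the chicken-and-egg problem (the commands are indicative
only; raidstop/raidstart could equally be mdadm --stop / mdadm --assemble):

# While the array is stopped, LVM cannot read the PV label on it,
# so /dev/md8 does not turn up in the scan
raidstop /dev/md8
pvscan

# Only once the array is running does the PV (and its VG UUID)
# become visible again
raidstart /dev/md8
pvscan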
Comment 3 Matthew Miller 2005-04-26 12:01:48 EDT
Fedora Core 2 is now maintained by the Fedora Legacy project for
security updates only. If this problem is a security issue, please
reopen and reassign to the Fedora Legacy product. If it is not a
security issue and hasn't been resolved in the current FC3 updates or
in the FC4 test release, reopen and change the version to match.
Comment 4 Bill Nottingham 2005-04-26 12:07:32 EDT
This is solved with the use of mdadm in FC3 and later.
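
For reference, the mdadm-based approach amounts to assembling every
array described in /etc/mdadm.conf before any per-filesystem checks,
roughly as follows (a sketch; the exact invocation in the FC3
rc.sysinit may differ):

# Assemble all arrays listed in /etc/mdadm.conf, independent of /etc/fstab
if [ -f /etc/mdadm.conf ] ; then
  /sbin/mdadm --assemble --scan
fi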
