Bug 159795 - rc.sysinit doesn't handle LVM2 on top of RAID correctly
Status: CLOSED DUPLICATE of bug 149812
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: initscripts
Version: 4.0
Hardware: i386
OS: Linux
Priority: medium
Severity: high
Assigned To: Bill Nottingham
QA Contact: Brock Organ
Depends On:
Blocks:
 
Reported: 2005-06-07 22:24 EDT by Ben Whaley
Modified: 2014-03-16 22:54 EDT
CC List: 1 user

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2005-06-08 11:24:31 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Ben Whaley 2005-06-07 22:24:40 EDT
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.7.8) Gecko/20050511 Firefox/1.0.4

Description of problem:
rc.sysinit supports LVM2 on top of RAID, but only when the array is configured via /etc/raidtab, not via /etc/mdadm.conf.
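
For reference, a minimal /etc/mdadm.conf describing a two-disk RAID1 array might look like the following (the device names and UUID here are placeholders, not taken from this report):

DEVICE /dev/sda1 /dev/sdb1
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=<array-uuid>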

Version-Release number of selected component (if applicable):
initscripts-7.93.11.EL-1

How reproducible:
Always

Steps to Reproduce:
1. Build a RAID array with mdadm. Make a configuration file (usually /etc/mdadm.conf).
2. Use LVM2 tools to create a volume group and logical volumes, using the RAID device created in step 1.
3. Add the LVs to fstab and reboot. The partitions will not fsck, and the system may hang. (A command sketch follows below.)
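
For illustration, these steps might translate into a command sequence like the following sketch (the device names, the 10G size, and the volume group name vg0 are hypothetical):

# Step 1: build a RAID1 array with mdadm and record it in /etc/mdadm.conf
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
echo 'DEVICE partitions' > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf

# Step 2: layer LVM2 on top of the array
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 10G -n data vg0
mkfs.ext3 /dev/vg0/data

# Step 3: add the LV to /etc/fstab so it is checked and mounted at boot
echo "/dev/vg0/data /data ext3 defaults 1 2" >> /etc/fstab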
  

Actual Results:  I saw varying behavior. Sometimes the system would hang at the disk check and drop me to a shell with the following message:

*** An error occurred during the file system check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.

Other times the disk check would fail but the system would continue to boot.

Expected Results:  The partitions should have been successfully mounted.

Additional info:

The problem is that LVM2 is initialized before mdadm has a chance to activate any existing RAID partitions. The following (from rc.sysinit) will successfully start the RAID subsystem:

update_boot_stage RCraid
if [ -f /etc/mdadm.conf ]; then
    /sbin/mdadm -A -s
fi

but at that point LVM2 had already been initialized, and since the RAID device had not yet been started, LVM could not activate those volume groups. To fix the problem, I simply moved the code quoted above to just before the "# LVM2 initialization" comment.
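
For clarity, here is a sketch of how the relevant section of rc.sysinit reads after the move (only the mdadm block is quoted verbatim; the LVM2 section is abbreviated):

update_boot_stage RCraid
if [ -f /etc/mdadm.conf ]; then
    /sbin/mdadm -A -s    # assemble (-A) every array listed in mdadm.conf (-s)
fi

# LVM2 initialization
# ... the existing vgscan/vgchange calls now find /dev/md0 already assembled ...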

I'm setting the severity to High because this bug destroyed an ext3 partition I had created.
Comment 3 Bill Nottingham 2005-06-08 11:24:31 EDT

*** This bug has been marked as a duplicate of 149812 ***
