Bug 647592 - mdadm array not properly started after upgrade to F14
Status: CLOSED WONTFIX
Product: Fedora
Classification: Fedora
Component: mdadm
Version: 14
Hardware: Unspecified
OS: Linux
Priority: low
Severity: medium
Assigned To: Doug Ledford
QA Contact: Fedora Extras Quality Assurance
Reported: 2010-10-28 16:49 EDT by Zach Carter
Modified: 2011-07-14 19:30 EDT (History)
CC: 6 users

Doc Type: Bug Fix
Last Closed: 2011-07-14 19:30:22 EDT
Description Zach Carter 2010-10-28 16:49:40 EDT
Description of problem:  mdadm array not started after upgrade to F14


Version-Release number of selected component (if applicable): Fedora 14


How reproducible:


Steps to Reproduce:
1. On an F13 box, set up two LVM volumes, home_mirror_1 and home_mirror_2.
2. Create an mdadm array using those two volumes, put a filesystem on it, and add it as /home to /etc/fstab.
3. Reboot and verify that /home is mounted properly.
4. Upgrade the box to F14; it will fail to mount /home and drop you into single-user mode.
  
Actual results:

/home not mounted after upgrade

Expected results:

/home should automount after upgrade


Additional info:

This seems to be due to the kernel entry in grub that is created with
rd_LVM_LV= values.  These values limit the activated LVM volumes to just the root device, which means the rest of the LVM volumes are missing at the time the mdadm arrays are started.

I worked around the issue by removing the rd_LVM_LV entries from grub.conf which caused the entire volume group to be imported.

If the devices are going to be limited in this way, then all filesystems mounted at boot should be listed in grub.conf, or /etc/rc.sysinit should be updated to rescan for mdadm arrays after starting LVM.
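A minimal sketch of the workaround described above: strip the rd_LVM_LV= arguments from the kernel line in grub.conf so the initramfs activates the whole volume group instead of just the root LV. The kernel version and volume names below are illustrative assumptions, not taken from this report.

```shell
# Hypothetical F14 grub.conf kernel line (vg_root/lv_* names are made up):
kernel_line='kernel /vmlinuz-2.6.35.6-48.fc14.x86_64 ro root=/dev/mapper/vg_root-lv_root rd_LVM_LV=vg_root/lv_root rd_LVM_LV=vg_root/lv_swap rhgb quiet'

# Remove every rd_LVM_LV=<vg>/<lv> token from the line:
fixed=$(printf '%s' "$kernel_line" | sed 's/ rd_LVM_LV=[^ ]*//g')
echo "$fixed"
# kernel /vmlinuz-2.6.35.6-48.fc14.x86_64 ro root=/dev/mapper/vg_root-lv_root rhgb quiet
```

In practice you would apply the same sed expression to /boot/grub/grub.conf itself (with a backup, e.g. `sed -i.bak ...`) rather than to a shell variable.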
Comment 2 Karl Auerbach 2010-12-05 14:18:31 EST
I have a similar problem on

Linux frodo.iwl.com 2.6.35.6-48.fc14.x86_64 #1 SMP Fri Oct 22 15:36:08 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux

The underlying cause seems to be that mdadm segfaults during the boot-up sequence.

Here's the lines from dmesg:

[    6.775819] md: bind<sdb1>
[    6.776384] mdadm[863]: segfault at 0 ip 0000003316e67334 sp 00007fff18c0a3b0 error 4 in libc-2.12.90.so[3316e00000+19a000]
[    6.975664] md: array md0 already has disks!

I commented out my RAID entry in /etc/fstab and must now manually add the missing drive to the array on every reboot, which is a nuisance.

This problem did not occur under Fedora 13.
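For reference, the per-boot manual re-add described in this comment is typically a sequence like the following. The device names (/dev/md0, /dev/sdb1) are assumptions based on the dmesg output above, not confirmed by the reporter; these commands require root and real block devices.

```shell
# Check the array state and which member is missing (hypothetical devices):
mdadm --detail /dev/md0

# Re-add the dropped member to the degraded array:
mdadm --manage /dev/md0 --add /dev/sdb1

# Watch the resync progress:
cat /proc/mdstat
```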
Comment 3 Harald Hoyer 2011-01-11 07:41:52 EST
(In reply to comment #2)
> I have a similar problem on
> 
> Linux frodo.iwl.com 2.6.35.6-48.fc14.x86_64 #1 SMP Fri Oct 22 15:36:08 UTC 2010
> x86_64 x86_64 x86_64 GNU/Linux

> [    6.776384] mdadm[863]: segfault at 0 ip 0000003316e67334 sp

This is tracked in another Bugzilla ticket against the mdadm component.
Comment 4 David 2011-01-11 17:09:43 EST
Could we get the bugzilla # so we can get cc'ed, and then maybe close this one?
Comment 5 Harald Hoyer 2011-02-01 05:51:10 EST
might be a dup of bug 653207
Comment 6 Doug Ledford 2011-07-14 19:30:22 EDT
The problem in comment #2 sounds like a dup of bug 653207, while the original poster's issue is different.  The original problem is caused by the fix for bug 553295, and that fix is not likely to be reverted.  The long and short of it is: if you want a RAID mirror, put your LVM PV on top of the RAID mirror, not the other way around.  The init scripts simply do not handle RAID on LVM nearly as well as they handle LVM on RAID.
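The layering recommended in this comment (LVM on top of MD RAID, rather than RAID on top of LVM) would be set up roughly as follows. Device names, volume names, and sizes are illustrative assumptions, and these commands require root and dedicated partitions.

```shell
# Build the mirror first, directly from raw partitions (hypothetical devices):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Then layer LVM on top of the finished array:
pvcreate /dev/md0
vgcreate vg_home /dev/md0
lvcreate -n lv_home -L 100G vg_home
mkfs.ext4 /dev/vg_home/lv_home
# ...and mount /dev/vg_home/lv_home as /home via /etc/fstab.
```

With this ordering, the initramfs only needs to assemble /dev/md0 before LVM activation, so the rd_LVM_LV= restriction discussed above no longer breaks the boot.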
