Description of problem:
mdadm array not started after upgrade to F14

Version-Release number of selected component (if applicable):
Fedora 14

How reproducible:

Steps to Reproduce:
1. On an F13 box, set up two LVM volumes, home_mirror_1 and home_mirror_2.
2. Create an mdadm array using those two volumes, put a filesystem on it, and add it as /home to /etc/fstab.
3. Reboot and see that /home is mounted properly.
4. Upgrade the box to F14; it will fail to mount /home and drop you into single-user mode.

Actual results:
/home is not mounted after the upgrade.

Expected results:
/home should mount automatically after the upgrade.

Additional info:
This seems to be due to the entry in grub that is created with rd_LVM_LV= values. These values limit LVM activation to just the root device, which means the rest of the LVM volumes are missing at the time the mdadm arrays are started. I worked around the issue by removing the rd_LVM_LV entries from grub.conf, which caused the entire volume group to be activated. If the devices are going to be limited in this way, then all filesystems mounted at boot should be listed in grub.conf, or /etc/rc.sysinit should be updated to rescan for mdadm arrays after starting LVM.
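For the record, the setup and the workaround above can be sketched roughly as follows. This is a hedged reconstruction, not the reporter's exact commands: the volume group name vg0, the LV sizes, and the kernel paths are placeholders.

```shell
# --- RAID-on-LVM setup from the reproduction steps (names assumed) ---
lvcreate -L 100G -n home_mirror_1 vg0
lvcreate -L 100G -n home_mirror_2 vg0

# RAID1 mirror built on top of the two logical volumes.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/vg0/home_mirror_1 /dev/vg0/home_mirror_2
mkfs.ext4 /dev/md0
echo '/dev/md0  /home  ext4  defaults  1 2' >> /etc/fstab

# --- The grub.conf workaround (illustrative kernel lines) ---
# Before: dracut activates only the LVs named on the command line,
# so the mirror members are absent when mdadm runs.
#   kernel /vmlinuz-... ro root=/dev/mapper/vg0-root rd_LVM_LV=vg0/root rd_LVM_LV=vg0/swap
# After removing the rd_LVM_LV entries, the whole VG is activated:
#   kernel /vmlinuz-... ro root=/dev/mapper/vg0-root
```

Note that running these commands requires root and real block devices; the fstab line and the kernel arguments must be adapted to the actual volume group layout.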
I have a similar problem on

Linux frodo.iwl.com 2.6.35.6-48.fc14.x86_64 #1 SMP Fri Oct 22 15:36:08 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux

The underlying cause seems to be that mdadm segfaults during the boot-up sequence. Here are the lines from dmesg:

[    6.775819] md: bind<sdb1>
[    6.776384] mdadm[863]: segfault at 0 ip 0000003316e67334 sp 00007fff18c0a3b0 error 4 in libc-2.12.90.so[3316e00000+19a000]
[    6.975664] md: array md0 already has disks!

I commented out my RAID entry in /etc/fstab and must now manually add the missing drive to the array, which is a nuisance to do on every reboot. This problem did not occur under Fedora 13.
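The manual re-add described above is presumably something like the following; the array and member device names are taken from the dmesg excerpt, not stated explicitly by the commenter.

```shell
# Re-attach the member that was left out when mdadm segfaulted
# during boot (md0 and sdb1 per the dmesg lines above).
mdadm /dev/md0 --add /dev/sdb1

# Watch the mirror resync.
cat /proc/mdstat
```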
(In reply to comment #2)
> I have a similar problem on
>
> Linux frodo.iwl.com 2.6.35.6-48.fc14.x86_64 #1 SMP Fri Oct 22 15:36:08 UTC 2010
> x86_64 x86_64 x86_64 GNU/Linux
>
> [    6.776384] mdadm[863]: segfault at 0 ip 0000003316e67334 sp

This is tracked in another bugzilla for the mdadm component.
Could we get the bugzilla # so we can get cc'ed, and then maybe close this one?
This might be a dup of bug 653207.
The problem in comment #2 sounds like a dup of bug 653207, while the original poster's issue is different. The original problem is caused by the fix for bug 553295, and that fix is unlikely to be reverted. The long and short of it is: if you want a RAID mirror, put your LVM PV on top of the RAID mirror, not the other way around. The init scripts simply do not handle RAID on LVM nearly as well as they handle LVM on RAID.
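The recommended LVM-on-RAID layout could be set up roughly as follows. This is a sketch under assumed device and volume group names (sda2, sdb2, vg_home), not a prescription for any particular machine.

```shell
# Mirror two whole partitions first...
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# ...then layer LVM on top of the mirror, so the PV becomes available
# as soon as the array is assembled early in boot.
pvcreate /dev/md0
vgcreate vg_home /dev/md0
lvcreate -L 100G -n home vg_home
mkfs.ext4 /dev/vg_home/home
```

With this ordering, mdadm assembly does not depend on LVM activation having run first, which is exactly the dependency that breaks in the RAID-on-LVM case.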