Description of problem:
Booting fails when root is on a RAID array. The error is:

  mdadm: failed to RUN_ARRAY /dev/md1: Invalid argument

followed shortly by a kernel panic (obviously). It works fine with 2.6.21-1.3163.

Version-Release number of selected component (if applicable):
2.6.21-1.3175.fc7

How reproducible:
always
Created attachment 155309 [details]
mdadm.conf as written by anaconda

This looks like a bug I just experienced on upgrading to 3175. I think it is due to the recent mkinitrd update, which now uses the /etc/mdadm.conf file written by anaconda to perform the RAID start. On my machine I found the file generated by anaconda (see attachment) had no entry for /dev/md1. md1 is the LVM PV containing /, etc., so it caused a mount failure just like the one you are seeing.

What does your /etc/mdadm.conf look like?

I managed to fix my setup by doing:

  # mdadm --query --detail --brief --scan > /etc/mdadm.conf

then rebuilding the initrd by removing and re-installing the -3175 kernel, and all was fine.
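In case anyone wants to skip the package juggling, rebuilding the initrd directly for the affected kernel should have the same effect (the version string here is my kernel's, adjust as needed; -f just forces the existing image to be overwritten):

  # mkinitrd -f /boot/initrd-2.6.21-1.3175.fc7.img 2.6.21-1.3175.fc7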
I should mention that anaconda had problems with the RAID setup during the FC6 -> F7T4 upgrade, so that may have contributed to the bad mdadm.conf file. See bug 238926 for further details.
thanks! the mdadm.conf file certainly was the problem: the md1 line had the 'level' and 'uuid' copied from the md0 line, when they should both have been different. using mdadm --query as suggested fixed the problem.

i'll mark this as a duplicate but add a note to the open bug, because this mdadm.conf was created by anaconda on a fresh install of F7T4, not an upgrade.

*** This bug has been marked as a duplicate of 151653 ***
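(for reference, the broken file effectively had the md1 entry repeating md0's level and UUID -- reconstructed roughly here, with the UUID shortened:

  ARRAY /dev/md0 level=raid1 num-devices=2 UUID=61bc5504:...
  ARRAY /dev/md1 level=raid1 num-devices=2 UUID=61bc5504:...

when md1 is actually a raid0 with its own UUID. the corrected file is pasted in a later comment.)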
Okay, I think we've determined that this bug is not actually a dup of 151653. This bug concerns anaconda possibly creating incorrect mdadm.conf files. What was the disk layout / RAID setup you were trying to create? Does a fresh install of rawhide or F7rc2 do the right thing?
hi. sorry, been away for a while. thanks for looking into this.

here's the contents of /etc/mdadm.conf (corrected), /proc/mdstat, and the output of mount:

[~]$ cat /etc/mdadm.conf
ARRAY /dev/md1 level=raid0 num-devices=2 UUID=1616e478:4a8b6b93:9d351b4e:3624de5b
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=61bc5504:64623059:dadcf744:9253fda9

[~]$ cat /proc/mdstat
Personalities : [raid0] [raid6] [raid5] [raid4] [raid1]
md0 : active raid1 sda1[0] sdb1[1]
      104320 blocks [2/2] [UU]

md1 : active raid0 sda2[0] sdb2[1]
      77946880 blocks 256k chunks

unused devices: <none>

[cje@bed ~]$ mount
/dev/mapper/VolGroup00-LogVol01 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/md0 on /boot type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

hope that makes the config clear. i've made a volume group on md1 but don't know how to show that (i've guessed at some commands below).

i'm downloading RC2 now and will try it as soon as i can, but bittorrent says it'll be a while yet. hopefully i'll give it a go tomorrow evening (around 1900 UTC 31 May) .. if it speeds up a bit.
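a guess at the commands that would show the volume group (i haven't pasted their output here, so treat this as a sketch):

  pvs    # should list /dev/md1 as the PV behind VolGroup00
  vgs    # the volume group itself
  lvs    # the logical volumes (LogVol01 is the one mounted as /)

say if you want the actual output of any of these pasted.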
Created attachment 155843 [details]
partition layout table as it appears in anaconda

i'm trying out RC2 now while i download F7 (it did speed up .. but then F7 was released a few hours later!).

on another system with a similar layout i tried upgrading from FC4 to F7RC2 today and it booted just fine (lots of problems after that, just not _this_ problem).
F7RC2 seems to be okay. i've done a clean install with the above config and it's booted fine. it's produced a rather different mdadm.conf:

# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 super-minor=0
ARRAY /dev/md1 super-minor=1

i'll do a final test with F7 tomorrow. in today's test i used a VNC install and kept the old partition layout (just marked /, /boot and swap to be formatted) - tomorrow i'll do a regular install and change the partition sizes a bit to make sure it's definitely doing a clean setup.
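if anyone wants to sanity-check that those super-minor values match the arrays, i think mdadm --examine shows the preferred minor recorded in each member's superblock (field name from memory, so treat this as a sketch):

  mdadm --examine /dev/sda1 | grep -i 'preferred minor'    # md0 members should say 0
  mdadm --examine /dev/sda2 | grep -i 'preferred minor'    # md1 members should say 1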
did the test with F7 (non-VNC install, modified layout) and all worked well.