Bug 242107 - f7 ks install with RAID1 and RAID5 fails to load RAID1 module
Summary: f7 ks install with RAID1 and RAID5 fails to load RAID1 module
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: mkinitrd
Version: 7
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: ---
Assignee: Peter Jones
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2007-06-01 18:36 UTC by Dale Bewley
Modified: 2008-06-17 01:21 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2008-06-17 01:21:01 UTC
Type: ---
Embargoed:


Attachments
Kickstart Configuration (4.02 KB, application/octet-stream)
2007-06-01 18:36 UTC, Dale Bewley

Description Dale Bewley 2007-06-01 18:36:57 UTC
Description of problem:
Kickstart install of F7 on a Sun v40z with 5 SCSI drives, laid out as RAID5 and
RAID1 devices, completes, but on boot the RAID1 personality is not available.

Version-Release number of selected component (if applicable):
Default F7 x86_64 installation media

How reproducible:
Every time

Steps to Reproduce:
1. Install system with RAID1 holding /boot and RAID5 holding all else.
2. Observe kernel panic on boot.
  
Actual results:
md: personality for level 1 is not loaded!
mdadm: failed to RUN_ARRAY /dev/md1: Invalid argument


Expected results:
Kernel modules for RAID1 and RAID5 to be loaded in the initrd.
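
One way to confirm whether the raid personalities actually made it into a given
initrd (assuming the gzip-compressed cpio image format that mkinitrd produced in
this era; the image name below is the Xen one from this report) is to list the
image contents:

 zcat /boot/initrd-2.6.20-2925.9.fc7xen.img | cpio -it | grep raid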

Additional info:
With my attached KS file only the Xen kernel was installed, but I booted off the
rescue CD and installed the plain kernel, whose initrd also lacked both personalities.

There also seems to be some confusion over anaconda's treatment of 'raid
--device' or my understanding of it.

With the following config:
raid pv.a0e0 --level=RAID5 --fstype="physical volume (LVM)" --device=md0 --spares=1 raid.a0 raid.b0 raid.c0 raid.d0 raid.e0
raid /boot --level=RAID1 --fstype=ext3 --device=md1 --spares=3 raid.a1 raid.b1 raid.c1 raid.d1 raid.e1
I got an error from anaconda that said md1 was already defined.

I then swapped the device args (see the reconstruction below) and the install
completed, but it failed to boot as previously described in this bug.
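
Presumably the configuration anaconda accepted, reconstructed from the
directives above with only the --device values exchanged (consistent with the
md0=raid1, md1=raid5 layout reported later in comment 3), was:

 raid pv.a0e0 --level=RAID5 --fstype="physical volume (LVM)" --device=md1 --spares=1 raid.a0 raid.b0 raid.c0 raid.d0 raid.e0
 raid /boot --level=RAID1 --fstype=ext3 --device=md0 --spares=3 raid.a1 raid.b1 raid.c1 raid.d1 raid.e1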

Comment 1 Dale Bewley 2007-06-01 18:36:57 UTC
Created attachment 155917
Kickstart Configuration

Comment 2 Dale Bewley 2007-06-01 22:21:40 UTC
A workaround is to place the following in the kickstart file. Modify as
appropriate if not using the Xen kernel.

%post
mv /boot/initrd-2.6.20-2925.9.fc7xen.img /boot/initrd-2.6.20-2925.9.fc7xen.img.bak
mkinitrd --with=raid1 /boot/initrd-2.6.20-2925.9.fc7xen.img 2.6.20-2925.9.fc7xen
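
A more generic sketch of the same workaround for the plain (non-Xen) kernel
package, with some assumptions: %post runs chrooted into the installed system
(the kickstart default), so the installed kernel's version-release can be read
from the RPM database rather than hardcoded (uname -r would report the
installer's kernel). KVER is an illustrative variable, not from this report:

 %post
 # Look up the newest installed kernel's version-release.
 KVER=$(rpm -q --qf '%{VERSION}-%{RELEASE}\n' kernel | tail -n 1)
 mv /boot/initrd-${KVER}.img /boot/initrd-${KVER}.img.bak
 mkinitrd --with=raid1 /boot/initrd-${KVER}.img ${KVER}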

Comment 3 Dale Bewley 2007-11-28 01:08:26 UTC
(My md0/md1 assignments reported above have since been swapped, as described in
bug #244126, so I ended up with md0=raid1 and md1=raid5, as shown below.)

I just did an upgrade to F8 and again had an initrd that failed to load both the
raid1 and raid5 personalities.

I believe this is due to an improper mdadm.conf being generated by anaconda and
then used by mkinitrd.

Before upgrading to F8 my F7 mdadm.conf contained:
 # mdadm.conf written out by anaconda
 DEVICE partitions
 MAILADDR root
 ARRAY /dev/md0 level=raid1 num-devices=2 uuid=e647ace4:b1ebaab8:a97a4567:fdb90984
 ARRAY /dev/md1 level=raid5 num-devices=4 uuid=b5aadbd3:80af1e49:ce9ad803:1070be6c

After the upgrade it contained:
 # mdadm.conf written out by anaconda
 DEVICE partitions
 MAILADDR root
 ARRAY /dev/md0 level=raid1 num-devices=2 uuid=e647ace4:b1ebaab8:a97a4567:fdb90984

Upon restoring the mdadm.conf and rerunning mkinitrd with "--with=raid1
--with=raid456" we got a working initrd.

P.S. For reference:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid1 sda1[0] sde1[2](S) sdd1[3](S) sdc1[4](S) sdb1[1]
      513984 blocks [2/2] [UU]

md1 : active raid5 sda2[0] sdd2[4](S) sde2[3] sdc2[2] sdb2[1]
      210346752 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]



Comment 4 Bug Zapper 2008-05-14 12:42:58 UTC
This message is a reminder that Fedora 7 is nearing the end of life. Approximately 30 (thirty) days from now Fedora will stop maintaining and issuing updates for Fedora 7. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '7'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 7's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 7 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora please change the 'version' of this bug. If you are unable to change the version, please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete. If possible, it is recommended that you try the newest available Fedora distribution to see if your bug still exists.

Please read the Release Notes for the newest Fedora distribution to make sure it will meet your needs:
http://docs.fedoraproject.org/release-notes/

The process we are following is described here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 5 Bug Zapper 2008-06-17 01:21:00 UTC
Fedora 7 changed to end-of-life (EOL) status on June 13, 2008. Fedora 7 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.

