Bug 499141 - Grub doesn't start after Fedora was installed on isw raid1
Summary: Grub doesn't start after Fedora was installed on isw raid1
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: mdadm
Version: rawhide
Hardware: i586
OS: Linux
Priority: low
Severity: high
Target Milestone: ---
Assignee: Doug Ledford
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-05-05 11:38 UTC by Jacek Danecki
Modified: 2009-05-20 14:49 UTC
CC List: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2009-05-20 14:49:02 UTC
Type: ---
Embargoed:


Attachments
anaconda.log (47.22 KB, text/plain)
2009-05-05 12:12 UTC, Jacek Danecki
program.log (5.65 KB, text/plain)
2009-05-05 12:13 UTC, Jacek Danecki
storage.log (42.65 KB, text/plain)
2009-05-05 12:13 UTC, Jacek Danecki

Description Jacek Danecki 2009-05-05 11:38:57 UTC
Description of problem:


Version-Release number of selected component (if applicable):
11.5.0.49

How reproducible:
Install Fedora on a preexisting isw raid1 array.

Steps to Reproduce:
1. Create a raid1 volume using mdadm and reboot while the raid is still initializing (the exact commands are in comment 1 below).
2. In anaconda, create a / partition on the raid device: 20000 MB, ext3 filesystem.
3. Create a 2000 MB swap partition on the raid device.
4. Choose the raid array as the boot device.
5. Install packages and reboot.
  
Actual results:
Grub doesn't start.

Expected results:
The Grub console starts after reboot.

Additional info:

Comment 1 Jacek Danecki 2009-05-05 12:10:41 UTC
In step 1 I used the following commands:
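# create an imsm container (Intel Matrix Storage metadata) spanning both disks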
mdadm -CR /dev/md/imsm -e imsm -n 2 /dev/sda /dev/sdb
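# create a raid1 volume inside that container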
mdadm -CR /dev/md/raid1 -l 1 -n 2 /dev/sda /dev/sdb

cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear] 
md126 : active (read-only) raid1 sdb[1] sda[0]
      156288647 blocks super external:/md127/0 [2/2] [UU]
      	resync=PENDING
      
md127 : inactive sdb[1](S) sda[0](S)
      418 blocks super external:imsm
       
unused devices: <none> 

reboot

Comment 2 Jacek Danecki 2009-05-05 12:12:14 UTC
Created attachment 342445 [details]
anaconda.log

Comment 3 Jacek Danecki 2009-05-05 12:13:11 UTC
Created attachment 342446 [details]
program.log

Comment 4 Jacek Danecki 2009-05-05 12:13:39 UTC
Created attachment 342447 [details]
storage.log

Comment 5 Joel Andres Granados 2009-05-05 14:25:10 UTC
Note that I get a strange status report from my BIOS when it sees the device that has been created: it states that the volume is uninitialized.

Comment 6 Jacek Danecki 2009-05-20 13:34:40 UTC
The problem was in the mdadm tool. The fix was sent by Dan to the linux-raid list; see http://marc.info/?l=linux-raid&m=124269520027673&w=2

commit 81062a36abd28d2354036da398c2e090fa759198
Author: Dan Williams <dan.j.williams>
Date:   Mon May 18 09:58:55 2009 -0700

    imsm: fix num_domains
    
    The 'num_domains' field simply identifies the number of mirrors.  So it
    is 2 for a 2-disk raid1 or a 4-disk raid10.  The orom does not currently
    support more than 2 mirrors, but a three disk raid1 for example would
    increase num_domains to 3.
    
    Signed-off-by: Dan Williams <dan.j.williams>
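
A minimal sketch of the rule the commit message describes, assuming hypothetical names (this is not the mdadm source; it only restates the commit's examples): num_domains counts mirror copies of the data, not member disks.

    #include <stdio.h>

    /* Hypothetical sketch, not the real mdadm function: num_domains is the
     * number of mirror copies of the data, not the number of member disks. */
    static int num_domains(int raid_level, int raid_disks)
    {
        switch (raid_level) {
        case 1:                     /* raid1: every disk holds a full copy  */
            return raid_disks;      /* 2-disk raid1 -> 2, 3-disk raid1 -> 3 */
        case 10:                    /* raid10 (near-2 layout): 2 copies     */
            return 2;               /* so a 4-disk raid10 -> 2              */
        default:                    /* raid0/raid5 etc.: no mirroring       */
            return 1;
        }
    }

    int main(void)
    {
        printf("2-disk raid1:  %d\n", num_domains(1, 2));   /* 2 */
        printf("4-disk raid10: %d\n", num_domains(10, 4));  /* 2 */
        printf("3-disk raid1:  %d\n", num_domains(1, 3));   /* 3 */
        return 0;
    }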

Comment 7 Doug Ledford 2009-05-20 14:49:02 UTC
This is exactly an example of the sort of thing that I can't work on without hardware to reproduce.  As I've asked for hardware multiple times, and to date I've not been provided anything, there is nothing I can do about this.  Closing bug out as CANTFIX.

