Bug 499141 - Grub doesn't start after Fedora was installed on isw raid1
Status: CLOSED CANTFIX
Product: Fedora
Classification: Fedora
Component: mdadm
Version: rawhide
Hardware: i586 Linux
Priority: low
Severity: high
Assigned To: Doug Ledford
QA Contact: Fedora Extras Quality Assurance
Reported: 2009-05-05 07:38 EDT by Jacek Danecki
Modified: 2009-05-20 10:49 EDT
CC List: 4 users

Doc Type: Bug Fix
Last Closed: 2009-05-20 10:49:02 EDT


Attachments
anaconda.log (47.22 KB, text/plain), 2009-05-05 08:12 EDT, Jacek Danecki
program.log (5.65 KB, text/plain), 2009-05-05 08:13 EDT, Jacek Danecki
storage.log (42.65 KB, text/plain), 2009-05-05 08:13 EDT, Jacek Danecki

Description Jacek Danecki 2009-05-05 07:38:57 EDT
Description of problem:
GRUB does not start after Fedora is installed on a pre-existing isw raid1 array (imsm metadata).

Version-Release number of selected component (if applicable):
11.5.0.49

How reproducible:
Install Fedora on a pre-existing isw raid1 array.

Steps to Reproduce:
1. Create a raid1 volume using mdadm and reboot while the raid is still initializing (the exact commands are in comment 1).
2. In anaconda, create a 20000 MB / partition with an ext3 filesystem on the raid device.
3. Create a 2000 MB swap partition on the raid device.
4. Choose the raid array as the boot device.
5. Install packages and reboot.
  
Actual results:
GRUB does not start.

Expected results:
The GRUB console starts after the reboot.

Additional info:
Comment 1 Jacek Danecki 2009-05-05 08:10:41 EDT
In step 1 I used these commands:
mdadm -CR /dev/md/imsm -e imsm -n 2 /dev/sda /dev/sdb    # create the imsm container
mdadm -CR /dev/md/raid1 -l 1 -n 2 /dev/sda /dev/sdb      # create the raid1 volume on its members

cat /proc/mdstat
Personalities : [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] [linear] 
md126 : active (read-only) raid1 sdb[1] sda[0]
      156288647 blocks super external:/md127/0 [2/2] [UU]
      	resync=PENDING
      
md127 : inactive sdb[1](S) sda[0](S)
      418 blocks super external:imsm
       
unused devices: <none> 

reboot
Comment 2 Jacek Danecki 2009-05-05 08:12:14 EDT
Created attachment 342445
anaconda.log
Comment 3 Jacek Danecki 2009-05-05 08:13:11 EDT
Created attachment 342446
program.log
Comment 4 Jacek Danecki 2009-05-05 08:13:39 EDT
Created attachment 342447
storage.log
Comment 5 Joel Andres Granados 2009-05-05 10:25:10 EDT
Note that I get a strange status report from my BIOS when it sees the device that has been created: it states that the volume is uninitialized.
Comment 6 Jacek Danecki 2009-05-20 09:34:40 EDT
The problem was in the mdadm tool. The fix was sent by Dan to the linux-raid list; see http://marc.info/?l=linux-raid&m=124269520027673&w=2

commit 81062a36abd28d2354036da398c2e090fa759198
Author: Dan Williams <dan.j.williams@intel.com>
Date:   Mon May 18 09:58:55 2009 -0700

    imsm: fix num_domains
    
    The 'num_domains' field simply identifies the number of mirrors.  So it
    is 2 for a 2-disk raid1 or a 4-disk raid10.  The orom does not currently
    support more than 2 mirrors, but a three disk raid1 for example would
    increase num_domains to 3.
    
    Signed-off-by: Dan Williams <dan.j.williams@intel.com>
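
The rule the commit message describes can be expressed in code for clarity. This is an illustrative sketch only, not the actual mdadm patch; the function name and signature are hypothetical:

    /* Illustrative sketch, not the actual mdadm patch: num_domains
     * counts the mirrors (data copies) in an imsm volume. */
    static int imsm_num_domains(int level, int raid_disks)
    {
            switch (level) {
            case 1:
                    /* every raid1 member holds a full copy: 2 for a
                     * 2-disk raid1, 3 for a 3-disk raid1 */
                    return raid_disks;
            case 10:
                    /* the orom's 4-disk raid10 is 2 mirrors of 2 disks */
                    return 2;
            default:
                    /* raid0/raid5: no mirroring */
                    return 1;
            }
    }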
Comment 7 Doug Ledford 2009-05-20 10:49:02 EDT
This is exactly the sort of thing that I can't work on without hardware to reproduce it on. I've asked for hardware multiple times and to date have not been provided any, so there is nothing I can do about this. Closing the bug as CANTFIX.
