Red Hat Bugzilla – Bug 138572
Grub not installed for SATA Software RAID drives on install
Last modified: 2007-11-30 17:10:54 EST
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; rv:1.7.3)
Description of problem:
I was able to install Fedora Core 3 on two SATA drives that were setup
for software RAID1 with no error messages. On the reboot however,
grub failed to load.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. install FC3 on x86_64 with a software RAID setup
Actual Results: blinking cursor on top left of screen
Expected Results: grub boot menu, and then booting into fedora core 3
After bringing up the system in rescue mode, running grub-install fails with:
"/dev/md0 does not have any corresponding BIOS drive"
I worked around this by editing /etc/mtab, replacing /dev/md0 with
/dev/sda1, rerunning grub-install, and then returning mtab back to
normal. The system was then able to boot.
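The mtab workaround above can be sketched as a short shell helper (device names and the helper itself are illustrative, not from the report; run from the rescue environment after chroot /mnt/sysimage):

```shell
#!/bin/sh
# Sketch of the /etc/mtab workaround described above.
# Assumes /boot is on /dev/md0, a RAID1 of partitions on sda and sdb.

# Temporarily rewrite the md0 entry in an mtab-style file so that
# grub-install sees a raw partition with a corresponding BIOS drive.
point_mtab_at_partition() {
    mtab=$1; part=$2
    cp "$mtab" "$mtab.orig"
    sed -i "s|^/dev/md0 |$part |" "$mtab"
}

# Put the original mtab back once grub-install has run.
restore_mtab() {
    mv "$1.orig" "$1"
}

# In the rescue shell this would be (commented out here):
#   point_mtab_at_partition /etc/mtab /dev/sda1
#   /sbin/grub-install /dev/sda
#   restore_mtab /etc/mtab
```

The restore step matters: mtab should keep describing the real mounts (/dev/md0) once GRUB is installed.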
This works for me here (anaconda doesn't use grub-install directly and
instead does basically what you did to install on the first drive).
Were there any error messages from grub on tty5?
I never checked tty5 on the initial install, as I didn't suspect
anything was wrong until the reboot. And I can't try reinstalling as
this is my main system, so I won't be able to help chase it down.
If nobody else has reported a similar problem, I'd say chalk it up to
something idiosyncratic about my setup (like that my DVD-ROM
completely died about 20 minutes later).
Same problem here. 2 Sata disks, software raid 1, i386.
Reproducible: Yes, several times.
Andy's fix worked on this box as well. I'd do more testing but I need
to ship the machine to a customer. I'll try to replicate on another
system and pull data from the other terminals; no ETA on that, however.
Same problem on a new SuperMicro system with SATA drives on a pentium 4.
Reproducible: every time FC3 is installed.
Andy's fix did the trick to get grub installed. After that everything
worked. I can try to gather more information if you know what you want.
So, I also have an FC2 i386 setup with 2 SATA drives hooked up in a raid1
configuration. These drives were previously reported as hde and hdg.
I upgraded to the 2.6.9 kernel rpm, and at reboot I got a hung grub.
To fix, I did the following:
1. Boot from rescue cd, chroot /mnt/sysimage
2. edit /etc/mtab to replace /dev/md0 with /dev/hde1
3. /sbin/grub-install /dev/hde
4. return mtab to its previous state
On reboot, I chose the new kernel in grub. This kernel now labels my
SATA drives as sda and sdb (instead of hde and hdf). To get grub
installed correctly, I did the following:
1. edit /boot/grub/device.map and changed /dev/hde to /dev/sda
2. edit /etc/mtab to replace /dev/md0 with /dev/sda1
3. /sbin/grub-install /dev/sda
4. return mtab to its previous state
5. edit /boot/grub/grub.conf to make the 2.6.9 kernel the default
6. edit /etc/fstab since the location of the swap partitions has changed
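The rename-heavy part of those steps can be sketched as a small helper (the helper name and device names are illustrative, assuming the kernel renamed hde/hdg to sda/sdb as described):

```shell
#!/bin/sh
# Sketch of steps 1-4 above, after the 2.6.9 kernel renamed the disks.

# Replace every occurrence of one device name with another in a file,
# keeping a backup copy with an .orig suffix.
rename_device() {
    file=$1; old=$2; new=$3
    cp "$file" "$file.orig"
    sed -i "s|$old|$new|g" "$file"
}

# In the rescue shell this would be (commented out here):
#   rename_device /boot/grub/device.map /dev/hde /dev/sda
#   rename_device /etc/mtab /dev/md0 /dev/sda1
#   /sbin/grub-install /dev/sda
#   mv /etc/mtab.orig /etc/mtab    # return mtab to its previous state
# grub.conf and /etc/fstab still need to be edited by hand (steps 5-6).
```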
Set-up: supermicro/P4, raid1 on SATA
Problem: the same.
After Andy's fix the system boots, but hangs on quota.
I removed quota through linux rescue and now it hangs while
enabling swap space.
What I also see is that mdadm complains about all drives:
mdadm: only specify super-minor once, super-minor=0 ignored.
Same problem here. However, I might have some additional info. My setup:
SATA 1 (sda): WDC 160GB
SATA 2 (sdb): Seagate 120GB
When I installed with above config, I encountered the same problems as
Andy and was able to resolve them the same way.
However, I found two other solutions:
a) when installing, check the extended options for the bootloader and
enable "force LBA32". When doing this, the system starts up correctly
without any manual changes.
b) swap the hard disks around (so the Seagate becomes the first HD (sda)
and the WD becomes the second). The MBR will then be installed on the
Seagate instead of the WD, and the system also boots normally right after.
So my assumption is that either grub has a problem with SATA drives of
160 GB and larger, or that the WD has a problem.
Maybe other people can also report their SATA setup (drive model and
capacity); maybe this helps track it down.
About my hardware:
I'm using dual Seagate SATA 200GB Drives hooked up in RAID1 for both
of my setups.
My motherboards are both ASUS SK8V's, with the SATA drives hooked up
to the VT8237 ports (but not using the built in semi-hardware RAID).
I had the problem that I couldn't get the install working, not even
with the workarounds. But now I have a `workaround'. It's not
suitable if you have to do many installs.
I already had one system on md/SATA with Core 2. I upgraded
this machine to Core 3 and told it to stay away from the
bootloader (the second of three options during the upgrade). This
results in a working Fedora Core 3 system on the mentioned set-up.
I also ran up2date over it and booted the new smp kernel; it works
great. But soon I'll have to install these set-ups in series, and
then this is not an option.
I too wasted many hours trying to get FC3 installed on a new system
with Intel 865 mainboard and 2x Maxtor 200 GB SATA disks in a RAID-1
configuration. Using the Rescue CD to manually install grub as
described in Comment #5 worked for me.
The force LBA option to grub during the install did not work for me.
See also bug #114690; it includes two effective GRUB + RAID1
approaches. The second, specifically, goes beyond mere damage control.
Basically, it suggests enhancing grub.conf (menu.lst) to include TWO
similar entries when /boot is on a RAID 1 volume, telling GRUB to try
the second disc if the first is unreadable:
# For booting with disc 0 kernel
title GNU/Linux (hd0,0)
# For booting with disc 1 kernel, if first is unreadable
title GNU/Linux (hd1,0)
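A fuller version of that two-entry approach might look like the following (kernel version, initrd name, and root device are assumptions for illustration, not taken from the report; GRUB legacy's fallback directive selects the second entry when the first fails):

```
default=0
fallback=1
timeout=5

# For booting with disc 0 kernel
title GNU/Linux (hd0,0)
        root (hd0,0)
        kernel /vmlinuz-2.6.9-1.667 ro root=/dev/md1
        initrd /initrd-2.6.9-1.667.img

# For booting with disc 1 kernel, if first is unreadable
title GNU/Linux (hd1,0)
        root (hd1,0)
        kernel /vmlinuz-2.6.9-1.667 ro root=/dev/md1
        initrd /initrd-2.6.9-1.667.img
```

For this to actually help, GRUB must also be installed in the MBR of both disks, so that the BIOS can fall back to the second disk if the first one dies.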
I too have reproduced this bug.
Tyan Thunder K8S Pro S2882
Silicon Image SI3114 Controller
2 WDC 250GB SATA 7200RPM Drives 8MB Cache
Install FC3 with a Raid1 /boot partition as /dev/sda1 and /dev/sda2.
Grub does not install properly and the system won't boot upon reboot.
To fix this I have to use the system rescue mode.
Edit /etc/mtab and change /dev/md0 to /dev/sda1.
Do a grub-install /dev/sda.
Change the /etc/mtab back to /dev/md0.
Then I reboot and everything works.
This is a serious issue as most users won't have a clue as to what's wrong.
*** This bug has been marked as a duplicate of 114690 ***
Changed to 'CLOSED' state since 'RESOLVED' has been deprecated.