When RAID1 is used for mirroring partitions (as set up by Anaconda), grub is
unable to install itself to the MBR.
I tried grub-install with each of /dev/md0, /dev/md1, /dev/sda,
and /dev/sdb. Each reported that /dev/md? does not have any
corresponding BIOS drive, and failed to overwrite the MBR. I tried to
modify /boot/grub/device.map, with no luck.
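For context, /boot/grub/device.map maps GRUB's BIOS drive names to Linux
devices; a typical map for a two-disk box like this one (the sda/sdb names
here are illustrative) looks like:

```
(hd0) /dev/sda
(hd1) /dev/sdb
```

grub-install consults this file when translating a Linux device name into a
BIOS drive, which is exactly the translation that fails here for md devices.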
LILO is able to install itself correctly.
See old bug #55484 for more info (originally filed for RH 7.2 and 8.0).
# lsraid -p
[dev 9, 0] /dev/md0 CFCC137A.A19C3EF8.7AD2F777.8F6C7210 online
[dev 22, 1] /dev/hda1 CFCC137A.A19C3EF8.7AD2F777.8F6C7210 good
[dev 22, 65] /dev/hdb1 CFCC137A.A19C3EF8.7AD2F777.8F6C7210 good
md0 is for root FS (/).
*** Bug 55484 has been marked as a duplicate of this bug. ***
This bug gets me with each new install. You'd think I'd learn after
hitting it so many times. Adding myself to the CC: list just in case
the day comes when frozen pigs fly out of hell and RedHat fixes this.
Created attachment 103100 [details]
Patch to add support for md devices to grub-install.
Here's a patch I worked out to make grub-install work against md devices. I
don't understand grub as well as I'd like, but it appears to work.
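The patch itself is in the attachment; the essence of what any such change to
grub-install has to do, though, is map an md device to its member partitions
so that a real BIOS disk can be targeted. A minimal sketch of that lookup
(hypothetical helper name, parsing the /proc/mdstat format; demonstrated
against a sample file rather than the live /proc/mdstat):

```shell
# Resolve an md device's member partitions by parsing /proc/mdstat lines
# such as "md0 : active raid1 hdb1[1] hda1[0]".
md_members() {            # usage: md_members <mdname> [mdstat-file]
    awk -v md="$1" '$1 == md {
        for (i = 5; i <= NF; i++) {  # fields 5.. are the members, e.g. hdb1[1]
            sub(/\[.*$/, "", $i)     # strip the "[1]" role suffix
            print $i
        }
    }' "${2:-/proc/mdstat}"
}

# Demo with a sample mdstat file (real code would read /proc/mdstat):
cat > mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 hdb1[1] hda1[0]
      104320 blocks [2/2] [UU]
EOF
md_members md0 mdstat.sample
```

Given the sample above, this prints the member partitions hdb1 and hda1, one
per line; grub-install could then run its existing logic against each member's
underlying disk.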
I see the same problem with Fedora 2. "grub-install /dev/hda" and
"/dev/md0" both report "/dev/hda does not have a corresponding BIOS
device". How the heck does the installer make it work during the
initial install, then?
BTW, the error message is incorrect. *Of course* /dev/hda has a
corresponding BIOS device, (hd0). The error message should say "I am
too stupid to figure out the BIOS device for blah..." and
"grub-install" should have an option where I can specify which BIOS
disk I want to use.
I had this problem today with my Fedora 3 box (RAID1 setup) after I
installed some official updates. I rebooted and got stuck at the
GRUB string on the screen. I booted into the rescue CD; grub-install was a
no-go ("does not have a corresponding BIOS device"). I succeeded only by
manually giving commands to the grub shell.
Since this is a continuation of a previous bug, I think this is a very
long-standing bug (since RH 7.x or something) that really deserves a fix.
Maybe this will help to solve the problem (at least it worked for me
on Fedora 3). I did not mention my exact steps before, so here they are:
1) Install Fedora 3 on RAID 1 (consists of 2 IDE HDDs). Each drive has
2 partitions (data, swap).
2) The first reboot gets stuck (i.e., no boot device).
3) Boot from first CD of Fedora 3 as 'linux rescue'.
4) From the shell, start 'grub'.
5) From grub execute the following:
This is it. The rest is up to you.
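The commands themselves did not make it into the comment above. A plausible
four-command sequence for this setup (an assumption on my part: the boot
record is rewritten on /dev/hda, with /boot on its first partition; repeat
with /dev/hdb for the second mirror) would be:

```
device (hd0) /dev/hda
root (hd0,0)
setup (hd0)
quit
```

The device command forcibly maps the named Linux disk to BIOS drive hd0,
which sidesteps the failing BIOS-drive detection that grub-install trips over.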
Probably these 4 commands (or some better workaround) could be
implemented in anaconda, especially when the user installs Linux on RAID1.
0.95-6 should fix this.
*** Bug 138572 has been marked as a duplicate of this bug. ***
I'm running software RAID, and I had not seen the error on RHL 7.3,
8.0, 9, and, until now, FC3. I picked up a new kernel-smp via yum
the other day, however, and got bitten by it this morning when I
tried to boot my box.
grub version is 0.95-3
Same problem, but a simpler fix:
Boot rescue mode
edit grub.conf and uncomment this line:
It worked for me afterwards.
You will then have the bootloader on only one disk. Repeat for each disk
in the RAID-1 array, or use the manual procedure above.
*** Bug 147411 has been marked as a duplicate of this bug. ***
Also seems to happen with RHEL4
*** Bug 122804 has been marked as a duplicate of this bug. ***
Definitely happens for Red Hat ES 4.0.
The interesting thing is the Anaconda installer. I software-mirrored the boot
partition, and the installer only gave the option of writing the boot record
to the partition holding /boot; it said nothing about writing the MBR.
I stupidly missed the hint and fumbled with the BIOS settings for a long time.
Then I saw the light, un-RAIDed the boot partition and voilà: Anaconda offers
to write the MBR!
This bug should not be "closed". I will open a new bug for Red Hat ES 4.0 and
reference this one.
Peter Jones said that "0.95-6 should fix this", but unfortunately both RHEL 4
and FC3 ship with 0.95-3.1. This problem may well be fixed in 0.95-6, but we
can't tell, since that package doesn't seem to exist.
I agree that this bug shouldn't have been closed until the fix shipped.
*** Bug 153060 has been marked as a duplicate of this bug. ***
Although I have managed to install RHEL 4.0, then bind the /dev/hde1 and
/dev/hdg1 partitions into /dev/md0 RAID 1 onto which /boot is then mounted,
the RAID fails on every boot (I more or less used Boris Mironov's idea, above).
At runtime, bliss:

/dev/hde1 --+
            +---> /dev/md0 --> /boot
/dev/hdg1 --+
On boot, GRUB boots from /dev/hde1, then we find this in /var/log/dmsg:
md: considering hdg1 ...
md: adding hdg1 ...
md: created md0
md: running: <hdg1>
raid1: raid set md0 active with 1 out of 2 mirrors
After which /dev/md0 runs in degraded mode (awww, what a letdown!):

/dev/hdg1 --+
            +---> /dev/md0 --> /boot    (/dev/hde1 dropped out)
mdadm --detail /dev/md0 shows:
Number Major Minor RaidDevice State
0 0 0 -1 removed
1 34 1 1 active sync /dev/hdg1
I then have to re-create the array:
# mdadm /dev/md0 -a /dev/hde1
...So something blows the mirror away. It's not bad blocks; I have checked
with 'badblocks'. And using 'dd' and a little script, the only block
that differs is block 2, which is apparently part of the mountpoint metadata
(not sure). Maybe GRUB decides to tweak that, thus causing the mirror to
fall out of sync.
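The "dd and a little script" check can be sketched as follows (a hypothetical
reconstruction, demonstrated on two throwaway files standing in for
/dev/hde1 and /dev/hdg1):

```shell
# Compare two devices/images block by block and report which blocks differ.
# The demo uses 1-byte "blocks" on scratch files; on the real partitions you
# would set A=/dev/hde1 B=/dev/hdg1 and a block size such as bs=1024.
A=a.img; B=b.img; bs=1
printf 'AABA' > "$A"
printf 'AAAA' > "$B"

blocks=$(( $(wc -c < "$A") / bs ))
diffblocks=""
i=0
while [ "$i" -lt "$blocks" ]; do
    dd if="$A" of=a.blk bs="$bs" skip="$i" count=1 2>/dev/null
    dd if="$B" of=b.blk bs="$bs" skip="$i" count=1 2>/dev/null
    cmp -s a.blk b.blk || diffblocks="$diffblocks $i"
    i=$((i + 1))
done
echo "differing blocks:$diffblocks"   # here: block 2 only
```

Run against the two mirror members with a sensible block size, this is enough
to pinpoint which block GRUB (or anything else) has touched on one side.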
Conclusion: Even if installation of /boot to a RAID would work, it might
not actually work afterwards.
I am using RHEL 4.0 with kernel 2.6.9-667smp, installed on two SATA drives
configured as Software RAID1.
The problem I faced is similar to this one.
I installed the OS properly and it was working fine.
Description of problem:
I was formatting my 750GB hardware RAID partition, and partway through the
system got stuck at the last stage of inode creation (I am unable to recall
the error messages). I did a hardware reset, and after that grub failed to load.
It did not work for me. Message: /dev/sda not found.
I tried the trick by Julien
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=114690#c12. Again grub
failed to load.
Then I tried the trick by Keith McDuffee.
After a reboot, my machine boots properly.
I am seeing the same thing under CentOS-4.1. Not exactly Red Hat - but not
exactly not RHEL either :-)
I have a /dev/md1 RAID-1 partition for "/" and no separate "/boot" partition.
Running "grub-install --no-floppy /dev/md1" gives me the "corresponding BIOS
device" error as above. This is with CentOS-4.1's grub-0.95-3.1.
This is on a MPTFUSION SCSI server with disks on /dev/sda and /dev/sdb - but I'm
sure this is purely a grub-vs-RAID issue, as mdadm is totally happy with the
disks, and I have reproduced the same problem with an IDE-based install too.
If I manually call grub and do:
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
It crashes (segfaults) grub straight after I enter the setup command. I also
waited until mdadm reported that it had synced the two disks before trying
again - no difference.
(In reply to comment #25)
> If I manually call grub and do:
> device (hd0) /dev/sda
> root (hd0,0)
> setup (hd0)
> It crashes (segfault) grub straight after entering the setup command.
Try booting a UP (uniprocessor) kernel and it should work (it worked for me, at least).
I am also receiving this problem with CentOS (RHEL) 4.0 and 4.1.
Two RAID-1 arrays from 4x 80GB Seagate HDDs, partitioned as:
DEVICE         START    END     SIZE  TYPE             MOUNT POINT
VG VolGroup00                152352M  VolGroup
  LV LogVol00                150272M  ext3             /
  LV LogVol01                  1984M  swap
sda1               1     13     101M  ext3             /boot
sda2              14   9724   76277M  physical volume
sdb1               1   9724   76277M  physical volume
The system hangs after installation at the grub prompt (flashing underscore _).
Andy's fix was irrelevant, as mtab and device.map were already pointing to sda.
So, will hell freeze over anytime soon? This bug is still here on RHEL3 U6 with
no update in sight.
Anyone who comes across this DISREGARD my comment #22 above. The mirror would
probably work (I'm currently simply not mirroring /boot) were it not for the
hardware. The machine this is running on just cannot be rebooted, it needs to
be powercycled or there will be problems with the first disk on reboot.
August 16, 2006 - bug still exists, exactly as described above. New 64-bit
install (for the millionth try): one IDE drive, and 2x SATA2 disks in RAID1.
Trying to get the computer to boot after the install just stalls, and crashes
at a GRUB message which just says "GRUB". Did every fix that can be found in
the first 200 results on Google, to no avail.
Kind of useless having /boot on RAID1 if the OS won't boot. Equally useless
is being forced to put /boot on a single drive when trying to avoid single
points of failure on the server.
SUSE 10 64-bit installs easily in 10 minutes, and boots right up. Looks like
that's what the servers will be running.
Just cross-referencing this; I guess it's best to post to still-open bugs ;-)
Hell hath still not frozen; pigs still taxiing on the runway. Tower control, do you copy?
Bug #191449 NEW install grub incorrect when /boot is a RAID1 device
Bug #170575 NEW grub fails on one of two sata disks of raid1 set during i...
Bug #163460 NEW Installation failed on RAID setup (GRUB error 15 and fail...
Bug #160563 NEW "grub-install /dev/md0" does not install on second disk i...
>> Bug #114690 CLOSED/RAWHIDE grub-install won't install to RAID 1