From Bugzilla Helper:
User-Agent: Mozilla/4.77 [en] (Win95; U)

Description of problem:
Two SCSI drives, /dev/sda and /dev/sdb. RAID-1 is used to make mirrored / and /boot. LILO handles this configuration with root=/dev/md0.

I tried grub-install with each of /dev/md0, /dev/md1, /dev/sda, and /dev/sdb. Each reported that /dev/md? does not have any corresponding BIOS drive, and failed to overwrite the MBR. A device.map containing fd0, sda, and sdb was built, but md0 and md1 were not mentioned.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. See description above

Additional info:
Curiously, a RH 7.2 install from scratch handles this configuration. Perhaps it installs to /dev/sda before the RAID is formed.
Yes, we added support in the installer to know how to "do the right thing", but I didn't get that support into grub-install. I will try to do so prior to the next release.
It would be very nice if the install program installed GRUB onto both disks in a RAID-1 configuration (if / is RAIDed). At the moment it is very hard to get Linux to boot off the second disk if the first one fails (partially because grub-install doesn't work properly from rescue mode). Red Hat should have a much better way of putting GRUB back. It's very easy to re-install LILO, but grub-install doesn't work most of the time.
In the Skipjack beta we still get "/dev/md0 does not have any corresponding BIOS drive" when running grub-install on a RAID set. We get the same message if we try to grub-install to /dev/hda or /dev/hdc, which are part of the RAID set. Does the installer now try to install GRUB to all drives in a RAID set that the root partition is on?
> Yes, we added support in the installer to know how to "do the right thing", but
> I didn't get that support into grub-install. Will try to do prior to the next
> release

At least on two of my machines the Red Hat 7.2 installer was not able to "do the right thing", and I was left with unbootable systems. This was soft RAID-1 too. Usually it now works via the installer, but can you tell us how to do this after installation? I upgraded a server from 7.1 to 7.2 yesterday and wanted to boot via GRUB, but the installer didn't install GRUB because it didn't update the kernel (the latest was already installed). Can I install GRUB without grub-install somehow?
*** Bug 56271 has been marked as a duplicate of this bug. ***
*** Bug 79379 has been marked as a duplicate of this bug. ***
*** Bug 80723 has been marked as a duplicate of this bug. ***
I'm seeing this with the latest Rawhide GRUB on a RH 8.0 machine (Sun LX-50, which only works with the Rawhide GRUB) as well. grub-install fails for RAID-1 devices, even though I can install GRUB interactively using the grub command.
I am seeing this on Redhat 8 also. This issue has been open for 2 years?!?
I was incurring the same problems, but was finally able to get GRUB to install by doing the following:

# grub --batch
Probing devices to guess BIOS drives. This may take a long time.

    GRUB  version 0.93  (640K lower / 3072K upper memory)

 [ Minimal BASH-like line editing is supported.  For the first word, TAB
   lists possible command completions.  Anywhere else TAB lists the possible
   completions of a device/filename. ]

grub> root (hd0,0)
root (hd0,0)
 Filesystem type is ext2fs, partition type 0xfd

grub> setup (hd0)
setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"... 16 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.

Contents of /boot/grub/device.map:
(fd0)   /dev/fd0
(hd0)   /dev/hda

df listing:
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md1              10080428   1361604   8206760  15% /
/dev/md0               2063440     38036   1920588   2% /boot
/dev/md2              62712132     35536  59490916   1% /local
none                    111476         0    111476   0% /dev/shm

My thoughts are that this bug may not get much attention being listed as a "7.2" bug, when in fact it's still happening with even the latest Rawhide build of grub (0.93-4). It is, after all, just a shell script.
Hi, it seems to me that GRUB should be able to write to the MBR on both disks in a RAID-1 environment... or did I miss something? I set up RAID-1 fine, but if I pull the primary disk it won't boot off the secondary. So this must be the MBR having nothing to go on, right?
GRUB is a boot loader; md0, md1, ... are devices from the md device driver and exist only after the kernel loads. They are not BIOS devices. GRUB can be installed either before creating the RAID arrays, or afterwards with this procedure (done with a RAID-1 config):

1) Create the RAID arrays (mdadm is best for this); see:
   http://www.parisc-linux.org/faq/raidboot-howto.html
2) Create a GRUB boot floppy; an example of boot commands from menu.lst is:

   kernel /boot/vmlinuz-2.4.20 md=0,/dev/sda6,/dev/sdb6 md=1,/dev/sda7,/dev/sdb7 md=2,/dev/sda8,/dev/sdb8 md=3,/dev/sda5,/dev/sdb5 ether=0,0,eth1 root=/dev/md0 ro

   (here I have 4 arrays and 2 Ethernet interfaces, eth0 loaded automatically; notice root is /dev/md0 for me)
3) For each disk (i.e. separately, having all but one disk removed):
   3.1) Edit fstab to mount the hda0-n/sda0-n devices and comment out the current md0-n file systems, then reboot with the floppy and run 'grub-install --root-directory=/boot /dev/sda'
   3.2) Copy the menu.lst file from the working floppy to /boot/grub
   3.3) Reboot and check that GRUB works
LILO does this correctly and simply, by determining the mirrored devices and writing to both of them. The Red Hat-recommended upgrade from LILO to GRUB breaks this. Confusingly, Red Hat had managed to make initial installs work with GRUB and RAID-1, but subsequent updates failed.

We no longer install GRUB on any of the 20-30 systems we maintain. Install LILO, uninstall GRUB, and in /etc/lilo.conf, instead of a line like

  boot=/dev/hda

use a line like

  boot=/dev/md0
I wanted to see if there was a solution for RH 9.0. I haven't had the opportunity to use 9.0 with a Raid1 set, but if it works, I would be willing to try it. Also, I wondered if there were plans to retrofit the fix all the way back to the 7.x RH releases? Thanks, Carter
I just installed Red Hat 9.0 and am experiencing the same problem as previously described. The version of this bug should be updated to 9.0 from 7.2. My system configuration is as follows:

- hda with extended and logical partitions on the motherboard IDE controller
- hde1 and hdg1 on a Promise PCI controller
- md0: software RAID-1 of hde1 and hdg1, with hde2 and hdg2 as swap partitions

Red Hat 9.0 boots off the GRUB floppy but does not boot off the hard drive. I attempted to use grub-install /dev/hda to install GRUB onto hda, rather than onto md0 as I specified during the install process, so as to make the boot floppy unnecessary.

grub-install /dev/hda yields:
  /dev/hda does not have any corresponding BIOS drive.

grub-install --recheck /dev/hda yields:
  Probing devices to guess BIOS drives. This may take a long time.
  /dev/md0 does not have any corresponding BIOS drive.

grub-install --recheck /dev/md0 yields:
  Probing devices to guess BIOS drives. This may take a long time.
  /dev/md0 does not have any corresponding BIOS drive.

Any suggestions? Thanks! Jeff
This is biting me with a Red Hat 8 install that was working fine until one of the disks developed way too many bad sectors. I have failed over and am using the spare disk in a RAID-1 setup, but I have to keep the broken drive in the machine so the damn thing can boot! Can Red Hat please just fix the grub-install tool for all versions? You can get this to work at install time, so why not the rest of the time? Two years to fix a bug is quite a long time, especially one as serious as this. It basically renders RAID-1 completely useless and, worse than that, gives the illusion of protection that is not really there.
I'm using Red Hat 9 with a RAID-1 configuration. The primary drive went bad and I moved my second drive to the primary controller. After booting from the install CDs, going into linux rescue, and mounting the new primary drive's partitions, I attempted to run grub-install /dev/hda. I received the '/dev/md1 does not have any corresponding BIOS drive' error. After reading this thread I performed the instructions given by gudlyf and was able to get GRUB into the MBR. My system then booted.
I have the same problem on a FULLY UPDATED RH 9 server with RAID 1: "no corresponding BIOS drive". The server did not boot from the hard drive (MBR failure?) but does boot from floppy. I tried re-installing GRUB but then got this strange error. I tried searching the web for an answer, but there do not seem to be any good ones out there. Please fix this long-standing bug!!!
I have customers seeing this behaviour. I really think this is a must-fix.
I cannot believe they are leaving this unfixed after all this time. Yes, RH 9 (currently the latest) is still broken in this respect. My simple setup of /dev/hda & /dev/hdb was not even handled by the install script properly, which they claim to have fixed many moons ago.
http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/014331.html has another set of instructions (that looks nice, simple, and sane)
*** Bug 107665 has been marked as a duplicate of this bug. ***
I experienced this bug on RHEL 3. Oddly, I have two different servers that I use serial consoles on, and one did not experience this problem but the other did. I was actually experiencing bug #79379, but since that has been marked as a duplicate of this one, I figured I should respond here.

In my case the system hung at:

  GRUB loading, please wait...
  Press any key to continue
  Press any key to continue
  Press any key to continue

Since my systems are meant to be colocated, I can't "press any key" every time I reboot. After many hours, I came up with a workaround. Basically, change the line which looks like:

  terminal timeout=10 serial console

to:

  terminal timeout=10 console serial

I think there is a bug in the way GRUB communicates via the serial interface, but I cannot pinpoint the problem. This workaround solves it by going to the "normal" console if no one is connected via the serial connection, so the system boots normally. If you are connected via the serial connection, you will have 10 seconds to "press any key" to get the boot options. Hope this saves someone some time.
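For reference, the reordering above amounts to a small change in GRUB's config. A minimal illustrative fragment follows; the serial unit, speed, and file path are examples, not taken from the reporter's actual setup (and some GRUB versions spell the option as --timeout=10):

```
# /boot/grub/grub.conf -- serial console setup (illustrative values).
# "console" listed first means an unattended box falls through to the
# attached console instead of waiting for a serial keypress.
serial --unit=0 --speed=9600
terminal timeout=10 console serial
```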
Created attachment 96138 [details]
grub-install patch to allow "grub-install /dev/hd?" to work for RAID arrays

This patch allows you to use something like "grub-install /dev/hda" when using RAID arrays, with /dev/hda as the boot drive. It does this by examining /etc/raidtab to map the RAID array with the contents of /boot on it to the first partition that makes up that array. In my case /boot is a mirror array, /dev/md0. The first partition that is part of /dev/md0 is /dev/hda1. So instead of using /dev/md0 when invoking the grub command, it translates it like "/dev/md0 -> /dev/hda1 -> (hd0,0)". Please test this patch and report any bugs.
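The mapping the patch describes can be sketched in a few lines of shell. This is an illustrative sketch, not the attached patch itself: the function name and the sample raidtab layout are assumptions, and the real grub-install does considerably more validation.

```shell
#!/bin/sh
# Sketch: map an md device to the first component partition listed for it
# in /etc/raidtab, i.e. the "/dev/md0 -> /dev/hda1" step described above.
# map_md_to_partition is a hypothetical helper name, not part of grub-install.
map_md_to_partition() {
    raidtab=$1   # path to a raidtab-format file
    mddev=$2     # e.g. /dev/md0
    awk -v md="$mddev" '
        # Track whether we are inside the stanza for the requested array.
        $1 == "raiddev" { in_stanza = ($2 == md) }
        # The first component device of that stanza wins.
        in_stanza && $1 == "device" { print $2; exit }
    ' "$raidtab"
}
```

With a raidtab declaring /dev/md0 over /dev/hda1 and /dev/hde1, this prints /dev/hda1; grub-install would then translate that to (hd0,0) via device.map as usual.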
So, from a quick look: realistically, you want to go further than just the first partition in the mirror, and want to be able to run `/sbin/grub-install /dev/md0` and have it install on all components of the array. Also, depending on /etc/raidtab is probably not the best plan moving forward. Using mdadm is likely to be more robust, as /etc/raidtab isn't guaranteed to be correct due to the autostarting of RAID arrays, but I'm not against /etc/raidtab for the first go-around :)
Actually, the current patch uses grub-install /dev/hda (i.e. the device you actually want to write to, instead of the one you think you might want). I don't think grub-install /dev/md0 would work with my patch. This has the downside of not writing to both mirror drives. But then in my setup there is no /dev/hde (the second drive in the mirror) mentioned in device.map, and when a drive dies you could make the survivor hda, replace the failed drive and make it /dev/hde, then boot off the rescue CD and run grub-install /dev/hda. That would put you back where you were before the failure.
Right, which is a good start, but you really want to have the boot record mirrored across the drives so that you don't have to use rescue mode: you can just boot into your system, hot-swap in your new drive, and have the array reconstructed and the boot record rewritten there. The mailing list post in comment #21 gives a pretty good description of what the ideal would be.
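Until grub-install learns to do this itself, the ideal described above can be approximated by hand from the grub shell: remap (hd0) to each mirror half in turn, so the stage1 written to the second disk also refers to itself as the first BIOS drive. An illustrative session for a two-disk IDE mirror with /boot on the first partition (the device names /dev/hda and /dev/hdc are examples; adjust to your layout):

```
grub> device (hd0) /dev/hda
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/hdc
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```

The second `device` line is the trick: it tells GRUB to treat /dev/hdc as (hd0), so if /dev/hda dies, the surviving disk boots as the BIOS's first drive without any reinstallation.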
Hard to use RHEL 3 because of the same bug. Trying to draw attention to this bug.
As this is a long-standing bug against a no-longer-supported release, I filed a new bug report for Red Hat Enterprise Linux 3 and closed this bug as a duplicate.

*** This bug has been marked as a duplicate of 114690 ***
Changed to 'CLOSED' state since 'RESOLVED' has been deprecated.
August 16, 2006: the bug still exists, exactly as described above. New 64-bit install (for the millionth try): an IDE drive plus two SATA2 disks in RAID-1. Trying to get the computer to boot after the install just stalls and crashes at a GRUB message which says only "GRUB". I did every fix that can be found in the first 200 results on Google, to no avail. It's kind of useless having /boot on RAID-1 if the OS won't boot. Equally useless is being forced to put /boot on a single drive when trying to avoid single points of failure on the server. SUSE 10 64-bit installs easily in 10 minutes and boots right up. Looks like that's what the servers will be running.