Description of problem:
Well, this is a bug, or might not be. If this was a fix for GRUB on mirrored drives, the old bootloader should be zeroed out of the MBR too, and some warning about what is going on should be logged or displayed (or the user prompted, or an option added to ks.cfg, or whatever). Anyhow, read ahead:

I've just noticed that in some cases Anaconda will install the bootloader onto a partition instead of the MBR, even if I have "bootloader --location=mbr" in the ks.cfg file (which should be the default anyhow). I've tested it several times, with brand-new drives and with drives that already contained some data. If the installation was onto two drives using software mirroring, it installs the bootloader onto the first partition of the first drive. If the installation was on a single drive (no mirroring), it installs the bootloader into the MBR as expected.

The problem is, if there was an old boot loader present in the MBR, that's what the BIOS will use, and the system will be unbootable. If I look at the /root/anaconda-ks.cfg file after installation, "bootloader --location=mbr" is mysteriously changed to "bootloader --location=partition".

I guess a possible workaround would be to instruct Anaconda to zero the MBR before partitioning the drives. The boot loader would still end up on the first partition, but at least the system would boot. Other workarounds would be to install the boot loader into the MBR from rescue mode, or to install onto a single drive and create the mirrors later.

Version-Release number of selected component (if applicable):
anaconda-10.1.1.46-1

How reproducible:
Always

Steps to Reproduce:
1. Install system with software mirroring

Actual results:
Installs boot loader onto first partition of first drive

Expected results:
Should install bootloader into MBR as instructed

Additional info:
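For what it's worth, the "zero the MBR before partitioning" workaround can be scripted from the kickstart %pre section. This is only a sketch of the idea, not a tested recipe: the drive names /dev/sda and /dev/sdb are assumptions for a typical two-disk mirror, and in RHEL4-era kickstart the %pre script simply runs until the next section begins (there is no %end).

```shell
%pre
# Sketch: clear any stale boot loader from both disks before partitioning.
# Drive names are assumptions -- adjust for your hardware.
# Only the first 440 bytes (the boot-code area) are zeroed; the partition
# table bytes at the end of the sector are left alone.
for disk in /dev/sda /dev/sdb; do
    dd if=/dev/zero of=$disk bs=440 count=1
done
```

With the boot-code area blank, the BIOS has nothing stale to fall back on, so even if Anaconda writes GRUB to the first partition the old loader can no longer shadow it.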
requested by James Antill
The version of grub we ship in RHEL4 does not support RAID1 (mirroring). So if the root partition is on RAID1 we install the bootloader to the first drive instead of the MBR. There is a simple workaround -- if you use RAID1, make sure you clear any preexisting bootloader from the MBR. It's not ideal, but given the current higher-priority workload and where we are in RHEL4's lifecycle, we're not going to pursue a fix for RHEL4. So I'd like to close this with a release note.
will add to RHEL4.6 release notes. thanks!
Closing - this is too big a change for RHEL4, and we're release noting the workaround as in comment 3
adding to RHEL4.7 release notes (under Installation-Related Notes): <quote> The Red Hat Enterprise Linux 4 version of GRUB does not support software mirroring (RAID1). As such, if you install Red Hat Enterprise Linux 4 on a RAID1 partition, the bootloader will be installed in the first hard drive instead of the master boot record (MBR). This will render the system unbootable. If you wish to install Red Hat Enterprise Linux 4 on a RAID1 partition, you should clear any preexisting bootloader from the MBR first. </quote> Denis, please advise if any further revisions are required. I will be posting this as a kbase as well upon verification to help visibility. thanks!
Hi, the RHEL4.7 release notes deadline is on June 17, 2008 (Tuesday). They will undergo a final proofread before being dropped to translation, at which point no further additions or revisions will be entertained.

A mockup of the RHEL4.7 release notes can be viewed here: http://intranet.corp.redhat.com/ic/intranet/RHEL4u7relnotesmockup.html

Please use the aforementioned link to verify whether your bugzilla is already in the release notes (if it needs to be). Each item in the release notes contains a link to its original bug; as such, you can search the release notes by bug number. Cheers, Don
Hi. Having just dealt with this problem and not having read the release notes, I wanted to add my 2 cents' worth...

First, the documentation as included in the release notes isn't very clear at all. It says:

<quote> The Red Hat Enterprise Linux 4 version of GRUB does not support software mirroring (RAID1). As such, if you install Red Hat Enterprise Linux 4 on a RAID1 partition, the bootloader will be installed in the first hard drive instead of the master boot record (MBR). This will render the system unbootable. If you wish to install Red Hat Enterprise Linux 4 on a RAID1 partition, you should clear any preexisting bootloader from the MBR first. </quote>

This statement is confusing to a person who might be installing for the first time and has nothing written INTO the MBR! While the bug report started out that way, the wording that made it into the release notes implies that you already have something in the MBR, which is not always going to be the case. The first paragraph shouldn't end in "This will render the system unbootable": having the boot loader installed to the first partition on the first hard disk will not by itself render the system unbootable. I recommend adding something to the effect of "if the MBR contains a pre-existing boot loader from a previous installation".

That being said, the documentation also doesn't point out that by writing GRUB to just /dev/sda1, one of these three scenarios can occur (I've seen all three):

1) The boot loader is written to /dev/sda1 during kickstart and never makes it to /dev/sdb1. Is it supposed to? If yes, great; but if no, what happens when /dev/sda1 fails and /dev/sdb1 takes over? No boot sector.

2) The boot loader is written to /dev/sda1 and eventually ends up on /dev/sdb1. This has happened for me, but not all the time, and I can't seem to figure out when it happens versus when it doesn't.

3) The boot loader is written to /dev/sda1 and is then later CLEARED.
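A quick way to tell which of the above scenarios a machine is in is to look at the boot-code area of each drive's first sector. The helper below is my own sketch (the function name and paths are made up); it reports whether the first 440 bytes of a device or image are blank or contain code, and it works on a plain file too, which makes it easy to try safely.

```shell
# Sketch: report whether the MBR boot-code area of $1 is blank.
# $1 may be a device (e.g. /dev/sda) or an image file for testing.
mbr_state() {
    # Read the first 440 bytes (boot code only, not the partition table),
    # dump them as hex, and look for any non-zero digit.
    if dd if="$1" bs=440 count=1 2>/dev/null | od -An -tx1 | grep -q '[1-9a-f]'; then
        echo "boot code present"
    else
        echo "boot code area blank"
    fi
}
```

Running `mbr_state /dev/sda` and `mbr_state /dev/sdb` after a kickstart (and again after the first reboot) would show whether the loader ever reached sdb1, and whether it later got cleared from sda1.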
I've done a LOT of testing on this scenario, which is definitely repeatable. After the system is fully kickstarted, with /dev/sda1 containing the boot loader while /dev/sdb1 does not, if I issue a "reboot -f", I see just before the boot: "md: md0 is still in use". The system then boots perfectly fine, but during boot it somehow syncs /dev/sdb1 to /dev/sda1 (dirty cache?), which clears the boot loader data on /dev/sda1 and thus renders the system unbootable ON THE NEXT BOOT. This, of course, is pretty serious, because the system is now completely unbootable on the NEXT boot, which might not happen until much later...

I chose a different approach. I let Anaconda install to the partition, but then in my kickstart script I scripted the installation of GRUB to BOTH MBRs:

grub --batch <<EOF
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
EOF

Now, nothing the RAID does is going to have any effect on my installation. Of course, if you need to rebuild a disk, you need to re-install the boot sector manually, but using the Red Hat strategy of writing to the partition wouldn't change this.

We aren't using RHEL 5 yet, but even if "bootloader" allowed writing to the MBR for RAID-1 (which it seems 5.4 may?), where would it write to -- the MBR on the first disk? It's not clear that the kickstart "bootloader" directive allows you to state explicitly which disks the boot loader should be installed to. I don't even know how it would figure this out. Any feedback? Jason.
It's the same in RHEL5 (I tried CentOS 5.4). After boot I can run "grub-install /dev/md0" (md0 is my /boot on RAID1), and then I can unplug either drive and it will still boot (this is on SATA drives, not PATA drives). But I haven't found a way to make Anaconda do it right. I tried doing it in %post in the kickstart file, but then it complains that the proper drives aren't there.
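In case it helps, here is a minimal post-first-boot sketch of the same idea as the grub --batch approach in comment 9: install GRUB legacy's stage1 into the MBR of both RAID1 members so that either disk can boot on its own. The device names (/dev/sda, /dev/sdb) and the assumption that /boot is the first partition on each disk are mine; adjust for your layout, and run it from the installed system rather than from %post.

```shell
# Sketch only -- assumes /boot is a RAID1 of /dev/sda1 and /dev/sdb1.
grub --batch <<EOF
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
EOF
```

The "device" command remaps (hd0) to each physical disk in turn before "setup" runs, which is useful because after a disk failure the surviving drive may be presented by the BIOS as the first disk regardless of which port it is on.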