Bug 153060
| Summary: | "MBR not suitable as boot device" printed, then system hangs at reboot | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 4 | Reporter: | Anchor Systems Managed Hosting <managed> |
| Component: | anaconda | Assignee: | Anaconda Maintenance Team <anaconda-maint-list> |
| Status: | CLOSED DUPLICATE | QA Contact: | Mike McLean <mikem> |
| Severity: | medium | Priority: | medium |
| Version: | 4.0 | Hardware: | i386 |
| OS: | Linux | Doc Type: | Bug Fix |
| Last Closed: | 2005-04-01 16:39:19 UTC | | |
Description
Anchor Systems Managed Hosting
2005-04-01 06:15:03 UTC
Specifically, here is our SCSI RAID1 kickstart fragment:

```
# /
part raid.00 --size 500 --ondisk=sda
part raid.01 --size 500 --ondisk=sdb
# swap
part raid.10 --size 2048 --ondisk=sda
part raid.11 --size 2048 --ondisk=sdb
# /var
part raid.20 --size 2048 --ondisk=sda
part raid.21 --size 2048 --ondisk=sdb
# /usr
part raid.30 --size 2048 --ondisk=sda
part raid.31 --size 2048 --ondisk=sdb
# /data
part raid.40 --size 1 --grow --ondisk=sda
part raid.41 --size 1 --grow --ondisk=sdb

# Assemble the RAID devices.
raid /     --fstype ext3 --level=RAID1 raid.00 raid.01
raid swap  --fstype ext3 --level=RAID1 raid.10 raid.11
raid /var  --fstype ext3 --level=RAID1 raid.20 raid.21
raid /usr  --fstype ext3 --level=RAID1 raid.30 raid.31
raid /data --fstype ext3 --level=RAID1 raid.40 raid.41
```

The workaround for this is to not put /boot on RAID.

*** This bug has been marked as a duplicate of 114690 ***

FWIW, this is a kickstart configuration that successfully installed GRUB to both disks in the RAID set in each of RH 7.3, 8.0, 9, EL3, FC1, and FC2. There must have been a regression introduced in the development of FC3 and EL4.

grub-install hasn't *ever* supported doing this, until very recently in the FC4 tree.

Supported or not, the kickstart configuration described above used to result in a bootable operating system. I've successfully installed grub on the MBR of servers where both / and /boot are RAID1 partitions dozens of times using WS 3. Logically, the choice of whether to install grub on the MBR or the boot partition should not depend on what the partition type is.
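As a minimal sketch of the "don't put /boot on RAID" workaround mentioned above, a kickstart fragment along these lines keeps /boot as a plain partition on the first disk while the other filesystems stay on RAID1. The 100 MB size and the single-disk placement are illustrative assumptions, not part of the original report:

```
# Hypothetical workaround fragment: /boot lives directly on sda,
# outside the RAID set, so the boot loader has a plain partition
# to install to. Everything else mirrors as in the original config.
part /boot --fstype ext3 --size 100 --ondisk=sda
part raid.00 --size 500 --ondisk=sda
part raid.01 --size 500 --ondisk=sdb
raid / --fstype ext3 --level=RAID1 raid.00 raid.01
```

The trade-off is that the machine can no longer boot if sda fails, which is presumably why the reporter wants /boot on RAID1 in the first place.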
I think the following short patch would suffice to fix the problem in
anaconda-10. It will take me a while to get around to testing it though, and it
may not be appropriate for non-Intel machines.
```diff
--- fsset.py.raid-mbr	2004-12-14 13:25:04.000000000 -0800
+++ fsset.py	2005-04-21 12:20:23.188956840 -0700
@@ -1222,6 +1222,7 @@
     if bootDev.getName() == "RAIDDevice":
         ret['boot'] = (bootDev.device, N_("RAID Device"))
+        ret['mbr'] = (bl.drivelist[0], N_("Master Boot Record (MBR)"))
         return ret
     if iutil.getPPCMacGen() == "NewWorld":
```
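To make the effect of the one-line patch concrete, here is a minimal Python sketch of the dictionary the patched code path returns when /boot lives on a RAID device. This is not anaconda source; the function name and arguments are illustrative stand-ins:

```python
# Minimal sketch of the patched code path in anaconda's fsset.py.
# Names and arguments are hypothetical, not the real anaconda API.

def bootloader_choices_sketch(boot_dev_is_raid, raid_device="md0",
                              drivelist=("sda", "sdb")):
    """Return the boot loader install targets offered to the user."""
    choices = {}
    if boot_dev_is_raid:
        # Stock anaconda-10 offered only the RAID device itself ...
        choices["boot"] = (raid_device, "RAID Device")
        # ... and the patch adds the MBR of the first drive as well:
        choices["mbr"] = (drivelist[0], "Master Boot Record (MBR)")
    return choices

print(bootloader_choices_sketch(True))
# {'boot': ('md0', 'RAID Device'), 'mbr': ('sda', 'Master Boot Record (MBR)')}
```

Offering both entries is what makes /dev/sda and /dev/md0 appear as choices in the installer, matching the test result described next.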
I've just now tested the patch, and it works just as expected. The default location for installing the boot loader was /dev/sda, and on the Advanced Boot Loader options screen it gave me the choice to install it on either /dev/sda or /dev/md0.

Oh, the patch is applied to anaconda-10.1.1.13-1.src.rpm from WS 4, by the way. (Forgot to include the TLD in the diff.)