Red Hat Bugzilla – Bug 236452
bootrecord always written to MBR, when device is using the 'disk#p#' partition naming scheme
Last modified: 2010-10-22 10:22:49 EDT
+++ This bug was initially created as a clone of Bug #225551 +++
Description of problem:
Even when "write bootrecord to boot partition" is selected during installation,
the existing MBR is overwritten.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Install RHEL5 on a system with SATA Raid.
2. Install a second RHEL5 installation on the same system and select
write bootrecord to bootsection.
Actual results: the second installation is booted.
Expected results: the first (Fedora) installation boots.
To get access to the 2nd installation, you should only need to insert 3 lines in
/boot/grub/grub.conf of the 1st installation:
title Fedora (2nd Installation)
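Only the first of the three lines is quoted above. A typical GRUB legacy chainload stanza of that shape is sketched below; the (hd0,1) partition is an assumption, adjust it to wherever the 2nd installation's boot loader actually lives:

```
title Fedora (2nd Installation)
	rootnoverify (hd0,1)
	chainloader +1
```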
Similar problems occur on openSUSE 10.2.
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release. Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products. This request is not yet committed for inclusion in an Update release.
Fixed in booty-0.80.4-6.
Can you clarify the exact steps to reproduce here?
In particular, looking at the original BZ, the reporter seems to use both
partitions and discs in his comments. How many discs and partitions do we have
here, and what is installed where?
In dmraid, you have multiple disks in a RAID set, which the installer treats as
one normal disk (with a strange path). On the combined RAID, install as
normal, i.e. a /boot and an LVM PV with / in a Logical Volume, and ask the
installer to install the bootloader on the first partition of the RAID. That
is, install it to /boot on /dev/mapper/isw_Volume_0p1, not to the MBR on
/dev/mapper/isw_Volume_0.
Steps to reproduce with hardware RAID (disc cciss/c0d0 - Compaq Smart Array):
1) Install a system with 4 partitions: /boot, /data, / and swap
cciss/c0d0p1 - /boot
cciss/c0d0p2 - /data
cciss/c0d0p3 - /
cciss/c0d0p4 - swap
2) Install GRUB on MBR [/dev/cciss/c0d0 Master Boot Record (MBR)]
3) Complete the install and boot into the system. Edit /boot/grub/grub.conf as
if there was another OS on c0d0p2 (/data). We'd like to chainload this OS later.
Also I have an entry for Anaconda to boot into the install (not relevant for
this bug):
# grub.conf generated by anaconda
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/cciss/c0d0p3
# initrd /initrd-version.img
serial --unit=0 --speed=38400
terminal --timeout=5 serial console
title Red Hat Enterprise Linux Server (2.6.18-88.el5)
	kernel /vmlinuz-2.6.18-88.el5 ro root=LABEL=/ console=ttyS0,38400
title Second Installation
	kernel /vmlinuz-anaconda console=ttyS0,38400
4) Perform a second install and select c0d0p2 as / (format if you want). Leave
the other partitions untouched.
cciss/c0d0p1 - untouched (no mount point, was previously /boot)
cciss/c0d0p2 - / (was previously /data)
cciss/c0d0p3 - untouched (no mount point, was previously /)
cciss/c0d0p4 - swap
5) Install GRUB on the first sector of root partition
[/dev/cciss/c0d0p2 First sector of boot partition]
6) Complete install and reboot
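The difference between the install targets in steps 2 and 5 can be sketched from the GRUB legacy shell (device numbers assumed: (hd0) is the whole cciss disk, (hd0,1) is c0d0p2):

```
grub> root (hd0,1)
grub> setup (hd0,1)    # stage1 goes to the first sector of the partition
grub> setup (hd0)      # by contrast: stage1 goes to the MBR of the disk
```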
* with RHEL 5.1:
The bootloader of the second install gets installed on the MBR, overwriting the
previous one. Hence the 2nd OS is loaded. The customizations are lost (actually
they are still in the c0d0p1 partition and if you mount it you can see them,
but this instance of GRUB is not loaded).
* with RHEL 5.2 snap #4:
The MBR is not overwritten. GRUB from the 1st install is loaded and all
modifications can be seen from the grub menu. The boot loader of the 2nd OS is
installed on the first sector of its root partition (where we told it to
install). Chainloading works as expected. Booting into the 1st OS also works as
expected.
Did the system in comment #11 have a dmraid-capable controller? Most systems
with CCISS don't, and AFAIK that is a requirement to reproduce this bug.
Created attachment 301538 [details]
The patch which was applied
This is the patch applied to fix the problem; it should affect any device using
the 'disk#p#' partition naming scheme.
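The 'disk#p#' convention can be illustrated as follows: devices whose whole-disk name ends in a digit (cciss, dmraid, multipath) insert a 'p' separator before the partition number, while plain sd/hd disks just append it. A minimal sketch of the rule (not the actual booty code):

```python
def partition_name(disk, num):
    """Return the partition device name for whole-disk `disk` and
    partition number `num`.

    If the disk name ends in a digit (e.g. cciss/c0d0 or a dmraid
    /dev/mapper node), the 'disk#p#' scheme applies and a 'p' is
    inserted; otherwise the number is appended directly.
    """
    sep = "p" if disk[-1].isdigit() else ""
    return "%s%s%d" % (disk, sep, num)

print(partition_name("/dev/sda", 7))                  # /dev/sda7
print(partition_name("/dev/cciss/c0d0", 1))           # /dev/cciss/c0d0p1
print(partition_name("/dev/mapper/isw_Volume_0", 1))  # /dev/mapper/isw_Volume_0p1
```

This is why a test on CCISS was sufficient: it exercises the same naming branch as dmraid.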
breeves: test was correct but only because CCISS uses the same naming scheme
nstraz: so an install to an aoe or mpath device would exercise the code too
The test in comment #11 turned out to be enough. No need to test with dmraid.
Yes, I have access to the RHEL 5.2 snapshots.
Please also read bug 225548 (which concerns Fedora).
I will do additional tests; nevertheless I found it very hard to deal with
dmraid - you get one item fixed and then you find the next problem.
You should also have a look at 164550 in Issue Tracker; there I saw that
my problem with 2 dmraids seems to be unique to FastTrak.
I could install a RHEL 5.2 Snapshot3 on a Primergy TX200-S3 (ESB2) on 2 RAID1
sets, but after reboot I only saw the SATA disks (/dev/sda ... /dev/sdd)
instead of the dmraid devices.
The root cause of the problem was a bug in the code which affected devices
that use the 'disk#p#' partition naming scheme; dmraid is one such case.
If you have the time you can retest this issue with dmraid setup although we
believe the test in comment #11 was enough to verify the fix of this issue.
I have verified that a bootrecord is written to the requested partition, but
on my booted system I see /dev/sda7 instead of /dev/mapper/ddf1_.......p7 !
On the same system I installed Fedora 8, and there everything looks good.
But Fedora 9 does not install at all!
P.S.: I know this calls for a new Bugzilla entry, but please send me some
advice so that I can prepare as much information as possible.
If your disk was named /dev/sda7 then the code path that got fixed was not
executed (see comments #16 and #17).
Please open another bug with the following information:
* Version of RHEL/Fedora and your hardware setup where you saw differences in
disk naming between Fedora and RHEL. If you've been trying to use dmraid on RHEL
it may have silently failed without you noticing that.
* If you're trying to test the same environment with Fedora 9 then ensure you
have a tree that is installable in a default environment (e.g. lvm). If this is
the case then file a bug against Rawhide with the information from the failure.
You'd probably want to post to fedora-test-list _at_ redhat.com to seek more
help.
Please do not use this issue to track dmraid failures or other Fedora issues.
It was created to track the issue where the boot loader was not installed on
the requested partition but on the MBR.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.